• Location: Mumbai
  • Experience: 1 to 3 yrs
  • Technologies / Skills: Advanced SQL, ETL pipeline development and optimization, Data Modelling, Spark, Python, Airflow, Flink, Trino, AKS, Azure, AWS


Job Overview

We are seeking a motivated and technically skilled Data Engineer with 1-3 years of experience to join our team. You will be responsible for designing, building, and optimizing data solutions that support analytics and real-time processing needs. This role offers the opportunity to work on large-scale data engineering projects, initially leveraging Azure for a leading financial institution, with the flexibility to work across other cloud platforms such as AWS or GCP as required. Core areas of focus include advanced SQL, ETL pipelines, data modelling, and distributed computing frameworks such as Spark and Flink.


Responsibilities

  • Working closely with business teams to understand their requirements and develop reporting or analytic solutions.
  • Producing and automating delivery of key metrics and KPIs to the business. In some cases this will focus on making data available; in others, on developing full reports for end users.
  • Managing stakeholder demand and the backlog of reporting requirements.
  • Actively working with users to understand data issues, tracing back data lineage and helping the business put appropriate data cleansing and quality processes in place.
  • Managing secure access to data sets for the business.


Education level: Bachelor’s degree (B.E. / B.Tech) in Computer Science or equivalent from a reputed institute.

Experience: 1 to 3 years of hands-on experience as a Data Engineer. Financial services industry experience or familiarity with the stock exchange/capital markets domain will be an added advantage.


Required Technical Skills

  • Advanced SQL and Python knowledge is a must, along with data modelling experience, hands-on work with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
  • ETL Pipelines: Design, develop, and optimize ETL/ELT pipelines for batch and streaming data using Python, Airflow, Spark, and Flink.
  • Data Modelling: Build efficient data models and optimize advanced SQL queries for analytics and reporting.
  • Cloud Engineering: Implement scalable data solutions on Azure (AKS, Data Lake, Data Factory) and AWS (S3, Glue, Athena).
  • Distributed Computing: Use Spark for large-scale data processing and Flink for real-time streaming.
  • Query Optimisation: Leverage Trino for distributed SQL querying over large datasets.
  • Automation & Orchestration: Manage data workflows and scheduling with Airflow.
  • Scalability: Deploy and manage containerized applications using Docker and Kubernetes.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Good to have: a solid understanding of Git workflows, test-driven development, and CI/CD.


About Oneture Technologies

Founded in 2016, Oneture is a cloud-first, full-service digital solutions company, helping clients harness the power of Digital Technologies and Data to drive transformations and turn ideas into business realities. Our team is full of curious, full-stack, innovative thought leaders who are dedicated to providing outstanding customer experiences and building authentic relationships. We are compelled by our core values to drive transformational results, from Ideas to Reality, for clients of all sizes, geographies, and industries. The Oneture team delivers full-lifecycle solutions, from ideation, project inception, and planning through deployment to ongoing support and maintenance. Our core competencies and technical expertise include cloud-powered Product Engineering, Big Data, and AI/ML. Our deep commitment to value creation for our clients and partners, together with our “Startups-like agility with Enterprises-like maturity” philosophy, has helped us establish long-term relationships with our clients and enabled us to build and manage mission-critical platforms for them.