Job Description
We are looking for a hands-on Data Engineer to design, build, and maintain scalable data pipelines and data platforms. You will ingest, transform, and serve data reliably for analytics, reporting, and downstream applications, collaborating closely with backend engineers, analysts, and data scientists. This role is ideal for someone who enjoys building robust data systems, working with large datasets, and writing clean, production-grade code.
Key Responsibilities
Data Pipelines & Development
● Build and maintain reliable ETL/ELT pipelines for batch and near-real-time data processing.
● Ingest data from multiple sources (databases, APIs, event streams, files).
● Transform raw data into clean, analytics-ready datasets.
● Optimize pipelines for performance, scalability, and cost.
Data Storage & Modeling
● Design and manage data models in data warehouses or data lakes.
● Work with SQL and NoSQL databases and modern data warehouses.
● Implement partitioning, indexing, and efficient query patterns.
● Maintain documentation for schemas, pipelines, and transformations.
Cloud & Tooling
● Build data solutions on cloud platforms (AWS preferred).
● Use services such as S3, Redshift, Athena, Glue, EMR, Lambda, Kinesis, or equivalents.
● Work with orchestration tools like Airflow or similar schedulers.
● Use version control, CI/CD, and Infrastructure-as-Code where applicable.
Data Quality & Reliability
● Implement data validation, monitoring, and alerting for pipelines.
● Troubleshoot data issues and ensure pipeline reliability.
● Collaborate with stakeholders to resolve data discrepancies.
What we’re looking for:
● 4 to 6+ years of experience as a Data Engineer / Data Developer.
● Strong programming skills in Python.
● Excellent knowledge of SQL and relational data modeling.
● Experience building ETL/ELT pipelines in production.
● Hands-on experience with cloud-based data platforms (AWS preferred).
● Understanding of data warehousing concepts and best practices.
Nice to have:
● Experience with Spark, Kafka, dbt, or Flink.
● Familiarity with orchestration tools like Airflow.
● Experience with streaming or event-driven data pipelines.
● Exposure to data quality or observability tools.
● Experience working with large-scale or high-volume datasets.
Here are answers to some questions you may have
Where is your office?
Chennai (Velachery)
Work Model
Work from Office – because great stories are built in person!
Do you have an online presence?
Yes – we are @Amura Health on all social media.