Employment Type: Permanent
Role: Data Engineer
Experience: 5–7 yrs
Location: Gurugram
Shift: 2 pm–11 pm
Domain: Cross-domain (Insurance, Investment, Financial Services)
Role Overview
We are seeking a skilled Data Engineer with 5–7 years of experience to design, build, and maintain
scalable data pipelines and data warehouse solutions on AWS, with a strong focus on Amazon
Redshift. The role requires hands-on expertise in SQL and Python, exposure to Snowflake, and the
ability to work across multiple business domains.
The candidate will collaborate closely with data architects, analysts, and business stakeholders to
deliver reliable, high-performance data solutions that support analytics and reporting needs.
Key Responsibilities
• Design, develop, and maintain batch and near-real-time data pipelines using AWS services.
• Build and optimize data ingestion and transformation workflows feeding Amazon Redshift.
• Implement scalable ETL/ELT frameworks using Python and SQL.
• Support data integration from multiple source systems across domains.
• Develop and maintain Redshift data models, tables, and views.
• Optimize Redshift performance using appropriate distribution styles, sort keys, and query tuning.
• Support data validation, reconciliation, and quality checks.
• Work within the AWS ecosystem (S3, Glue, Redshift, Lambda, IAM, etc.).
• Support and integrate with Snowflake for analytics or downstream consumption.
• Ensure security, scalability, and cost efficiency of data solutions.
• Work closely with Data Architects and Data Modelers to implement approved designs.
• Partner with analysts and business teams to understand data requirements.
• Support production issues, root-cause analysis, and continuous improvements.
Required Skills & Experience
• 5–7 years of hands-on experience in data engineering or data warehousing roles.
• Experience working across multiple business domains (cross-domain exposure).
• Strong expertise in Amazon Redshift.
• Advanced SQL skills for complex transformations and analytics.
• Proficiency in Python for ETL, data processing, and automation.
• Working knowledge of Snowflake (data modeling, querying, or integrations).
• Solid understanding of AWS data services.
• Strong understanding of ETL/ELT patterns and data pipeline design.
• Experience with data quality, monitoring, and error handling.
• Familiarity with dimensional and analytical data models.
• Strong problem-solving and analytical skills.
• Good communication and stakeholder collaboration abilities.
• Ability to work independently and in team-based delivery models.
Nice to Have
• Experience with orchestration tools (e.g., Airflow, AWS Step Functions).
• Exposure to CI/CD for data pipelines.
• Prior experience in financial services or regulated environments.
Interested candidates can share their CV at c