Data Engineer
Listed on 2026-03-08
Software Development
Wipro's dynamic approach to people, process, and technology has led them to be an industry leader for decades. Additionally, since 2006, Wipro has helped companies power their business with the cloud. We provide professional services that help enterprises move faster, rethink processes and change the way their employees work.
We are seeking a Data Engineer with strong hands‑on expertise in Python, PySpark, Apache Airflow, and AWS to design, build, and optimize scalable, cloud‑native data pipelines. The role involves working with large‑scale batch and streaming data, implementing robust ETL frameworks, and ensuring data quality, reliability, and performance across analytics and downstream consumption layers.
Key Responsibilities
- Design, develop, and maintain scalable ETL / ELT pipelines using Python and PySpark on AWS
- Orchestrate batch and incremental workflows using Apache Airflow (DAG design, scheduling, retries, dependencies)
- Build and optimize data pipelines leveraging AWS services such as S3, EC2, Glue, Lambda, RDS, EMR
- Implement data ingestion from multiple structured and semi‑structured sources (RDBMS, APIs, files, streams)
- Optimize PySpark jobs using partitioning, caching, joins, broadcast variables, and performance tuning techniques
- Ensure data quality through validation rules, schema enforcement, error handling, and reconciliation checks
- Implement CI/CD pipelines for data workflows using Git, Jenkins / AWS CodePipeline, and automated testing
- Monitor data pipelines, troubleshoot failures, and resolve performance bottlenecks in production environments
- Collaborate with Data Analysts, BI teams, Data Scientists, and Architects to deliver analytics‑ready datasets
- Follow Agile/Scrum practices, participate in code reviews, and contribute to design and architecture discussions
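The data-quality responsibility above (validation rules, schema enforcement, reconciliation checks) can be sketched in plain Python. The schema, rule, and function names below are illustrative assumptions for this listing, not part of Wipro's actual tooling:

```python
# Illustrative sketch only: a row-level validator plus a source-vs-target
# reconciliation check. EXPECTED_SCHEMA and the column names are hypothetical.

EXPECTED_SCHEMA = {"order_id": int, "amount": float, "country": str}

def validate_row(row: dict) -> list[str]:
    """Return a list of validation errors for one record (empty = valid)."""
    errors = []
    for col, col_type in EXPECTED_SCHEMA.items():
        if col not in row:
            errors.append(f"missing column: {col}")
        elif not isinstance(row[col], col_type):
            errors.append(f"bad type for {col}: {type(row[col]).__name__}")
    if not errors and row["amount"] < 0:
        errors.append("amount must be non-negative")
    return errors

def reconcile(source_rows: list[dict], target_rows: list[dict]) -> dict:
    """Compare row counts and amount totals between source and target layers."""
    src_total = sum(r["amount"] for r in source_rows)
    tgt_total = sum(r["amount"] for r in target_rows)
    return {
        "row_count_match": len(source_rows) == len(target_rows),
        "amount_delta": round(src_total - tgt_total, 2),
    }

good = {"order_id": 1, "amount": 9.99, "country": "GB"}
bad = {"order_id": "x", "amount": -1.0}
print(validate_row(good))   # []
print(validate_row(bad))
print(reconcile([good], [good]))
```

In a real pipeline the same checks would typically run as PySpark column expressions or as Airflow tasks gating promotion to the consumption layer.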
Required Skills & Experience
- Strong programming experience in Python
- Hands‑on expertise with PySpark / Spark SQL
- Proven experience in Apache Airflow for workflow orchestration
- Solid experience with AWS Cloud (S3, EC2, Glue, Lambda, EMR, RDS)
- Strong understanding of ETL / ELT concepts and data pipeline design
- Experience working with large‑scale datasets (batch and/or streaming)
- Proficiency with Git version control and CI/CD pipelines
- Good understanding of data warehousing concepts (fact/dimension, star schema, SCD)
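The slowly changing dimension (SCD) concept in the last requirement can be illustrated with a minimal Type 2 merge in plain Python; the column names (customer_id, city, is_current) and dates are assumptions for the sketch:

```python
# Illustrative Type 2 SCD merge: expire the current version of a changed
# record and append a new one, preserving history. Columns are hypothetical.

def scd2_merge(dim_rows: list[dict], incoming: dict, load_date: str) -> list[dict]:
    """Close out the current version of a changed record and append a new one."""
    result = []
    changed = False
    for row in dim_rows:
        if (row["customer_id"] == incoming["customer_id"]
                and row["is_current"]
                and row["city"] != incoming["city"]):
            # Expire the old version instead of overwriting it (history kept).
            row = {**row, "is_current": False, "end_date": load_date}
            changed = True
        result.append(row)
    is_new = not any(r["customer_id"] == incoming["customer_id"] for r in result)
    if changed or is_new:
        result.append({**incoming, "start_date": load_date,
                       "end_date": None, "is_current": True})
    return result

dim = [{"customer_id": 42, "city": "Leeds", "start_date": "2024-01-01",
        "end_date": None, "is_current": True}]
dim = scd2_merge(dim, {"customer_id": 42, "city": "York"}, "2026-03-08")
print(dim)
```

At scale the same pattern is usually expressed as a PySpark or SQL MERGE against the dimension table rather than row-by-row Python.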
At Wipro we don't just look at your CV. We're more focused on who you are and your potential. We also know that everyone has a life outside work, so we're happy to discuss flexible working. And we'll do everything we can to support you during your application. If you need us to make any adjustments to our recruitment process, speak to our talent acquisition team, who will be happy to support you.
With Wipro you will receive a competitive salary, a generous benefits package, and training & development in areas to help you improve. Wipro is an Equal Employment Opportunity employer and makes all employment and employment-related decisions without regard to a person's race, sex, national origin, ancestry, disability, sexual orientation, or any other status protected by applicable law. Why wait? Apply now to build an amazing career and be part of a brilliant team. We can't wait to hear from you.