Incedo is a global AI and data transformation specialist that empowers companies to realize sustainable business impact from their digital investments by delivering ROI from AI. As a long-term partner from strategy to execution, we operate at the intersection of business and technology. Our integrated services and platforms are built on the foundation of AI & Data, digital engineering, and operations transformation, bringing together deep domain expertise and full-stack capabilities.
With over 4,000 people across the US, Canada, Latin America, and India, and a large, diverse portfolio of Fortune 500 enterprises and fast-growing clients worldwide, we work across banking & payments, wealth management, telecom, hi-tech, and life sciences.
Role Description:
We are seeking a skilled professional to maintain and support batch jobs in a legacy environment. The role involves managing and monitoring ETL processes, addressing issues, and enhancing existing PL/SQL scripts. The ideal candidate will have strong expertise in ETL, SQL Server, and data warehousing concepts, along with experience in troubleshooting and improving batch job performance.
Key Responsibilities:
Design and implement robust ETL pipelines using AWS Glue, Lambda, and S3.
Monitor and optimize the performance of data workflows and batch processing jobs.
Troubleshoot and resolve issues related to data pipeline failures, inconsistencies, and performance bottlenecks.
Collaborate with cross-functional teams to define data requirements and ensure data quality and accuracy.
Develop and maintain automated solutions for data transformation, migration, and integration tasks.
Implement best practices for data security, data governance, and compliance within AWS environments.
Continuously improve and optimize AWS Glue jobs, Lambda functions, and S3 storage management.
Maintain comprehensive documentation for data pipeline architecture, job schedules, and issue resolution processes.
Required Skills and Experience:
4-10 years of strong experience with data engineering practices.
Experience in AWS services, particularly AWS Glue, Lambda, S3, and other AWS data tools.
Proficiency in SQL, Python, PySpark, NumPy, etc., and experience working with large-scale data sets.
Experience in designing and implementing ETL pipelines in cloud environments.
Expertise in troubleshooting and optimizing data processing workflows.
Familiarity with data warehousing concepts and cloud-native data architecture.
Knowledge of automation and orchestration tools in a cloud-based environment.
Strong problem-solving skills and the ability to debug and improve the performance of data jobs.
Excellent communication skills and the ability to work collaboratively with cross-functional teams.
Good to have: knowledge of dbt and Snowflake.
Preferred Qualifications:
Bachelor’s degree in Computer Science, Information Technology, Data Engineering, or a related field.
Experience with other AWS data services like Redshift, Athena, or Kinesis.
Familiarity with Python or other scripting languages for data engineering tasks.
Experience with containerization and orchestration tools like Docker or Kubernetes.
Location:
Candidates should be based in Hyderabad or Gurgaon.
Position Requirements:
10+ years of work experience