Inviting applications for the role of Assistant Manager, Data Engineer - Informatica to AWS Glue Migration
Job Summary:
We are seeking an experienced Data Engineer to design and develop ETL jobs that load data from FTP/SFTP locations into Amazon S3 and then transform and load that data from S3 into on-premises SQL Server databases using AWS Glue. The role also involves migrating existing Informatica PowerCenter workflows to AWS Glue while ensuring secure, reliable, and high-quality data processing.
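For illustration only, a minimal AWS Glue (PySpark) job along the lines described above might look like the sketch below. All bucket names, paths, table names, and the Glue connection name are hypothetical placeholders, not references to any existing system.

```python
# Minimal sketch of the pipeline described above: read flat files that an
# SFTP transfer step has already landed in S3, apply a basic quality filter,
# and append into a SQL Server staging table through a pre-configured Glue
# JDBC connection. All names below are hypothetical placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read CSV files from the S3 landing prefix.
source = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-landing-bucket/incoming/"]},
    format="csv",
    format_options={"withHeader": True},
)

# Placeholder data-quality rule: drop rows with a missing primary key.
cleaned = DynamicFrame.fromDF(
    source.toDF().dropna(subset=["id"]), glue_context, "cleaned"
)

# Append into a SQL Server staging table via a Glue catalog connection
# (the connection supplies the JDBC URL and Secrets Manager credentials).
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=cleaned,
    catalog_connection="onprem-sqlserver",
    connection_options={"dbtable": "dbo.staging_orders", "database": "ExampleDB"},
)

job.commit()
```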
Responsibilities:
Plan and migrate existing Informatica workflows to AWS Glue (PySpark/Scala).
Load flat files from FTP/SFTP into Amazon S3 with proper schema, partitioning, and data quality checks.
Build and optimize AWS Glue Jobs, Crawlers, and Data Catalog tables with error handling and retries.
Orchestrate ETL workflows using AWS Glue Workflows, Step Functions, EventBridge, or Glue Triggers.
Connect AWS pipelines securely to on-prem SQL Server using JDBC, VPN/Direct Connect, and Secrets Manager.
Design and optimize ETL to SQL Server, including upserts, bulk loads, and performance tuning (a minimal upsert sketch follows this list).
Implement data quality checks, logging, monitoring, and reconciliation using CloudWatch and Glue metrics.
Optimize jobs for performance and cost efficiency.
Create technical documentation and support UAT, cutover, and hypercare.
Collaborate with security/network teams for IAM, VPC, subnets, security groups, and key management.
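On the upsert point above: AWS Glue's JDBC writer appends rows rather than updating them, so one common pattern is to bulk-load into a staging table and then MERGE into the target inside SQL Server. The sketch below assumes pyodbc and the hypothetical table names from the earlier example; it is one possible approach, not a prescribed implementation.

```python
# Hedged sketch of the staging-then-MERGE upsert pattern: run after the
# Glue job has finished appending into the staging table. Table and column
# names are hypothetical placeholders.
import pyodbc

MERGE_SQL = """
MERGE dbo.orders AS t
USING dbo.staging_orders AS s
    ON t.id = s.id
WHEN MATCHED THEN
    UPDATE SET t.amount = s.amount, t.updated_at = s.updated_at
WHEN NOT MATCHED THEN
    INSERT (id, amount, updated_at)
    VALUES (s.id, s.amount, s.updated_at);
"""

def run_upsert(conn_str: str) -> None:
    # Merge staged rows into the target, then clear the staging table so
    # the next load starts from an empty slate.
    cnxn = pyodbc.connect(conn_str)
    try:
        cursor = cnxn.cursor()
        cursor.execute(MERGE_SQL)
        cursor.execute("TRUNCATE TABLE dbo.staging_orders;")
        cnxn.commit()
    finally:
        cnxn.close()
```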
Qualifications we seek in you!
Minimum Qualifications
Proven experience in ETL / Data Engineering with hands-on experience in AWS Glue (PySpark).
Strong SQL skills with experience working on SQL Server.
Experience migrating Informatica PowerCenter (or similar ETL tools) to AWS Glue.
Bachelor's degree in Computer Science, Engineering, or a related field.
Experience loading data from FTP/SFTP sources into Amazon S3.
Hands-on experience with AWS Glue Jobs, Crawlers, Data Catalog, and CloudWatch.
Experience connecting AWS pipelines to on-premises databases using JDBC and secure credentials.
Good understanding of data validation, error handling, and data modeling concepts.
Strong documentation and communication skills.
Preferred Qualifications / Skills:
Experience with AWS Step Functions, Lambda, or EventBridge.
Exposure to AWS DMS, Athena, or Lake Formation.
Familiarity with CI/CD and Infrastructure as Code tools.
Experience with PySpark performance tuning.
Exposure to data quality or monitoring tools.
Experience working in regulated or compliance-driven environments.