Senior Data Lake Engineer
Dallas, Dallas County, Texas, 75215, USA
Listing for: PETADATA
Full Time position, listed on 2026-01-02
Job specializations:
- IT/Tech: Data Engineer, Cloud Computing
Overview
PETADATA is seeking a seasoned Senior Data Lake Engineer with over 15 years of experience in data engineering and a strong focus on building and managing AWS-native Data Lake solutions. The ideal candidate will have deep expertise with AWS Lake Formation, serverless data processing using Lambda and Python, and experience with AI-assisted development tools such as Amazon Q.
Responsibilities
- Data Lake Architecture: Design, build, and optimize scalable, secure data lakes using AWS Lake Formation and best practices for data governance, cataloging, and access control.
- Serverless Development: Build and deploy AWS Lambda functions using Python for real-time data processing, automation, and event-driven workflows.
- ETL/ELT Pipelines: Develop and maintain robust data pipelines using AWS Glue, integrating data from various structured and unstructured sources.
- AI Tools for Development: Leverage AI-powered coding tools (such as Amazon Q, GitHub Copilot, or similar) to increase development speed, code quality, and automation.
- Database Integration: Design and implement integrations between the data lake and DynamoDB, optimizing for performance, scale, and consistency.
- Security & Compliance: Implement fine-grained access control using Lake Formation, IAM policies, encryption, and data masking techniques to meet enterprise and compliance standards (e.g., GDPR, HIPAA).
- Monitoring & Optimization: Implement logging, monitoring, and performance tuning for Glue jobs, Lambda functions, and data workflows.
- Collaboration & Leadership: Collaborate with cross-functional teams, including data science, analytics, DevOps, and product teams; provide mentorship and technical leadership to junior engineers.
Requirements
- Experience: 15+ years in data engineering roles, with 5+ years focused on AWS-native data lake development.
- Cloud Expertise: Deep, hands-on expertise in AWS Lake Formation, Glue, Lambda, and DynamoDB.
- Programming: Proficient in Python, especially for serverless and data processing applications.
- AI Coding Tools: Experience using AI-assisted development tools (e.g., Amazon Q, GitHub Copilot, AWS CodeWhisperer).
- Security: Strong knowledge of data security practices in AWS, including IAM, encryption, and compliance standards.
- Orchestration & Automation: Experience with workflow orchestration tools such as Step Functions, Airflow, or custom AWS-based solutions.
- Soft Skills: Strong communication, problem-solving, and collaboration skills; able to lead discussions on architecture and best practices.
Preferred Qualifications
- AWS Certifications (e.g., AWS Certified Data Analytics, AWS Certified Solutions Architect)
- Experience with Athena, Redshift, or other analytics services in the AWS ecosystem
- Exposure to DevOps practices and tools such as Terraform, CloudFormation, or CDK
- Familiarity with data cataloging and metadata management tools
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Application Notes
Please email your resume to
Note: Candidates are required to attend phone/video calls and in-person interviews. After selection, the candidate will be required to undergo background checks on education and experience.
Position Requirements
10+ years of work experience