Support Engineer, LMAQ-DE
Listed on 2026-03-09
-
IT/Tech
Data Engineer
The Last Mile Org focuses on technology, products, and programs that enable the efficient, safe, and customer-friendly delivery of packages. The DE team within LMAQ builds and maintains the data ecosystem comprising scalable data infrastructure, data pipelines, datasets and tools for Geospatial, Hub, DTO and Safety Orgs. The BIE, Tech, Product and Program teams use this ecosystem to generate reporting, analyses and deep dives that create roadmaps for last mile products and processes.
This data engineering support role focuses on providing on‑call support, troubleshooting and investigating tickets, conducting root‑cause analyses, and improving operational health. The engineer is also responsible for fixing issues, communicating with stakeholders, and proactively monitoring alarms and metrics to ensure the overall health of the supported services.
- Monitor and optimize Amazon Redshift clusters
- Identify long‑running queries and optimize them to maintain cluster performance and a healthy operational state.
- Monitor data pipelines and ETL jobs
- Continuously monitor Glue, Airflow, Lambda, Redshift, Spark, EMR and Kinesis jobs.
- Identify failures, performance degradation or bottlenecks in real time.
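As an illustration of the monitoring work above, the sketch below flags Glue job runs that have failed or been running too long. It assumes boto3 credentials are configured; the job name in the commented usage is a placeholder, and the two-hour runtime threshold is an arbitrary example:

```python
"""Minimal sketch: flag failed or long-stuck AWS Glue job runs."""
from datetime import datetime, timedelta, timezone

# States that count as outright failures in Glue's job-run lifecycle.
FAILURE_STATES = {"FAILED", "ERROR", "TIMEOUT"}

def unhealthy_runs(job_runs, max_runtime=timedelta(hours=2)):
    """Return runs that failed, or are still RUNNING past max_runtime."""
    now = datetime.now(timezone.utc)
    flagged = []
    for run in job_runs:
        state = run["JobRunState"]
        if state in FAILURE_STATES:
            flagged.append(run)
        elif state == "RUNNING" and now - run["StartedOn"] > max_runtime:
            flagged.append(run)
    return flagged

# Usage against the live API (network call; job name is hypothetical):
# import boto3
# glue = boto3.client("glue")
# runs = glue.get_job_runs(JobName="lmaq-de-example-job")["JobRuns"]
# for r in unhealthy_runs(runs):
#     print(r["Id"], r["JobRunState"])
```

Keeping the filtering logic separate from the API call makes it easy to unit-test and to reuse across Glue, EMR, or Airflow run metadata.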
- Diagnose and troubleshoot data pipeline failures
- Diagnose issues in extraction, transformation, loading, schema mismatches and data quality.
- Perform impact analysis and apply immediate fixes.
- Provide continuous support of existing data engineering products, tools, platforms and solutions built by DE, and extend them for new use cases.
- Handle on‑call/incident response
- Own the end‑to‑end on‑call rotation, respond to PagerDuty alerts and restore systems within SLA.
- Work directly with data engineering teams to resolve critical incidents.
- Conduct root‑cause analysis (RCA)
- Perform RCA for every major incident.
- Document findings and propose long‑term preventive solutions.
- Manage data quality and validation
- Validate accuracy, completeness, freshness, lineage and schema consistency.
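A minimal sketch of the validation described above, checking completeness, freshness and schema consistency on a table snapshot. The column name `updated_at` and the 24‑hour freshness window are illustrative assumptions, not taken from any real dataset:

```python
"""Minimal sketch of data-quality checks on a table snapshot."""
from datetime import datetime, timedelta, timezone

def validate_snapshot(rows, expected_columns, freshness=timedelta(hours=24)):
    """Return a list of human-readable data-quality violations.

    rows: list of dicts, each with an `updated_at` datetime (assumed column).
    expected_columns: set of column names the schema should contain.
    """
    problems = []
    if not rows:
        return ["completeness: table is empty"]
    missing = expected_columns - set(rows[0])
    if missing:
        problems.append(f"schema: missing columns {sorted(missing)}")
    newest = max(row["updated_at"] for row in rows)
    if datetime.now(timezone.utc) - newest > freshness:
        problems.append(f"freshness: newest row is {newest.isoformat()}")
    return problems
```

In practice these checks would run against query results from Redshift or Athena and feed an alarm when the returned list is non-empty.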
- Optimize queries and performance
- Optimize inefficient SQL (Athena, Redshift, Presto, Spark).
- Tune warehouse performance, resolve WLM queue issues and reduce compute cost.
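Finding the long‑running queries mentioned above typically starts from Redshift's STV_RECENTS system table, which reports in‑flight query duration in microseconds. A sketch that builds the diagnostic SQL (the connection details in the usage comment are placeholders):

```python
"""Sketch: surface long-running in-flight queries on Redshift."""

def long_running_query_sql(min_seconds=300):
    """SQL for queries that have been running longer than min_seconds.

    STV_RECENTS stores duration in microseconds, hence the conversion.
    """
    return (
        "SELECT pid, user_name, starttime, "
        "duration / 1000000 AS seconds, query\n"
        "FROM stv_recents\n"
        f"WHERE status = 'Running' AND duration > {int(min_seconds) * 1_000_000}\n"
        "ORDER BY duration DESC;"
    )

# Usage (requires a live cluster; driver and host are assumptions):
# import redshift_connector
# conn = redshift_connector.connect(host="example-cluster.abc.redshift.amazonaws.com",
#                                   database="dev", user="...", password="...")
# with conn.cursor() as cur:
#     cur.execute(long_running_query_sql(600))
#     for row in cur.fetchall():
#         print(row)
```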
- Maintain metadata, catalogs and schemas
- Manage Glue catalog, partition refresh, schema evolution, table permissions and lineage.
- Ensure smooth integration between S3, Glue, Athena, Redshift and Lake Formation.
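One routine catalog task above, refreshing partitions after new S3 data lands, can be done by issuing `MSCK REPAIR TABLE` through the Athena API. A minimal sketch; the database, table and bucket names are placeholders, and the name check is a simple illustrative guard:

```python
"""Sketch: refresh Glue/Athena partitions after new S3 data arrives."""
import re

def repair_table_sql(table_name):
    """Build an MSCK REPAIR TABLE statement, rejecting unsafe names."""
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", table_name):
        raise ValueError(f"invalid table name: {table_name!r}")
    return f"MSCK REPAIR TABLE {table_name};"

# Usage (network call; assumes boto3 credentials and placeholder names):
# import boto3
# athena = boto3.client("athena")
# athena.start_query_execution(
#     QueryString=repair_table_sql("deliveries"),
#     QueryExecutionContext={"Database": "lmaq_de"},
#     ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
# )
```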
- Support deployments and release management
- Assist in promoting ETL jobs, model code, and pipeline configurations through CI/CD.
- Validate deployments and perform rollback when necessary.
- Collaborate with BI, product and stakeholders
- Work with BI teams, analysts, PMs and upstream/downstream owners.
- Provide data accessibility support and answer data troubleshooting queries.
- Maintain documentation and SOPs
- Maintain playbooks, runbooks, troubleshooting guides and data dictionaries.
- Ensure knowledge transfer and training for new team members.
- 2+ years of scripting language experience.
- Strong SQL and debugging skills.
- AWS (S3, Glue, EMR, Lambda, Redshift, Athena).
- Strong Python and PySpark skills.
- Understanding of data modeling, ETL and batch/streaming pipelines.
- Experience with version control and CI/CD (Git, CodePipeline).
- Good communication for stakeholder‑facing troubleshooting.
- GenAI skills are a plus, but not mandatory.
- Experience with AWS networking and operating systems.
Amazon is an equal‑opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.