Senior Data Engineer (AWS, Airflow, Python)
Job in Milton Keynes, Buckinghamshire, MK1, England, UK
Listed on 2026-01-30
Listing for: Triad
Full Time position
Job specializations:
- IT/Tech: Data Engineer, Cloud Computing, Data Science Manager, Data Analyst
Job Description
Based at client locations, working remotely, or in our Godalming or Milton Keynes offices.
Salary up to £65k plus company benefits.
About Us
Triad Group Plc is an award-winning digital, data, and solutions consultancy with over 35 years' experience primarily serving the UK public sector and central government. We deliver high-quality solutions that make a real difference to users, citizens, and consumers.
At Triad, collaboration thrives, knowledge is shared, and every voice matters. Our close-knit, supportive culture ensures you're valued from day one. Whether working with cutting-edge technology or shaping strategy for national-scale projects, you'll be trusted, challenged, and empowered to grow.
We nurture learning through communities of practice and encourage creativity, autonomy, and innovation. If you're passionate about solving meaningful problems with smart and passionate people, Triad could be the place for you.
Glassdoor score of 4.7
96% of our staff would recommend Triad to a friend
100% CEO approval
See for yourself some of the work that makes us all so proud:
Helping law enforcement with secure intelligence systems that keep the UK safe
Supporting the UK's national meteorological service in leveraging supercomputers for next-level weather forecasting
Assisting a UK government department responsible for consumer product safety with systems to track unsafe products
Powering systems that help the government monitor and reduce greenhouse gas emissions from commercial transport
Role Summary
Triad is seeking a Senior Data Engineer to play a key role in delivering high-quality data solutions across a range of client assignments, primarily within the UK public sector. You will design, build, and optimise cloud-based data platforms, working closely with multidisciplinary teams to understand data requirements and deliver scalable, reliable, and secure data pipelines.
This role offers the opportunity to shape data architecture, influence technical decisions, and contribute to meaningful, data-driven outcomes.
Key Responsibilities
Design, develop, and maintain scalable data pipelines to extract, transform, and load (ETL) data into cloud-based data platforms, primarily AWS.
Create and manage data models that support efficient storage, retrieval, and analysis of data.
Utilise AWS services such as S3, EC2, Glue, Aurora, Redshift, DynamoDB, and Lambda to architect and maintain cloud data solutions.
Maintain modular, Terraform-based infrastructure as code (IaC) for reliable provisioning of AWS infrastructure.
Develop, optimise, and maintain robust data pipelines using Apache Airflow (see the sketch after this list).
Implement data transformation processes using Python to clean, preprocess, and enrich data for analytical use.
Collaborate with data analysts, data scientists, developers, and other stakeholders to understand and integrate data requirements.
Monitor, optimise, and tune data pipelines to ensure performance, reliability, and scalability.
Identify data quality issues and implement data validation and cleansing processes.
Maintain clear and comprehensive documentation covering data pipelines, models, and best practices.
Work within a continuous integration environment with automated builds, deployments, and testing.
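
To give a flavour of the work, below is a minimal sketch of an Airflow ETL pipeline in Python. It is illustrative only: the DAG id, task logic, and record shapes are invented for this description, and a real pipeline on this stack would read from and write to AWS services such as S3, Redshift, or Aurora.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract(**context):
        # Hypothetical extract step; a real pipeline might list raw
        # objects in S3 via boto3 rather than returning inline records.
        return [{"reading": " 42 "}, {"reading": None}]


    def transform(**context):
        # Pull the extract step's output from XCom and apply basic
        # cleansing: drop invalid records, strip whitespace, coerce types.
        rows = context["ti"].xcom_pull(task_ids="extract")
        return [
            {"reading": int(r["reading"].strip())}
            for r in rows
            if r["reading"] is not None
        ]


    def load(**context):
        rows = context["ti"].xcom_pull(task_ids="transform")
        # Hypothetical load step; a real pipeline might COPY the clean
        # rows into Redshift or write them to Aurora.
        print(f"loading {len(rows)} clean rows")


    with DAG(
        dag_id="example_etl",
        start_date=datetime(2026, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        # Declare the ETL ordering: extract, then transform, then load.
        extract_task >> transform_task >> load_task

In practice, a DAG like this would be provisioned alongside the Terraform-managed infrastructure and exercised by the automated builds, deployments, and tests mentioned above.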
Skills and Experience
* Strong experience designing and building data pipelines on cloud platforms, particularly AWS.
* Excellent proficiency in developing ETL processes and data transformation workflows.
* Strong SQL skills (PostgreSQL) and advanced Python coding capability (essential; illustrated in the sketch after this list).
* Experience working with AWS services such as S3, EC2, Glue, Aurora, Redshift, DynamoDB, and Lambda (essential).
* Understanding of Terraform codebases to create and manage AWS infrastructure.
* Experience developing, optimising, and maintaining data pipelines using Apache Airflow.
* Familiarity with distributed data processing systems such as Spark or Databricks.
* Experience with high-performance, low-latency, or large-volume data systems.
* Ability to collaborate effectively within cross-functional, agile, delivery-focused teams.
* Experience defining data models, metadata, and data dictionaries to ensure…
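
As a small illustration of the Python and data-quality skills above, the sketch below shows a cleansing step built on pandas. The column names ("recorded_at", "reading") and validation rules are hypothetical and not drawn from any Triad project.

    import pandas as pd


    def cleanse(df: pd.DataFrame) -> pd.DataFrame:
        # Standardise column names before anything downstream reads them.
        df = df.rename(columns=str.lower)
        # Coerce timestamps; rows that fail become NaT and are dropped.
        df["recorded_at"] = pd.to_datetime(df["recorded_at"], errors="coerce")
        df = df.dropna(subset=["recorded_at"])
        # Hypothetical range check on a numeric measurement column.
        df = df[df["reading"].between(0, 1000)]
        return df.drop_duplicates()


    if __name__ == "__main__":
        raw = pd.DataFrame(
            {
                "Recorded_At": ["2026-01-30", "not a date"],
                "Reading": [42, 42],
            }
        )
        print(cleanse(raw))  # keeps only the valid, deduplicated row

A step like this would typically sit inside an Airflow task, with the validated frame then written on to PostgreSQL or Redshift.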
Position Requirements
10+ years' work experience
Additional Information / Benefits
Company benefits