Data DevOps Engineer
Job in Watford, Hertfordshire, England, UK
Listing for: Addition
Full Time position
Listed on 2025-12-01
Job specializations:
- IT/Tech: Cloud Computing, Data Engineer
Job Description
Introduction
This organisation is transforming how millions of people engage with data, insight, and digital experiences. They’re scaling a modern BI function and need someone who can keep their cloud environments, pipelines, and releases running smoothly.
Role Overview
Location: Hybrid, Watford – 3 days on site
Package: Up to £85,000 + benefits
What You’ll Be Doing
This is a brand-new role, introduced to bridge the gap between DevOps and the data departments.
- Owning cloud integration across AWS for BI workloads, ensuring infrastructure is consistent, secure, and scalable.
- Building and maintaining CI/CD pipelines that support ETL and reporting releases.
- Managing code promotion processes, version control standards, and Jira integrations.
- Overseeing non-production environments to ensure data freshness, alignment, and smooth testing.
- Orchestrating data provisioning, refreshes, and automated workflows for analytics teams.
- Optimising ETL and Power BI code to improve performance, efficiency, and reliability.
- Implementing observability and logging frameworks to monitor data services and deployments.
- Partnering with engineering, data, and reporting teams to coordinate releases and resolve technical challenges.
- Embedding security, governance, and compliance practices across AWS environments.
- Monitoring performance and cost usage, recommending improvements and efficiencies.
- Driving continuous improvement across pipelines, tooling, automation, and release processes.
What You’ll Bring
- Strong hands‑on AWS experience across Redshift, S3, EMR, Lambda, and infrastructure‑as‑code.
- Demonstrated ability to align with DevOps while maintaining distinct responsibilities within the BI function.
- Proficient with CI/CD tooling such as Jenkins or GitHub Actions.
- Advanced Python scripting and solid SQL capability.
- Experience with large‑scale data processing (Spark/EMR) and data warehousing concepts.
- Knowledge of Docker/Kubernetes and containerised deployment workflows.
- Familiarity with Jira integrations, release management, and environment refresh processes.
- Skilled in optimising ETL pipelines and Power BI models, DAX, and refresh strategies.
- Strong troubleshooting skills across cloud, data pipelines, and distributed systems.
- Experience with observability tools such as CloudWatch, Datadog, Prometheus, or Grafana.
- Comfortable working in agile environments with multiple concurrent release cycles.
What’s In It For You
- The chance to work on a major national‑scale transformation with significant technical scope.
- A forward‑thinking environment that embraces automation, innovation, and modern tooling.
- Supportive teams, strong cross‑functional collaboration, and room to influence best practice.
- Inclusive culture where your contribution directly supports meaningful social impact.
We’re easy to talk to. Start the conversation.
Equal Opportunity
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, colour, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.