Senior DataOps Engineer
Listed on 2025-12-10
Category: IT/Tech (Data Engineer, Cloud Computing)
Recruitment Consultant - Engineering & Governance (North and Midlands) at Harnham
Based: Leeds - hybrid
Salary: up to £62,000
I'm partnered with an established financial services company that is scaling its cloud-native data platform and building a modern, centralised data function to better support a wide community of Data Scientists, Analysts, and federated Data Engineers across the organisation.
They are looking for a Senior DataOps Engineer to help shape how data pipelines are run, monitored, governed, and optimised at scale.
This is an opportunity to join a growing team working at the heart of the organisation’s data transformation, improving platform efficiency, enabling self-service, and ensuring data pipelines operate with the same discipline as production software.
The Role
As a Senior DataOps Engineer, you’ll take a strategic, high-level view of the data platform while still diving deep when needed. You will focus on observability, automation, pipeline performance, operational excellence, and cloud cost optimisation.
You’ll work cross-functionally with Data Engineering, DevOps, and FinOps teams, helping ensure that data services are reliable, scalable, secure, and cost-effective, and that federated teams across the organisation can self-serve with confidence.
What You’ll Be Doing
- Taking an overview of how pipelines run across the platform, improving performance and throughput
- Enhancing observability and monitoring across Azure-based data workloads
- Identifying bottlenecks and opportunities to streamline operational processes
- Using scheduling/orchestration tools to optimise workflows and improve run times
- Treating data pipelines as production-grade software, designed for robust monitoring, automation, and scalability
- Supporting incident management and helping federated teams resolve issues efficiently
- Driving efficiency through automation and reduction of manual operational overhead
- Working with FinOps practices to optimise spend and evaluate cost-performance trade-offs
- Advocating for better platform usage, adoption, and operational best practices
What You’ll Bring
- Strong experience with the Azure cloud platform
- Background in data engineering and building/maintaining data pipelines
- Experience with pipeline monitoring, observability, and incident troubleshooting
- Strong automation mindset and ability to build resilient, self-healing data workflows
- Knowledge of FinOps principles and cloud cost optimisation
- Experience with orchestration tools such as Azure Data Factory, Databricks Workflows, or Airflow
- Exposure to containerisation tools (Kubernetes, Docker)
- Experience with data cataloguing tools
- Familiarity with unit testing in data pipelines
- Awareness of MLOps practices
Seniority level: Mid-Senior level
Employment type: Full-time