Director, Data Engineering

Job in Llanelli, Carmarthenshire, SA15, Wales, UK
Listing for: Kroll
Full Time position
Listed on 2026-02-16
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Job Description


At Kroll, our purpose is to bring clarity to complexity. As Director of Data Engineering, you will lead the enterprise data engineering strategy — designing, scaling, and governing the platforms and pipelines that power analytics, AI, and business intelligence. Reporting to the Chief Data and AI Officer, this role drives modernization of Kroll’s data ecosystem, ensuring that infrastructure, architecture, and operations enable trusted, real-time, and actionable insight across all business units.

Day-to-Day Responsibilities
  • Strategic Leadership
    • Define and execute Kroll’s Data Engineering strategy and roadmap in alignment with the Data & AI vision. Translate enterprise goals into measurable data platform capabilities.
    • Champion modernization of data engineering practices including automation, observability, and scalability.
    • Partner with peers in Reporting, Analytics, AI, Technology, Infosec, corporate functions, and business units to deliver a unified and extensible data foundation.
  • Architecture & Governance
    • Lead the design and operation of the enterprise data platform architecture across ingestion, transformation, and serving layers.
    • Define and enforce standards for data security, lineage, metadata, and quality.
    • Collaborate to operationalize compliance, privacy, and risk management.
    • Drive adoption of modern architectural patterns such as data mesh, lakehouse, and event-driven pipelines.
  • Technology & Delivery
    • Oversee end-to-end data delivery: ingestion, transformation, orchestration, and consumption pipelines.
    • Implement CI/CD, GitOps, and Infrastructure-as-Code to streamline data deployment.
    • Ensure data systems meet reliability and performance SLAs through monitoring and proactive capacity planning.
    • Collaborate with product and analytics leads to enable reusable, certified data products that fuel AI and insight.
  • People & Capability Development
    • Build and mentor a global team of senior data engineers, architects, and platform engineers.
    • Establish engineering excellence and cross-functional collaboration with analytics and product teams.
    • Promote a culture of technical rigor, transparency, and continuous learning.
  • Technical Expertise
    • Enterprise Data Architecture & Cloud Ecosystem
      • Deep experience architecting and managing modern data ecosystems across Azure and Databricks, with working knowledge of AWS.
      • Expertise in designing and governing Lakehouse and Medallion architectures to unify structured, semi-structured, and unstructured data at scale.
      • Hands‑on understanding of data fabric, data mesh, and domain‑oriented architecture models.
      • Strong command of cloud infrastructure fundamentals: compute, storage, networking, cost optimization, and security.
    • Engineering Leadership & Development Standards
      • Proven ability to design, build, and oversee ETL/ELT pipelines and data services using Chainsys, Azure Data Factory, Airflow, Databricks, and Delta Lake.
      • Advanced proficiency in Python and the Spark ecosystem (PySpark, Spark SQL), with demonstrated capability to set and enforce best practices.
      • Skilled in object‑oriented and functional programming, asynchronous processing, and hybrid batch/streaming architectures.
      • Deep knowledge of API‑driven data integration, SDK development, and API lifecycle management.
    • Data Quality, Governance & Observability
      • Experience operationalizing metadata management, data cataloging, and lineage tracking using Azure Purview or equivalent.
      • Skilled in defining and enforcing data quality, reliability, and compliance frameworks.
      • Hands‑on knowledge of observability practices — monitoring, alerting, and incident response with Prometheus, Grafana, Datadog, or equivalent.
    • Performance, Optimization & Scalability
      • Expertise in SQL/Spark query tuning, data pipeline optimization, and distributed system performance engineering.
      • Experienced in scaling fault‑tolerant pipelines for petabyte‑scale workloads while ensuring high availability.
      • Knowledge of containerization (Docker, Kubernetes) and applying CI/CD and DevOps pipelines to data workflows.
    • Tooling & Automation
      • Familiarity with modern frameworks such as FastAPI, Pydantic, Polars, and Pandas.
      • Experience automating data…