Senior Data Engineer
Job in Glasgow, Glasgow City Area, G1, Scotland, UK
Listed on 2026-01-24
Listing for: Head Resourcing
Full Time position
Job specializations:
- IT/Tech: Data Engineer, Cloud Computing
Job Description & How to Apply Below
GLASGOW BASED, 4 days per week on site.
No sponsorship or relocation provided, sadly.
Our incredibly successful client, a consumer brand, is undertaking a major data modernisation programme: moving away from legacy systems, manual Excel reporting and fragmented data sources into a fully automated Azure Enterprise Landing Zone and Databricks Lakehouse.
They are building a modern data platform from the ground up using Lakeflow Declarative Pipelines, Unity Catalog and Azure Data Factory, and this role sits right at the heart of that transformation.
This is a rare opportunity to join early, influence architecture, and help define engineering standards, pipelines, curated layers and best practices that will support Operations, Finance, Sales, Logistics and Customer Care.
What You'll Be Doing
- Engineer scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark and Spark SQL across a full Medallion Architecture (Bronze → Silver → Gold).
- Implement ingestion patterns for files, APIs, SaaS platforms (e.g. subscription billing), SQL sources, SharePoint and SFTP using ADF and metadata-driven frameworks.
- Apply Lakeflow expectations for data quality, schema validation and operational reliability (see the sketch after this list).
- Build clean, conformed Silver/Gold models aligned to enterprise business domains (customers, subscriptions, deliveries, finance, credit, logistics, operations).
- Deliver star schemas, harmonisation logic, SCDs and business marts to power high-performance Power BI datasets.
- Apply governance, lineage and fine-grained permissions via Unity Catalog.
- Design and optimise orchestration using Lakeflow Workflows and Azure Data Factory.
- Implement monitoring, alerting, SLAs/SLIs, runbooks and cost optimisation across the platform.
- Build CI/CD pipelines in Azure DevOps for notebooks, Lakeflow pipelines, SQL models and ADF artefacts.
- Ensure secure, enterprise-grade platform operation across Dev and Prod environments, using private endpoints, managed identities and Key Vault.
- Contribute to platform standards, design patterns, code reviews and the future roadmap.
- Work closely with BI/Analytics teams to deliver curated datasets powering dashboards across the organisation.
- Influence architecture decisions and uplift engineering maturity within a growing data function.
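To make the expectations bullet above concrete, here is a minimal sketch of a Bronze-to-Silver hop, written against the Python API that Lakeflow Declarative Pipelines inherits from Delta Live Tables (import dlt). All table names, columns and paths are hypothetical, not taken from this role.

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw subscription events landed as-is (Bronze).")
def bronze_subscriptions():
    # Auto Loader incrementally discovers new files in the landing path.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/landing/subscriptions/")  # hypothetical path
    )

# Expectations declare data-quality rules on the table: rows failing
# them are dropped, and violation counts surface in pipeline metrics.
@dlt.table(comment="Validated, conformed subscription events (Silver).")
@dlt.expect_all_or_drop({
    "valid_customer": "customer_id IS NOT NULL",
    "valid_amount": "amount >= 0",
})
def silver_subscriptions():
    return (
        dlt.read_stream("bronze_subscriptions")
        .withColumn("ingested_at", F.current_timestamp())
    )
```

Because the rules live next to the table definition, every violation lands in the pipeline's event log, which is what turns "operational reliability" into something you can monitor and alert on.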
Tech Stack
- Databricks: Lakeflow Declarative Pipelines, Workflows, Unity Catalog, SQL Warehouses
- Languages: PySpark, Spark SQL, Python, Git
- Analytics: Power BI, Fabric
- Significant commercial Data Engineering experience, with years of delivering production workloads on Azure and Databricks.
- Strong PySpark/Spark SQL and distributed data processing expertise.
- Solid dimensional modelling (Kimball), including surrogate keys, SCD types 1/2 and merge strategies (a merge sketch follows this list).
- Operational experience: SLAs, observability, idempotent pipelines, reprocessing, backfills.
- Comfort with Git, CI/CD, automated deployments and modern engineering standards.
- Clear communicator who can translate technical decisions into business outcomes.
Nice to have:
- Streaming ingestion experience (Auto Loader, Structured Streaming, watermarking)
- Advanced Unity Catalog security (RLS, ABAC, PII governance)
- Terraform/Bicep for IaC
- Fabric Semantic Model / Direct Lake optimisation
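As one illustration of the SCD type 2 merge strategies mentioned above, below is a hedged sketch using the Delta Lake Python API; dim_customer, stg_customer and every column name are hypothetical (string customer IDs assumed). The NULL merge_key trick is the standard way to close the current row and insert its replacement in a single MERGE.

```python
from delta.tables import DeltaTable
from pyspark.sql import functions as F

dim = DeltaTable.forName(spark, "gold.dim_customer")     # hypothetical
staged = spark.table("silver.stg_customer")              # hypothetical

# Source rows whose tracked attributes changed: these need the current
# dimension row closed AND a new version inserted.
changed = (
    staged.alias("s")
    .join(
        dim.toDF().filter("is_current = true").alias("d"),
        F.col("s.customer_id") == F.col("d.customer_id"),
    )
    .filter("s.name <> d.name OR s.address <> d.address")
    .select("s.*")
)

# Feed changed rows through twice: once keyed on customer_id (matches,
# closes the old row) and once with a NULL key (no match, so it falls
# through to the INSERT branch as the new current version).
updates = staged.withColumn(
    "merge_key", F.col("customer_id").cast("string")
).unionByName(
    changed.withColumn("merge_key", F.lit(None).cast("string"))
)

(
    dim.alias("tgt")
    .merge(
        updates.alias("src"),
        "tgt.customer_id = src.merge_key AND tgt.is_current = true",
    )
    .whenMatchedUpdate(
        condition="tgt.name <> src.name OR tgt.address <> src.address",
        set={"is_current": "false", "valid_to": "current_timestamp()"},
    )
    .whenNotMatchedInsert(
        values={
            "customer_id": "src.customer_id",
            "name": "src.name",
            "address": "src.address",
            "valid_from": "current_timestamp()",
            "valid_to": "NULL",
            "is_current": "true",
        }
    )
    .execute()
)
```

Unchanged rows match but fail the update condition, so they are a no-op; brand-new customers miss the match and insert directly; changed rows do both, which is exactly the type 2 history the Gold dimensions need.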
Position Requirements
10+ years' work experience