Azure Data Architect
Listed on 2025-12-07
IT/Tech · Data Engineer, Cloud Computing
Who We Are
Born digital, UST transforms lives through the power of technology. We walk alongside our clients and partners, embedding innovation and agility into everything they do. We help them create transformative experiences and human-centered solutions for a better world.
Role Description
As a Senior Data Architect, you will lead the design and delivery of modern data platforms on Microsoft Azure. You’ll own end-to-end architectures centered on Azure Databricks, Snowflake on Azure, dbt, and Azure Data Factory (ADF), with supporting services such as Azure Data Lake Storage Gen2 (ADLS), Azure Synapse/Fabric pipelines, Microsoft Purview, Azure Event Hubs, Azure Key Vault, and Azure DevOps.
Job Details
- Seniority Level: Mid‑Senior level
- Employment Type: Full‑time
- Job Function: Engineering and Information Technology
- Industries: IT Services and IT Consulting
- Location: Remote (West Coast; Seattle, WA preferred)
Responsibilities
- Hands‑on architecture role: set technical direction, mentor engineers, and ship scalable, reliable, and cost‑efficient data solutions.
- Partner with client stakeholders to translate business goals into scalable Azure data architectures and delivery roadmaps.
- Architect and implement lakehouse/data warehouse solutions using Azure Databricks (Delta Lake/Unity Catalog), Snowflake on Azure, dbt (Core/Cloud), and ADLS Gen2 following medallion patterns.
- Design robust ingestion & transformation pipelines with ADF and/or Fabric/Synapse pipelines; orchestrate Databricks jobs, Delta Live Tables, and dbt models for ELT (a minimal PySpark sketch follows this list).
- Establish best practices for performance & cost (cluster sizing, autoscaling, Photon/SQL warehouse, Snowflake virtual warehouses, caching, file layout, Z‑ordering) and drive observability with Azure Monitor/Log Analytics and Databricks metrics.
- Implement data governance & security with Microsoft Purview (catalog, lineage, policies), Unity Catalog, RBAC/ABAC, managed identities, Private Endpoints, VNet injection, and Key Vault–backed secrets (see the secrets sketch after this list).
- Lead code and design reviews; set standards for PySpark/SQL/dbt, testing (unit/integration), data quality (expectations/constraints), and CI/CD via Azure DevOps/GitHub Actions.
- Guide multi‑domain programs across healthcare, retail, BFSI, or similar; mentor data engineers and ensure high‑quality deliverables.
- Evangelize solutions through clear documentation, reference architectures, and stakeholder presentations.
- Stay current on Azure data & AI services (Databricks, Snowflake features, Fabric, Purview) and pragmatically introduce improvements.
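To make the lakehouse and pipeline responsibilities above concrete, here is a minimal PySpark sketch of a bronze-to-silver step in a medallion layout, ending with a Z‑order compaction. The storage path, table names, and columns (event_id, event_ts, user_id) are hypothetical placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists as `spark`; getOrCreate() is a no-op there.
spark = SparkSession.builder.getOrCreate()

# Bronze: land raw JSON from ADLS Gen2 as-is (path and account are hypothetical).
bronze = spark.read.format("json").load("abfss://raw@myaccount.dfs.core.windows.net/events/")
bronze.write.format("delta").mode("append").saveAsTable("bronze.events_raw")

# Silver: cleanse and conform, i.e. deduplicate, type the timestamp, drop bad rows.
silver = (
    spark.table("bronze.events_raw")
    .dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("user_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.events")

# Performance housekeeping: compact files and Z-order on a common filter column.
spark.sql("OPTIMIZE silver.events ZORDER BY (user_id)")
```

In practice a step like this would run as a Databricks job or Delta Live Tables pipeline triggered by ADF, rather than interactively.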
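For the governance bullet, a small sketch of reading a Key Vault–backed secret under a managed identity, using the azure-identity and azure-keyvault-secrets packages; the vault URL and secret name are hypothetical.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential picks up a managed identity on Azure compute,
# falling back to developer credentials (Azure CLI, etc.) locally.
credential = DefaultAzureCredential()
client = SecretClient(vault_url="https://my-vault.vault.azure.net", credential=credential)

# Fetch a connection secret by name (hypothetical) instead of hard-coding it.
snowflake_password = client.get_secret("snowflake-svc-password").value
```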
Requirements
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- At least 15 years in data engineering/architecture with substantial hands‑on depth in Azure.
- Expert in Azure Databricks & Apache Spark (PySpark/SQL), Delta Lake, Unity Catalog, job orchestration, and performance tuning.
- Strong Snowflake on Azure experience (schema design, performance/cost optimization, RBAC, streams/tasks; see the streams/tasks sketch after this list).
- dbt expertise (project structure, tests, exposures, macros; Core or Cloud) and solid SQL/Python fundamentals.
- Proven track record building ETL/ELT pipelines with ADF and/or Synapse/Fabric pipelines; familiarity with Event Hubs/Kafka for streaming.
- Solid foundation in lakehouse and warehousing architectures, dimensional modeling, medallion patterns, and data quality frameworks.
- Security & governance know‑how: Purview lineage/catalog, data masking, PII handling, managed identities, private networking.
- Excellent communication skills; able to produce architecture docs and present to technical and business audiences.
- Consulting/agency experience and the ability to lead multi‑project portfolios; willingness to travel as needed.
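As a rough illustration of the streams/tasks requirement, the sketch below uses the snowflake-connector-python package to set up change capture on a table and a scheduled task that consumes it. The account, warehouse, table, and task names are all hypothetical, and real credentials would come from Key Vault rather than being inlined.

```python
import snowflake.connector

# Connection parameters are placeholders; in practice pull credentials from Key Vault.
conn = snowflake.connector.connect(
    account="my_account", user="svc_user", password="...",
    warehouse="transform_wh", database="analytics", schema="silver",
)
cur = conn.cursor()

# A stream records inserts/updates/deletes on the source table (change data capture).
cur.execute("CREATE OR REPLACE STREAM events_stream ON TABLE events")

# A task periodically drains the stream into a conformed table.
cur.execute("""
    CREATE OR REPLACE TASK merge_events
      WAREHOUSE = transform_wh
      SCHEDULE = '5 MINUTE'
    AS
      INSERT INTO events_conformed
      SELECT event_id, user_id, event_ts FROM events_stream
""")
cur.execute("ALTER TASK merge_events RESUME")  # tasks are created suspended
```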
Nice to Have
- Experience with Azure ML, feature stores, or serving ML pipelines from Databricks/Snowflake.
- Infrastructure as Code (Terraform/Bicep) for Azure data platforms; containerization with Docker.
- Observability tools (Great Expectations/Delta Expectations, Databricks quality flows, Monte Carlo, Datadog); a hand‑rolled data‑quality check is sketched after this list.
- Exposure to AWS or GCP is welcome but not required.
- Familiarity with AI/ML on…
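To illustrate the expectations idea without assuming any particular library’s API, here is a hand-rolled PySpark check in the spirit of the observability tools above; the table and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("silver.events")  # hypothetical table

# Expectation: user_id must never be null. Count violations and fail loudly,
# so an orchestrator (ADF, Databricks Jobs) can halt downstream steps.
null_users = df.filter(F.col("user_id").isNull()).count()
if null_users > 0:
    raise ValueError(f"Data quality check failed: {null_users} rows with null user_id")
```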