Company Description
ThreatXIntel is a startup dedicated to delivering tailored, cost-effective cybersecurity solutions that empower businesses to protect their digital assets. Specializing in cloud security, web and mobile security testing, DevSecOps, and more, ThreatXIntel focuses on the unique challenges faced by organizations of all sizes. By proactively identifying vulnerabilities through continuous monitoring and testing, the company ensures the safety and reliability of digital environments. ThreatXIntel's mission is to provide reliable security services that enable businesses to grow with confidence, offering peace of mind in the ever-evolving cybersecurity landscape.
Role Description
We are seeking an experienced Azure Databricks Data Engineer to architect and implement secure, scalable lakehouse solutions using the medallion architecture (bronze, silver, gold layers).
The ideal candidate will have strong hands-on expertise in Azure Databricks, Apache Spark, Delta Lake, and large-scale data migration strategies. This role involves designing high-performance ETL/ELT pipelines, implementing governance controls, and delivering production-grade data products for business consumption.
Key Responsibilities
- Architect and implement lakehouse solutions using medallion architecture on Azure Databricks
- Design and develop ETL/ELT pipelines to migrate data from transactional systems to Azure lakehouse environments
- Build and optimize Spark jobs for large-scale batch and streaming data processing
- Create curated gold-layer datasets ready for analytics and reporting
- Implement incremental ingestion with schema evolution using Databricks Auto Loader and Delta Lake
- Enable and manage Delta Lake features such as ACID transactions and time travel
- Ensure security, governance, and compliance using Unity Catalog, Azure Key Vault, and RBAC
- Collaborate with data scientists, analysts, and stakeholders to deliver data products
- Monitor, troubleshoot, and optimize workloads for performance and cost efficiency
- Mentor junior engineers and contribute to architectural best practices
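The medallion layering at the heart of these responsibilities can be sketched in a framework-free way. In production this would be Spark DataFrames on Delta Lake tables; plain Python structures are used here purely to illustrate how each layer refines the one below it, and all record fields and values are hypothetical.

```python
# Bronze: raw records landed as-is, including duplicates and malformed rows.
bronze = [
    {"order_id": "1", "amount": "19.99", "region": "EU"},
    {"order_id": "1", "amount": "19.99", "region": "EU"},  # duplicate
    {"order_id": "2", "amount": "bad",   "region": "US"},  # malformed amount
    {"order_id": "3", "amount": "5.00",  "region": "US"},
]

def to_silver(rows):
    """Silver: deduplicate on the business key and enforce types;
    rows failing validation are dropped (a real pipeline would quarantine them)."""
    seen, out = set(), []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue
        if r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        out.append({**r, "amount": amount})
    return out

def to_gold(rows):
    """Gold: a curated, business-ready aggregate — here, revenue per region."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'EU': 19.99, 'US': 5.0}
```

The same shape carries over to Databricks: bronze is an append-only raw Delta table fed by streaming ingestion, silver applies deduplication and type enforcement, and gold exposes aggregates ready for analytics and reporting.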
Required Technical Skills
- 5+ years of experience in data engineering
- 2+ years of hands-on experience in Azure Databricks and Apache Spark
- Strong expertise in Delta Lake, Databricks Auto Loader, and Structured Streaming
- Proficiency in PySpark and SQL
- Strong understanding of medallion architecture and lakehouse design
- Experience with Azure Data Lake Storage Gen2, Azure Synapse, Event Hub, and Azure Functions
- Experience implementing CI/CD using Azure DevOps, GitHub Actions, or Terraform
- Knowledge of Unity Catalog, MLflow, and Delta Live Tables
- Experience in schema design, data modeling, and data quality frameworks
- Experience in data migration strategies including CDC and streaming ingestion
Preferred Qualifications
- Azure certifications such as DP-203 or Azure Solutions Architect Expert
- Experience with data mesh or domain-driven design
- Familiarity with Databricks REST APIs and job orchestration
- Experience in regulated industries with strong data governance practices