
Azure Platform Infrastructure Engineer - Databricks, Python

Job in Toronto, Ontario, C6A, Canada
Listing for: Astra-North Infoteck Inc. ~ Conquering today’s challenges, achieving tomorrow’s vision!
Full Time position
Listed on 2026-02-10
Job specializations:
  • IT/Tech
    Data Engineer, Azure
Job Description & How to Apply Below

Overview

This role focuses on designing and implementing data infrastructure and pipelines in Microsoft Azure, with an emphasis on Databricks, Data Factory, Data Lake Storage, and Synapse. It requires strong DevOps practices, data modeling knowledge, and awareness of data privacy and compliance.

Responsibilities
  • Solid understanding of Azure infrastructure (subscriptions, resource groups, resources, access control with RBAC, Azure AD integration, networking concepts, credential and key management, and data protection).
  • Strong hands-on knowledge of Azure Databricks, Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), Azure Synapse (serverless and dedicated SQL pools, Spark pools), Python, PySpark, and T-SQL.
  • Experience designing and developing scripts for ETL processes and automation in Azure Data Factory and Azure Databricks (see the PySpark sketch after this list).
  • High proficiency in Git, Jenkins, and DevOps processes to maintain data pipelines and resolve issues in production.
  • Experience provisioning Azure resources and networking with Terraform, and the ability to troubleshoot Azure infrastructure issues in production.
  • Good understanding of data modeling, data marts, lakehouse architecture, Slowly Changing Dimensions (SCD), data mesh, and Delta Lake (see the merge sketch after this list).
  • Solid understanding of data privacy and compliance regulations and of best practices for protecting customer data.
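
To make the ETL bullet above concrete, here is a minimal sketch of the kind of PySpark job typically run in Azure Databricks: read raw files from ADLS Gen2, apply light transforms, and write a Delta table. The storage account, container, paths, and table names are hypothetical, not part of this posting.

```python
# Minimal PySpark ETL sketch for Azure Databricks (illustrative names only).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in a Databricks notebook

# Hypothetical ADLS Gen2 location; real paths and credentials come from workspace config.
raw_path = "abfss://raw@examplestorageacct.dfs.core.windows.net/sales/2024/"

orders = (
    spark.read
    .option("header", "true")
    .csv(raw_path)
    .withColumn("order_date", F.to_date("order_date"))       # normalize types
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])                             # basic data hygiene
)

# Persist as a Delta table for downstream Synapse/BI consumption.
orders.write.format("delta").mode("overwrite").saveAsTable("bronze.orders")
```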
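For the SCD and Delta Lake bullet, a hedged sketch of a Type 1 upsert using Delta Lake's MERGE API; the staging and dimension table names and the join key are illustrative assumptions.

```python
# SCD Type 1 upsert with Delta Lake MERGE (illustrative table/column names).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.table("staging.customers")              # incoming changed rows
target = DeltaTable.forName(spark, "silver.customers")  # existing dimension table

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # overwrite changed attributes (Type 1 behaviour)
    .whenNotMatchedInsertAll()   # insert brand-new customers
    .execute()
)
```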
Desirable Skills
  • Microsoft Azure
  • Databricks
Experience Required

6-8 years
