Overview
We are looking for an experienced Data Engineer with strong expertise in Databricks and Apache Spark to build and optimize scalable data pipelines on cloud platforms. The ideal candidate will have hands-on experience delivering ETL/ELT workflows, transforming large datasets, and supporting analytics and data platform initiatives. Experience with Azure is required; exposure to GCP and modern orchestration tools is a plus.
Mandatory Skills
Strong hands-on experience with Azure Databricks
Expert-level proficiency in Apache Spark (PySpark/Scala)
Solid understanding of ETL/ELT pipelines, batch & streaming data processing
Proficient in Python and SQL
Good-to-Have Skills
Experience with GCP data services: BigQuery, Dataflow, Dataproc
Knowledge of Airflow / Cloud Composer, including DAG creation and orchestration
Familiarity with GCP IAM, storage, and networking concepts
Exposure to data pipelines across multi-cloud environments
Experience with orchestration, workflow automation, and CI/CD for data pipelines
Position Requirements
10+ years of work experience