Duration: 12+ Months
Job Description
We are looking for a Data Engineer with strong experience in Databricks to build, optimize, and maintain scalable data pipelines and platforms.
Responsibilities
- Develop and optimize data pipelines using Databricks & Apache Spark
- Build ETL/ELT workflows using PySpark / Spark SQL
- Integrate data from multiple sources and ensure data quality
- Collaborate with analytics and business teams

Requirements
- Strong hands-on experience with Databricks
- Proficiency in PySpark, Spark SQL, and Python
- Experience with cloud platforms (AWS / Azure / GCP)
- Good understanding of data engineering and ETL concepts