Data Engineer: Databricks
Job in Cape Town, 7100, South Africa
Listed on 2026-02-06
Listing for: iOCO Pty Ltd
Full Time position
Job specializations:
- IT/Tech: Data Engineer, Data Science Manager, Cloud Computing, Big Data
Job Description
Overview
We are looking for a skilled Data Engineer with strong Python and Databricks experience to design, build, and maintain scalable data pipelines and platforms that enable analytics, reporting, and advanced data use cases. You will work closely with Data Scientists, BI Developers, and business stakeholders to deliver high-quality, production-ready data solutions across cloud-based environments.
This role requires strong engineering fundamentals, hands-on Databricks development, and experience working with large, complex datasets.
Responsibilities
- Design, develop, and maintain scalable data pipelines using Python and Databricks
- Build robust ETL/ELT processes to ingest structured and unstructured data
- Develop and optimize Spark jobs for performance and reliability
- Implement data models to support analytics and downstream consumption
- Integrate multiple data sources (APIs, databases, files, streaming sources)
- Ensure data quality, validation, and governance standards are met
- Collaborate with Data Scientists and BI teams to operationalize analytics solutions
- Implement monitoring, logging, and error-handling mechanisms
- Support production deployments and troubleshoot pipeline issues
- Participate in code reviews and contribute to engineering best practices
- Document solutions and processes clearly
Advantageous
- Experience with Azure Databricks or AWS Databricks
- Knowledge of data orchestration tools (Airflow, ADF, etc.)
- Experience with streaming technologies (Kafka, Event Hubs, etc.)
- Exposure to data governance and security frameworks
- Experience supporting machine learning pipelines
- Understanding of DevOps practices
Requirements
- 5+ years’ experience as a Data Engineer or a similar role
- Strong proficiency in Python for data engineering
- Hands-on experience with Databricks (notebooks, jobs, workflows)
- Solid understanding of Apache Spark concepts
- Experience building ETL/ELT pipelines
- Strong SQL skills
- Experience working in cloud environments (AWS / Azure / GCP)
- Familiarity with Delta Lake or similar lakehouse architectures
- Version control (Git) and CI/CD exposure
- Strong problem-solving and analytical skills