Expert Data Engineer
Job in Pretoria, 0002, South Africa
Listed on: 2025-12-31
Listing for: Sabenza IT & Recruitment
Full Time position
Job specializations:
- IT/Tech: Data Engineer, Cloud Computing
Job Description
About the Role:
The Data Engineer will work on enterprise-wide data provisioning, spanning multiple data governance domains and data assets. Responsibilities include ensuring secure data sharing, adhering to protection and compliance requirements, supporting enterprise Data & Analytics initiatives (including high-priority use cases), and enabling data provisioning for operational processes.
Role Responsibilities:
Data Engineers in this environment are custodians of critical data assets and pipelines. Responsibilities include:
- Building and maintaining large-scale Big Data pipelines on cloud-based data platforms (see the sketch after this list)
- Ensuring secure and compliant data sharing aligned with information classification standards
- Supporting enterprise Data & Analytics initiatives and high-priority use cases
- Continuously improving and automating data engineering processes
- Evaluating emerging tools and technologies to drive innovation
- Mentoring and upskilling team members
- Maintaining high-quality technical documentation
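As a rough illustration of the pipeline-building responsibility above, here is a minimal PySpark sketch: raw JSON is read from S3, lightly cleaned, and written back as partitioned Parquet. The bucket names, paths, and column names are hypothetical placeholders, not details of this role's actual platform.

# Minimal PySpark batch job: raw JSON in S3 -> partitioned Parquet.
# All bucket names, paths, and columns below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw-to-curated").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/events/")  # hypothetical source

curated = (
    raw
    .filter(F.col("event_type").isNotNull())          # drop malformed records
    .withColumn("event_date", F.to_date("event_ts"))  # derive a partition key
    .dropDuplicates(["event_id"])                     # keep re-runs idempotent
)

(curated.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events/"))  # hypothetical target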
Candidates must demonstrate above-average expertise in:
Cloud & Infrastructure
- Terraform
- Docker
- Linux / Unix
- CloudFormation
- CodeBuild / CodePipeline
- CloudWatch
- SNS
- S3
- Kinesis (Data Streams, Firehose)
- Lambda
- DynamoDB
- Step Functions
- Parameter Store
- Secrets Manager
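As a hedged sketch of how several of the services above (Parameter Store, Secrets Manager, SNS) typically fit around a data pipeline, the Boto3 snippet below reads configuration and credentials and publishes a completion event. Every name, ARN, and key in it is a hypothetical placeholder.

# Fetch pipeline config and credentials, then notify downstream consumers.
# All names, ARNs, and keys are hypothetical.
import json
import boto3

ssm = boto3.client("ssm")
secrets = boto3.client("secretsmanager")
sns = boto3.client("sns")

def load_pipeline_config() -> dict:
    # Non-secret settings live in Parameter Store, credentials in Secrets Manager.
    param = ssm.get_parameter(
        Name="/data-platform/curated/output-bucket",  # hypothetical parameter
        WithDecryption=True,
    )
    secret = secrets.get_secret_value(
        SecretId="data-platform/warehouse-credentials"  # hypothetical secret
    )
    return {
        "output_bucket": param["Parameter"]["Value"],
        "warehouse": json.loads(secret["SecretString"]),
    }

def notify_success(run_id: str) -> None:
    # Publish a completion event so downstream jobs or alarms can react.
    sns.publish(
        TopicArn="arn:aws:sns:eu-west-1:123456789012:pipeline-events",  # hypothetical
        Message=json.dumps({"run_id": run_id, "status": "SUCCEEDED"}),
    )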
Programming & Data Engineering
- Python 3.x
- SQL (Oracle / PostgreSQL)
- PySpark
- Boto3
- ETL development
- Big Data platforms
- PowerShell / Bash
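As a small example in the spirit of this list, the sketch below does plain-Python ETL: extract a CSV, normalise it, and load it as Parquet. The file paths and column names are invented for illustration, and pandas with a Parquet engine such as pyarrow is assumed to be available.

# Minimal extract-transform-load in plain Python 3 / pandas.
# Paths and column names are hypothetical.
import pandas as pd

def run_etl(source_csv: str, target_parquet: str) -> int:
    # Extract
    df = pd.read_csv(source_csv)

    # Transform: normalise headers, coerce types, drop unusable rows
    df.columns = [c.strip().lower() for c in df.columns]
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    df = df.dropna(subset=["customer_id", "amount"])

    # Load: Parquet preserves types and compresses well on Big Data platforms
    df.to_parquet(target_parquet, index=False)
    return len(df)

if __name__ == "__main__":
    rows = run_etl("transactions.csv", "transactions.parquet")
    print(f"Loaded {rows} rows")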
Data Platforms & Tools
- Glue
- Athena
- Technical data modelling & schema design (hands‑on, not drag‑and‑drop)
- Kafka
- AWS EMR
- Redshift
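To show how Glue and Athena fit together, here is a hedged Boto3 sketch that runs SQL against a Glue-catalogued table and polls until the query finishes. The database name, table, and S3 output location are hypothetical placeholders.

# Query a Glue-catalogued table through Athena with Boto3.
# Database, table, and output location are hypothetical.
import time
import boto3

athena = boto3.client("athena")

def run_athena_query(sql: str) -> str:
    # Start the query, then block until Athena reports a terminal state.
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "curated_db"},     # hypothetical
        ResultConfiguration={
            "OutputLocation": "s3://example-athena-results/"  # hypothetical
        },
    )["QueryExecutionId"]

    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)  # simple poll; real code would back off and time out

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")
    return qid

run_athena_query("SELECT event_date, COUNT(*) FROM events GROUP BY 1")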
Business & Analytics
- Business Intelligence (BI) experience
- Strong data governance and security understanding
- Advanced data modelling expertise, especially in Oracle SQL
- Strong analytical skills for large, complex datasets
- Experience with testing, data validation, and transformation accuracy
- Excellent documentation skills, plus strong written and verbal communication
- Ability to work independently, multitask, and collaborate within teams
- Experience building data pipelines using AWS Glue, Data Pipeline, or similar
- Familiarity with AWS S3, RDS, and DynamoDB
- Solid understanding of software design patterns
- Experience preparing technical specifications, designing, coding, testing, and debugging solutions
- Strong organisational abilities
- Knowledge of Parquet, AVRO, JSON, XML, CSV
- Experience with Data Quality tools such as Great Expectations (see the validation sketch after this list)
- Experience working with REST APIs
- Basic networking knowledge and troubleshooting skills
- Understanding of Agile methodologies
- Experience with documentation tools such as Confluence and JIRA
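As an illustration of the testing and data-validation expectations above, and of the kind of checks a tool like Great Expectations formalizes, here is a plain-pandas sketch. The column names, thresholds, and input file are hypothetical.

# Data-quality checks of the sort Great Expectations formalizes,
# sketched in plain pandas. Columns and thresholds are hypothetical.
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    # Return a list of failed expectations; an empty list means pass.
    failures = []
    if df["customer_id"].isna().any():
        failures.append("customer_id contains nulls")
    if df["customer_id"].duplicated().any():
        failures.append("customer_id is not unique")
    if not df["amount"].between(0, 1_000_000).all():
        failures.append("amount outside expected range")
    return failures

df = pd.read_parquet("transactions.parquet")  # hypothetical input
problems = validate(df)
if problems:
    raise ValueError("Data quality check failed: " + "; ".join(problems))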
Qualifications & Certifications
- Relevant IT, Business, or Engineering Degree
- Experience developing technical documentation and artefacts
- Experience with enterprise collaboration tools
- AWS Cloud Practitioner
- AWS SysOps Administrator Associate
- AWS Developer Associate
- AWS Solutions Architect Associate
- AWS Solutions Architect Professional
- HashiCorp Terraform Associate
Requirements
Terraform, Python, Docker, AWS, SQL