
Data Engineer

Job in Johannesburg, 2000, South Africa
Listing for: Nedbank
Full Time position
Listed on 2025-12-17
Job specializations:
  • IT/Tech
    Data Engineer, Big Data
Job Description


The purpose of the Data Engineer role is to leverage data expertise and data-related technologies, in line with the Nedbank Data Architecture Roadmap, to advance technical thought leadership for the Enterprise, deliver fit-for-purpose data products and support data initiatives. In addition, Data Engineers enhance the bank’s data infrastructure to enable advanced analytics, machine learning and artificial intelligence by providing clean, usable data to stakeholders.

They also create data pipelines and ingestion, provisioning, streaming, self-service, API and big data solutions that support the bank’s strategy to become a data-driven organisation.
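
By way of illustration only (not part of the posting), below is a minimal PySpark Structured Streaming sketch of the kind of Kafka ingestion pipeline described above; the topic, broker address, schema and data lake paths are all hypothetical placeholders:

    # Requires the spark-sql-kafka connector on the Spark classpath.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("ingest-transactions").getOrCreate()

    # Hypothetical schema for the incoming events.
    schema = StructType([
        StructField("account_id", StringType()),
        StructField("amount", DoubleType()),
        StructField("currency", StringType()),
    ])

    # Subscribe to a placeholder Kafka topic.
    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")
           .option("subscribe", "transactions")
           .load())

    # Kafka delivers raw bytes; parse the JSON payload into typed columns.
    events = (raw.selectExpr("CAST(value AS STRING) AS json")
              .select(from_json(col("json"), schema).alias("e"))
              .select("e.*"))

    # Land the stream in the data lake as Parquet, with checkpointing for recovery.
    query = (events.writeStream
             .format("parquet")
             .option("path", "/mnt/datalake/raw/transactions")
             .option("checkpointLocation", "/mnt/datalake/_chk/transactions")
             .start())

A self-service or API layer would then expose the landed data to downstream consumers.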

Job Responsibilities
  • Responsible for the maintenance, improvement, cleaning, and manipulation of data in the bank’s operational and analytics databases.
  • Build and manage scalable, optimised, supported, tested, secure, and reliable data infrastructure using databases (DB2, PostgreSQL, MSSQL, HBase, NoSQL, etc.), data lake storage (Azure Data Lake Gen 2), cloud-based solutions (SAS, Azure Databricks, Azure Data Factory, HDInsight) and data platforms (SAS, Ab Initio, Denodo, Netezza, Azure Cloud). Ensure data security and privacy in collaboration with Information Security, the CISO and Data Governance.
  • Build and maintain data pipelines for ingestion, provisioning, streaming and APIs that integrate data across on-premises and cloud data engineering tool sets.
  • Efficiently extract data from golden and trusted sources and load it into the Nedbank Data Warehouse (Data Reservoir, Atomic Data Warehouse, Enterprise Data Mart).
  • Provide data to business lines, regulatory and compliance marts through self-service data virtualisation, and to applications or Nedbank data consumers.
  • Transform data to a common data model for reporting and data analysis, providing data in a consistent and usable format.
  • Handle big data technologies (Hadoop), streaming (Kafka) and data replication (IBM InfoSphere Data Replication).
  • Drive utilisation of data integration tools (Ab Initio) and cloud data integration tools (Azure Data Factory and Azure Databricks).
  • Collaborate with data analysts, software engineers, data modelers, data scientists, scrum masters and data warehouse teams to contribute to data architecture detail designs and own epics end‑to‑end, ensuring data solutions deliver business value.
  • Implement data quality checks in pipelines to maintain high data accuracy, consistency and security (see the sketch after this list).
  • Ensure performance of the data warehouse, integration patterns, batch and real‑time jobs, streaming and APIs.
  • Build APIs that enable the data‑driven organisation, optimising the data warehouse for APIs in collaboration with software engineers.
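
As a concrete, purely illustrative example of the in-pipeline data quality checks referenced above, the sketch below validates a hypothetical extract before loading it onward; the table paths, rules and 1% threshold are assumptions, not Nedbank specifics:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("dq-check").getOrCreate()

    # Hypothetical extract from a golden/trusted source.
    df = spark.read.parquet("/mnt/datalake/raw/transactions")

    # Simple quality rules: required key present, amount non-negative.
    valid = df.filter(col("account_id").isNotNull() & (col("amount") >= 0))
    rejects = df.exceptAll(valid)  # rows failing any rule, duplicates preserved

    # Fail fast if the reject rate breaches the assumed 1% threshold.
    total, bad = df.count(), rejects.count()
    if total and bad / total > 0.01:
        raise ValueError(f"Data quality breach: {bad}/{total} rows rejected")

    # Quarantine rejects for investigation; promote clean rows to the curated zone.
    rejects.write.mode("append").parquet("/mnt/datalake/quarantine/transactions")
    valid.write.mode("overwrite").parquet("/mnt/datalake/curated/transactions")

In practice such rules would usually live in a shared validation framework driven by the orchestration layer (for example Azure Data Factory or Databricks jobs) rather than inline code.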
Essential Qualifications (NQF Level)
  • Matric / Grade 12 / National Senior Certificate
  • Advanced Diplomas/National first degrees
Preferred Qualification
  • Field of Study: BCom, BSc, BEng
Preferred Certifications
  • Cloud (Azure, AWS), DevOps or data engineering certification. Any data science certification (Coursera, Udemy, SAS Data Scientist, Microsoft Data Scientist) will be an added advantage.
Minimum Experience Level
  • Total years of experience: 3 – 6 years
  • Experience designing, building, and maintaining data warehouses and data lakes.
  • Experience with big data technologies such as Hadoop, Spark, and Hive.
  • Experience with programming languages such as Python, Java, and SQL.
  • Experience with relational and NoSQL databases.
  • Experience with cloud computing platforms such as AWS, Azure, and GCP.
  • Experience with data visualization tools.
  • Result‑driven, analytical creative thinker with demonstrated problem‑solving ability.
Technical / Professional Knowledge
  • Cloud data engineering (Azure, AWS, Google)
  • Data warehousing
  • Databases:
    PostgreSQL, MS SQL, IBM DB2, HBase, MongoDB
  • Programming:
    Python, Java, SQL
  • Data analysis and data modelling
  • Data pipelines and ETL tools:
    Ab Initio, Azure Databricks (ADB), Azure Data Factory (ADF), SAS ETL
  • Agile delivery
  • Problem‑solving skills
Behavioural Competencies
  • Decision making
  • Influencing
  • Communication
  • Innovation
  • Technical/Professional knowledge and skills
  • Building partnerships
  • Continuous learning

Contact:
Please contact the Nedbank Recruiting Team at +27 860 555 566.

Seniority level:
Mid‑Senior level

Employment type:
Full‑time

Job function:
Information Technology
