
Senior Quantitative Analyst

Job in Johannesburg, 2000, South Africa
Listing for: Nedbank
Full Time position
Listed on 2026-02-14
Job specializations:
  • Software Development
    Data Engineer
Job Description & How to Apply Below

Job Purpose

To design, build, and maintain the core data infrastructure that powers Nedbank’s modern credit and customer analytics ecosystem. The role is responsible for developing scalable data pipelines, architecting and operationalising reusable feature stores, and implementing robust metadata and data governance frameworks.

The incumbent will ensure that high‑quality, well‑documented, production‑ready data is consistently available for modelling teams, real‑time systems, and downstream analytics. This role provides the foundational data engineering capability that enables model developers, MLOps engineers, and analytics teams to deliver models faster, safer, and with higher predictive integrity. The incumbent drives the standardisation of data patterns, reusable assets, and governed feature frameworks, which materially reduce model cycle time and operational risk.

Job Requisition Details

REQ#142178

Location:

Johannesburg, Gauteng
Closing Date: 24 February 2026
Talent Acquisition:
Bongiwe Mchunu

Job Family

Investment Banking

Career Stream

Quantitative

Leadership Pipeline

Manage Self:
Professional

Job Responsibilities
  • Design, build, and optimise large‑scale data pipelines using Python, Spark, and distributed compute frameworks to support high‑throughput modelling and analytics workloads.
  • Architect, implement, and maintain the enterprise feature store, ensuring consistency, versioning, reusability, and governance across modelling teams and real‑time scoring environments.
  • Establish and maintain metadata management frameworks, covering data lineage, data contracts, schema evolution, feature definitions, and end‑to‑end traceability.
  • Develop automated data ingestion and transformation workflows, ensuring repeatability, performance optimisation, and alignment with modern engineering practices.
  • Implement data quality monitoring, validation rules, and observability tooling (e.g., schema checks, drift detection, pipeline health metrics) to ensure reliable, production‑grade data.
  • Collaborate with model developers, MLOps engineers, and platform teams to enable seamless model training, deployment, and monitoring via well‑engineered data foundations.
  • Contribute to the design and evolution of the modern modelling ecosystem, including feature store architecture, metadata strategy, and standardised data patterns.
  • Ensure data governance and compliance through documentation, automated controls, and integration with internal regulatory and risk frameworks.
  • Drive automation and simplification across data preparation processes to reduce modelling cycle times and improve platform scalability.
  • Conduct performance tuning and optimisation of Spark pipelines, storage formats, and distributed compute resources.
  • Participate in code reviews, design discussions, and engineering best‑practice forums, promoting clean, modular, and maintainable data engineering standards.
  • Support junior team members through mentoring, technical guidance, and knowledge sharing, contributing to uplift across the broader modelling community.
  • Conduct horizon scanning on emerging data engineering, metadata, and feature store technologies.
  • Prototype new data frameworks, storage formats, and distributed processing techniques.
Professional Exposure

The Ideal Candidate Will Have Practical, Hands‑on Exposure To
  • Software Engineering / Coding Fundamentals:
    Solid grounding in computer science/coding principles, including Object‑Oriented Programming (OOP), design patterns, data structures, and algorithmic complexity (Big‑O).
  • Distributed Computing & Big Data:
    Working with large-scale data processing systems and distributed environments.
  • Modern DevOps Integration:
    Active usage of CI/CD pipelines, version control (Git), and containerisation technologies (Docker/Kubernetes) within a microservices or API‑driven architecture.
  • Deep Learning & Optimisation:
    Proficiency with ML frameworks (e.g., TensorFlow, PyTorch, Scikit‑learn) and application of continuous/discrete mathematical optimisation techniques.
  • Model Governance:
    Productionising models with rigorous experiment tracking, versioning, and governance using tools such as MLflow.
Professional Knowledge

Core Programming & Engineering
  • Expert…
Position Requirements
10+ Years work experience