
Staff Engineer - ML Platform

Job in Dubai, UAE
Listing for: talabat
Full Time position
Listed on 2025-11-23
Job specializations:
  • IT/Tech
    AI Engineer, Machine Learning / ML Engineer, Cloud Computing, Data Scientist
Salary/Wage Range or Industry Benchmark: AED 120,000 - 200,000 per year
Job Description & How to Apply Below

Job Description Summary

As the leading delivery platform in the region, we have a unique responsibility and opportunity to positively impact millions of customers, restaurant partners, and riders. To achieve our mission, we must scale and continuously evolve our machine learning capabilities, including cutting-edge Generative AI (genAI) initiatives. This demands robust, efficient, and scalable ML platforms that empower our teams to rapidly develop, deploy, and operate intelligent systems.

As an ML Platform Engineer, your mission is to design, build, and enhance the infrastructure and tooling that accelerate the development, deployment, and monitoring of traditional ML and genAI models. You'll collaborate closely with data scientists, ML engineers, genAI specialists, and product teams to deliver seamless ML workflows—from experimentation to production serving—ensuring operational excellence across our ML and genAI systems.

Responsibilities

Design, build, and maintain scalable, reusable, and reliable ML platforms and tooling that support the entire ML lifecycle, including data ingestion, model training, evaluation, deployment, and monitoring for both traditional and generative AI models.

Develop standardized ML workflows and templates using MLflow and other platforms, enabling rapid experimentation and deployment cycles.

Implement robust CI/CD pipelines, Docker containerization, model registries, and experiment tracking to support reproducibility, scalability, and governance in ML and genAI.

Collaborate closely with genAI experts to integrate and optimize genAI technologies, including transformers, embeddings, vector databases (e.g., Pinecone, Redis, Weaviate), and real-time retrieval-augmented generation (RAG) systems.
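To illustrate the retrieval step at the heart of the RAG systems this bullet describes, here is a minimal, hypothetical sketch: documents are stored as embedding vectors and ranked by cosine similarity against a query embedding. In production the vectors would come from a transformer encoder and live in a vector database such as Pinecone, Redis, or Weaviate (named in the posting); the document names and numbers below are made up for illustration.

```python
import math

# Toy document "embeddings" -- illustrative vectors, not real model output.
DOC_STORE = {
    "doc_riders": [0.9, 0.1, 0.0],
    "doc_menus": [0.2, 0.8, 0.1],
    "doc_refunds": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, k=2):
    """Return the k document ids most similar to the query embedding."""
    ranked = sorted(
        DOC_STORE.items(),
        key=lambda kv: cosine(query_vec, kv[1]),
        reverse=True,
    )
    return [doc_id for doc_id, _ in ranked[:k]]

# A query embedding close to the "riders" document ranks it first.
print(retrieve([0.85, 0.15, 0.05]))  # ['doc_riders', 'doc_menus']
```

The retrieved documents would then be stuffed into the model's prompt as context, which is the "augmented generation" half of RAG.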

Automate and streamline ML and genAI model training, inference, deployment, and versioning workflows, ensuring consistency, reliability, and adherence to industry best practices.

Ensure reliability, observability, and scalability of production ML and genAI workloads by implementing comprehensive monitoring, alerting, and continuous performance evaluation.
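As a hypothetical sketch of the monitoring-and-alerting idea above, the class below tracks a rolling window of inference latencies and flags when the window mean crosses a threshold. A real deployment would export such metrics to an observability stack (e.g., Prometheus/Grafana) rather than hold them in process memory; the class name and thresholds are assumptions for illustration.

```python
from collections import deque
from statistics import mean

class LatencyMonitor:
    """Rolling-window latency monitor with a simple alert threshold."""

    def __init__(self, window=100, threshold_ms=250.0):
        # deque(maxlen=...) keeps only the most recent `window` samples.
        self.samples = deque(maxlen=window)
        self.threshold_ms = threshold_ms

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def should_alert(self):
        # Alert when the mean latency over the window exceeds the threshold.
        return bool(self.samples) and mean(self.samples) > self.threshold_ms

mon = LatencyMonitor(window=3, threshold_ms=100.0)
for latency in (40.0, 90.0, 300.0):
    mon.record(latency)
print(mon.should_alert())  # True: mean of last 3 samples is ~143 ms
```

The same pattern extends to error rates, token throughput, or prediction-drift statistics for genAI workloads.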

Integrate infrastructure components such as real-time model serving frameworks (e.g., TensorFlow Serving, NVIDIA Triton, Seldon), Kubernetes orchestration, and cloud solutions (AWS/GCP) for robust production environments.

Drive infrastructure optimization for generative AI use-cases, including efficient inference techniques (batching, caching, quantization), fine-tuning, prompt management, and model updates at scale.

Partner with data engineering, product, infrastructure, and genAI teams to align ML platform initiatives with broader company goals, infrastructure strategy, and innovation roadmap.

Contribute actively to internal documentation, onboarding, and training programs, promoting platform adoption and continuous improvement.

Requirements

Technical Experience

Strong software engineering background with experience in building distributed systems or platforms designed for machine learning and AI workloads.

Expert-level proficiency in Python and familiarity with ML frameworks (TensorFlow, PyTorch), infrastructure tooling (MLflow, Kubeflow, Ray), and popular APIs (Hugging Face, OpenAI, LangChain).

Experience implementing modern MLOps practices, including model lifecycle management, CI/CD, Docker, Kubernetes, model registries, and infrastructure-as-code tools (Terraform, Helm).

Demonstrated experience working with cloud infrastructure, ideally AWS or GCP, including Kubernetes clusters (GKE/EKS), serverless architectures, and managed ML services (e.g., Vertex AI, SageMaker).

Proven experience with generative AI technologies: transformers, embeddings, prompt engineering strategies, fine-tuning vs. prompt-tuning, vector databases, and retrieval-augmented generation (RAG) systems.

Experience designing and maintaining real-time inference pipelines, including integrations with feature stores, streaming data platforms (Kafka, Kinesis), and observability platforms.

Familiarity with SQL and data warehouse modeling; capable of managing complex data queries, joins, aggregations, and transformations.
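The kind of join-plus-aggregation query this requirement refers to can be shown with Python's built-in `sqlite3` module. The table names and values below are illustrative, not from any real warehouse.

```python
import sqlite3

# In-memory database with two small illustrative tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, city TEXT, amount REAL);
    CREATE TABLE riders (order_id INTEGER, rider TEXT);
    INSERT INTO orders VALUES (1, 'Dubai', 50.0), (2, 'Dubai', 30.0), (3, 'Abu Dhabi', 20.0);
    INSERT INTO riders VALUES (1, 'r1'), (2, 'r2'), (3, 'r1');
""")

# Join orders to rider assignments, then aggregate order value per city.
rows = conn.execute("""
    SELECT o.city, COUNT(*) AS n_orders, SUM(o.amount) AS total
    FROM orders o
    JOIN riders r ON r.order_id = o.id
    GROUP BY o.city
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('Dubai', 2, 80.0), ('Abu Dhabi', 1, 20.0)]
```

The same join/group-by/aggregate pattern carries over directly to warehouse engines such as BigQuery or Redshift.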

Solid understanding of ML monitoring, including…
