Staff Engineer - ML Platform
Company Description
Since launching in Kuwait in 2004, talabat, the leading on-demand food and Q-commerce app for everyday deliveries, has been offering convenience and reliability to its customers. talabat’s local roots run deep, giving us a real understanding of the needs of the communities we serve in eight countries across the region.
We harness innovative technology and knowledge to simplify everyday life for our customers, optimize operations for our restaurants and local shops, and provide our riders with reliable earning opportunities daily.
Here at talabat, we are building a high-performance culture through an engaged workforce and growing talent density. We're all about keeping it real and making a difference. Our 6,000+ strong talabaty are on an awesome mission to spread positive vibes. We are proud to be a multiple Great Place to Work award winner.
Job Description Summary
As the leading delivery platform in the region, we have a unique responsibility and opportunity to positively impact millions of customers, restaurant partners, and riders. To achieve our mission, we must scale and continuously evolve our machine learning capabilities, including cutting-edge Generative AI (genAI) initiatives. This demands robust, efficient, and scalable ML platforms that empower our teams to rapidly develop, deploy, and operate intelligent systems.
As an ML Platform Engineer, your mission is to design, build, and enhance the infrastructure and tooling that accelerates the development, deployment, and monitoring of traditional ML and genAI models. You'll collaborate closely with data scientists, ML engineers, genAI specialists, and product teams to deliver seamless ML workflows, from experimentation to production serving, ensuring operational excellence across our ML and genAI systems.
Responsibilities
- Design, build, and maintain scalable, reusable, and reliable ML platforms and tooling that support the entire ML lifecycle, including data ingestion, model training, evaluation, deployment, and monitoring for both traditional and generative AI models.
- Develop standardized ML workflows and templates using MLflow and other platforms, enabling rapid experimentation and deployment cycles (see the MLflow sketch after this list).
- Implement robust CI/CD pipelines, Docker containerization, model registries, and experiment tracking to support reproducibility, scalability, and governance in ML and genAI.
- Collaborate closely with genAI experts to integrate and optimize genAI technologies, including transformers, embeddings, vector databases (e.g., Pinecone, Redis, Weaviate), and real-time retrieval-augmented generation (RAG) systems (see the retrieval sketch after this list).
- Automate and streamline ML and genAI model training, inference, deployment, and versioning workflows, ensuring consistency, reliability, and adherence to industry best practices.
- Ensure reliability, observability, and scalability of production ML and genAI workloads by implementing comprehensive monitoring, alerting, and continuous performance evaluation.
- Integrate infrastructure components such as real-time model serving frameworks (e.g., TensorFlow Serving, NVIDIA Triton, Seldon), Kubernetes orchestration, and cloud solutions (AWS/GCP) for robust production environments.
- Drive infrastructure optimization for generative AI use-cases, including efficient inference techniques (batching, caching, quantization), fine-tuning, prompt management, and model updates at scale (see the micro-batching sketch after this list).
- Partner with data engineering, product, infrastructure, and genAI teams to align ML platform initiatives with broader company goals, infrastructure strategy, and innovation roadmap.
- Contribute actively to internal documentation, onboarding, and training programs, promoting platform adoption and continuous improvement.
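To make the MLflow responsibility above concrete, here is a minimal sketch of the kind of standardized training-and-registration workflow such templates would wrap. It assumes a reachable MLflow tracking server with model-registry support and scikit-learn installed; the experiment name and registered model name are illustrative placeholders, not talabat conventions.

```python
# Minimal sketch: standardized MLflow experiment tracking and model registration.
# Assumes an MLflow tracking server is configured (e.g. via MLFLOW_TRACKING_URI)
# and that scikit-learn is available; experiment/model names are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("demand-forecasting")  # hypothetical experiment name

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="baseline-rf"):
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    # Log parameters, metrics, and the model artifact so the run is reproducible
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="demand-forecaster",  # creates a new version in the model registry
    )
```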
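For the RAG item, the sketch below shows only the retrieval step. The in-memory cosine-similarity index is a stand-in for a managed vector database (Pinecone, Redis, Weaviate), and embed() is a hypothetical placeholder for a real embedding model (e.g. a Hugging Face sentence encoder or an embedding API); the documents are made up for illustration.

```python
# Minimal sketch: the retrieval step of a retrieval-augmented generation pipeline.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: a fixed pseudo-random unit vector per text (not semantic)."""
    rng = np.random.default_rng(sum(text.encode()))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

documents = [
    "Riders can update their delivery status in the courier app.",
    "Restaurants manage menus and availability from the partner portal.",
    "Customers can track orders in real time on the talabat app.",
]
index = np.stack([embed(d) for d in documents])  # one row per document

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    scores = index @ embed(query)        # vectors are unit-norm, so dot product = cosine
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

# The retrieved passages would then be concatenated into the prompt sent to the LLM.
print(retrieve("How do customers follow their order?"))
```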
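And for the inference-optimization item, here is a micro-batching sketch: requests arriving within a short window are grouped so the model runs once per batch instead of once per request. Serving frameworks such as NVIDIA Triton or TensorFlow Serving provide this dynamic batching natively; run_model() below is a hypothetical stand-in for a real forward pass, and the batch size and wait window are illustrative.

```python
# Minimal sketch: server-side micro-batching for model inference.
# Requires Python 3.11+ for asyncio.timeout().
import asyncio

MAX_BATCH = 8       # largest batch the model call accepts (illustrative)
MAX_WAIT_MS = 10    # how long to wait for more requests before flushing

queue: asyncio.Queue = asyncio.Queue()

def run_model(batch: list[str]) -> list[str]:
    return [f"prediction for {x}" for x in batch]   # placeholder inference

async def batcher() -> None:
    while True:
        batch = [await queue.get()]                 # block until at least one request
        try:
            async with asyncio.timeout(MAX_WAIT_MS / 1000):
                while len(batch) < MAX_BATCH:
                    batch.append(await queue.get())
        except TimeoutError:
            pass                                    # window elapsed; flush what we have
        inputs = [x for x, _ in batch]
        for (_, fut), out in zip(batch, run_model(inputs)):
            fut.set_result(out)                     # resolve each caller's future

async def predict(x: str) -> str:
    fut = asyncio.get_running_loop().create_future()
    await queue.put((x, fut))
    return await fut

async def main() -> None:
    asyncio.create_task(batcher())
    results = await asyncio.gather(*(predict(f"order-{i}") for i in range(20)))
    print(results)

asyncio.run(main())
```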
Qualifications
- Strong software engineering background with experience in building distributed systems or platforms designed for machine learning and AI workloads.
- Expert-level proficiency in Python and familiarity with ML frameworks (TensorFlow, PyTorch), infrastructure tooling (MLflow, Kubeflow, Ray), and popular APIs (Hugging Face, OpenAI, LangChain).
- Experience implementing modern MLOps practices, including model lifecycle management, CI/CD,…