ML Engineer (MLOps) - Remote
Help to revolutionise a fast-moving industry with cutting-edge AI:
Our client is a globally recognised brand with deep-rooted expertise. They are heavily invested in leveraging AI to combine their domain expertise with state-of-the-art techniques, solidifying their position as a leader in the field. You'll join a global team with a distributed set of skills including Research, Applied AI and Engineering.
They are seeking MLOps Engineers to help architect the future of communication through AI. This isn't just another engineering role – it's an opportunity to pioneer systems that transform how companies connect with their customers.
What You'll Be Doing
You'll be designing and optimising production-grade MLOps pipelines that bring cutting-edge Generative AI and LLMs from experimentation to real-world impact. Your expertise will directly influence how some of the world's leading brands enhance their strategies.
What You'll Build
- Production-Ready GenAI Infrastructure: Design and deploy scalable MLOps pipelines specifically optimized for GenAI applications and large language models
- State-of-the-Art Model Deployment: Implement and fine-tune advanced models like GPT and similar architectures in production environments
- Hybrid AI Systems: Create solutions that integrate traditional ML techniques with cutting-edge LLMs to deliver powerful insights
- Automated MLOps Workflows: Build robust CI/CD pipelines for ML, enabling seamless testing, validation, and deployment
- Cost-Efficient Cloud Infrastructure: Optimize cloud resources to maximize performance while maintaining cost efficiency
- Governance and Versioning Systems: Establish best practices for model versioning, reproducibility, and responsible AI deployment
- Integrated Data Pipelines: Utilize Databricks to construct and manage sophisticated data and ML pipelines
- Monitoring Ecosystems: Implement comprehensive monitoring systems to ensure reliability and performance
What You'll Bring
- 4+ years of hands-on experience in MLOps, DevOps, or ML Engineering roles
- Experience with MLflow, DVC, Prometheus, and Grafana for versioning and monitoring
- Proven expertise deploying and scaling Generative AI models (GPT, Stable Diffusion, BERT)
- Proficiency with Python and ML frameworks (TensorFlow, PyTorch, Hugging Face)
- Strong cloud platform experience (AWS, GCP, Azure) and managed AI/ML services
- Practical experience with Docker, Kubernetes, and container orchestration
- Databricks expertise, including ML workflows and data pipeline integration
- Bachelor's or Master's degree in Computer Science, Engineering, or related field (or equivalent experience)
- Fluency in written and spoken English
Who You Are
- You're a builder at heart – someone who loves creating scalable, production-ready systems
- You balance technical excellence with pragmatic delivery
- You're excited about pushing boundaries in GenAI and LLM technologies
- You can communicate complex concepts effectively to diverse stakeholders
- You enjoy mentoring junior team members and elevating the entire technical organization
You’ll be working with a modern data stack designed to process large-scale information, automate analysis pipelines, and integrate seamlessly with AI-driven workflows. This is your chance to make a significant impact on projects that push the boundaries of AI-powered insights and automation in industry.
Location: This is a remote opportunity, but the successful candidate must reside permanently in Spain and not require sponsorship.