Applied AI, Evaluation Engineer
Job in
Paris, France
Listed on 2026-02-17
Listing for:
Mistral AI
Full Time position
Job specializations:
- IT/Tech: AI Engineer, Machine Learning/ML Engineer
Job Description
At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.
We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work.
We are a dynamic, collaborative team passionate about AI and its potential to transform society.
Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed across France, the USA, the UK, Germany, and Singapore. We are creative, low-ego and team-spirited.
Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact.
About The Job
The Applied AI team is Mistral's customer-facing technical organization. We work directly with enterprise clients from pre-sales through implementation to deploy cutting-edge AI solutions that deliver measurable business impact. Our team combines deep ML expertise with strong customer engagement skills, operating like startup CTOs who own end-to-end project execution.
However, the AI graveyard is full of great ideas nobody could measure and prototypes that never made it to production. As our first Evaluation Engineer, you'll design the methodology, build the infrastructure, and define what "ready for production" means across verticals and use cases.
You will design and implement evaluation systems that help our customers understand model performance across their specific use cases, build robust evaluation infrastructure, and work closely with both research and customer-facing teams.
Research builds evals for frontier capabilities, but customers don't care about MMLU scores. In Applied AI, we need evals and frameworks built for customer reality: domain-specific, risk-aware, production-grade. The kind that tell you whether your medical summarization model will hallucinate drug interactions, or whether your legal assistant will invent case citations.
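For illustration, a minimal sketch of what one such domain-specific, risk-aware check could look like, assuming a hypothetical drug lexicon and plain string matching rather than any of Mistral's actual tooling:

    # Naive hallucination check for a medical summarization eval:
    # flag drugs named in the model's summary that never appear in the source note.
    KNOWN_DRUGS = {"warfarin", "ibuprofen", "metformin", "aspirin"}  # hypothetical lexicon

    def unsupported_drug_mentions(source: str, summary: str) -> set:
        source_tokens = {t.strip(".,;:").lower() for t in source.split()}
        summary_tokens = {t.strip(".,;:").lower() for t in summary.split()}
        return (summary_tokens & KNOWN_DRUGS) - source_tokens

    source = "Patient takes metformin daily and aspirin as needed."
    summary = "Patient is on metformin, aspirin, and warfarin."
    print(unsupported_drug_mentions(source, summary))  # -> {'warfarin'}

A production-grade version would rest on proper entity extraction and a customer-defined risk taxonomy, but this is the shape of signal the role is about producing.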
This role sits at the intersection of research, engineering, and solutions: you will play a critical cross-functional role in measuring, understanding, and improving the capabilities of our models for our enterprise customers.
What you will do
* Design and implement comprehensive evaluation frameworks to measure LLM capabilities across diverse customer use cases, including text generation, reasoning, code, and domain-specific applications
* Build scalable evaluation infrastructure and pipelines that enable rapid, reproducible assessment of model performance
* Develop novel evaluation methodologies to assess emerging capabilities and verticalized use cases (cybersecurity, finance, healthcare, etc.), and enable the Solutions teams (Deployment Strategists and Applied AI) on these topics.
* Create custom evaluation suites tailored to enterprise customers' specific needs, working closely with them to understand their requirements and success criteria
* Collaborate with research teams to translate evaluation insights into model improvements and training decisions
* Partner with product teams to continuously improve our evaluation tooling based on customer feedback
How We Work in Applied AI
* We care about people and outputs.
* What matters is what you ship, not the time you spend on it
* Bureaucracy is where urgency goes to vanish. You talk to whoever you need to talk to. The best idea wins, whether it comes from a principal engineer or someone in their first week.
* Always ask why. The best solutions come from deep understanding, not from copying what worked before
* We say what we mean. Feedback is direct, timely, and given because we care.
* No politics. Low ego, high standards.
* We embrace an unstructured environment and find joy in it.
About you
* You are fluent in English
* 3+ years of experience in ML evaluation or benchmarking for LLMs or agentic systems
* You have proven experience in AI or machine learning product…