Machine Learning Scientist
Listed on 2025-12-21
-
IT/Tech
Data Scientist, Machine Learning/ML Engineer, AI Engineer, Artificial Intelligence
About LMArena
LMArena is the open platform for evaluating how AI models perform in the real world. Created by researchers from UC Berkeley’s Sky Lab, our mission is to measure and advance the frontier of AI for real-world use.
Millions of people use LMArena each month to explore how frontier systems perform — and we use our community’s feedback to build transparent, rigorous, and human-centered model evaluations. Leading enterprises and AI labs rely on our evaluations to understand real-world reliability, alignment, and impact. Our leaderboards are the gold standard for AI performance — trusted by leaders across the AI community and shaping the global conversation on model reliability and progress.
We’re a team of researchers, engineers, academics, and builders from places like UC Berkeley, Google, Stanford, DeepMind, and Discord. We seek truth, move fast, and value craftsmanship, curiosity, and impact over hierarchy. We’re building a company where thoughtful, curious people from all backgrounds can do their best work. Everyone on our team is a deep expert in their field — our office radiates excellence, energy, and focus.
About the Role
LMArena is seeking Machine Learning Scientists to help advance how we evaluate and understand AI models. You’ll help design and analyze experiments that uncover what makes models useful, trustworthy, and capable through human preference signals. Your work will contribute to the scientific foundations of understanding AI at scale.
This role is deeply interdisciplinary. You’ll work closely with engineers, product teams, marketing, and the broader research community to develop new methods for comparing models, analyzing preference data, and disentangling performance factors like style, reasoning, and robustness. Your work will inform both the public leaderboard and the tools we provide to model developers.
If you’re excited by open-ended questions, rigorous evaluation, and research that’s grounded in real-world impact, you’ll find a meaningful home here. We’re looking for:
Hands‑on experience training large‑scale models, including reward models, preference models, and fine‑tuning LLMs with methods like RLHF, DPO, and contrastive learning.
Strong foundation in ML and statistics, with a track record of designing novel training objectives, evaluation schemes, or statistical frameworks to improve model reliability and alignment.
Fluent in the full experimental stack, from dataset design and large‑batch training to rigorous evaluation and ablation, with an eye for what scales to production.
Deeply collaborative mindset, working closely with engineers to productionize research insights and iterating with product teams to align modeling goals with user needs.
What You’ll Do
Design and conduct experiments to evaluate AI model behavior across reasoning, style, robustness, and user preference dimensions
Develop new metrics, methodologies, and evaluation protocols that go beyond traditional benchmarks
Analyze large‑scale human voting and interaction data to uncover insights into model performance and user preferences
Collaborate with engineers to implement and scale research findings into production systems
Prototype and test research ideas rapidly, balancing rigor with iteration speed
Author internal reports and external publications that contribute to the broader ML research community
Partner with model providers to shape evaluation questions and support responsible model testing
Contribute to the scientific integrity and transparency of the LMArena leaderboard and tools
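The responsibilities above center on turning large-scale pairwise human votes into model rankings. As an illustrative sketch only (not LMArena's actual pipeline — the function name and vote format here are hypothetical), one standard approach is the Bradley–Terry model, where each model gets a strength p_i such that P(i beats j) = p_i / (p_i + p_j), fitted here with the classic MM update:

```python
from collections import defaultdict

def bradley_terry(votes, iters=200):
    """Fit Bradley-Terry strengths from pairwise votes.

    votes: list of (winner, loser) model-name pairs.
    Returns a dict of strengths normalized to mean 1, using the
    MM (minorization-maximization) update:
        p_i <- W_i / sum_j n_ij / (p_i + p_j)
    """
    wins = defaultdict(float)    # total wins per model
    pairs = defaultdict(float)   # comparison counts per unordered pair
    models = set()
    for w, l in votes:
        wins[w] += 1
        pairs[frozenset((w, l))] += 1
        models.update((w, l))

    p = {m: 1.0 for m in models}  # uniform initial strengths
    for _ in range(iters):
        new = {}
        for m in models:
            denom = 0.0
            for o in models:
                if o == m:
                    continue
                n = pairs[frozenset((m, o))]
                if n:
                    denom += n / (p[m] + p[o])
            new[m] = wins[m] / denom if denom else p[m]
        # normalize so strengths average to 1 (scale is arbitrary)
        total = sum(new.values())
        p = {m: v * len(models) / total for m, v in new.items()}
    return p
```

For example, feeding in votes where model A beats B and C more often than it loses yields p["A"] > p["B"] > p["C"]; a log transform of these strengths gives the familiar Elo-style rating scale.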
Qualifications
PhD or equivalent research experience in Machine Learning, Natural Language Processing, Statistics, or a related field
Strong understanding of LLMs and modern deep learning architectures (e.g., Transformers, diffusion models, reinforcement learning with human feedback)
Proficiency in Python and ML research libraries such as PyTorch, JAX, or TensorFlow
Demonstrated ability to design and analyze experiments with statistical rigor
Experience publishing research or working on open‑source projects in ML, NLP, or AI evaluation
Comfortable working with real‑world usage data and designing metrics beyond standard…