Measurement Scientist, AI Evaluation Platform
Listed on 2026-02-20
IT/Tech
AI Engineer, Data Scientist
Seattle, Washington, United States · Software and Services
Apple is where individual imaginations gather together, committing to the values that lead to great work. Every new product we build, service we create, or App Store experience we deliver is the result of us making each other’s ideas stronger. That happens because every one of us shares a belief that we can make something wonderful and share it with the world, changing lives for the better.
It’s the diversity of our people and their thinking that inspires the innovation that runs through everything we do. When we bring everybody in, we can do the best work of our lives. Here, you’ll do more than join something — you’ll add something.
Our team, part of Apple Services Engineering, is building the scientific foundation for how AI systems are evaluated across Apple. We are seeking a Measurement Scientist to ensure that our evaluation methods are not just sophisticated, but scientifically valid and trustworthy. In this role, you will apply psychometric theory, validity frameworks, and statistical rigor to establish measurement standards for AI evaluation — ensuring that when we claim an evaluator measures "helpfulness" or "safety," it actually does.
We are looking for individuals across a range of experience levels.
This role uniquely bridges measurement science and cutting-edge AI evaluation. You will develop methods for validating LLM-as-judge evaluators, automated benchmarks, and human evaluations, and you will create statistical tools that help engineers trust their evaluation results. You will work on an interdisciplinary team with ML researchers to solve new problems in AI evaluation. Your work will be both published at top measurement and ML venues and productionized into the evaluation SDK used across Apple.
The successful candidate will have deep expertise in psychometrics and measurement theory, with the ability to apply these principles to novel AI evaluation challenges. You will work collaboratively with ML researchers, platform engineers, and evaluation practitioners to translate measurement science into practical tools that scale across the organization.
- Design validity frameworks for AI evaluation, ensuring that automated metrics, LLM-as-judge systems, and human evaluation protocols measure what they claim to measure across diverse contexts.
- Develop and apply psychometric methods to assess the quality of benchmarks, for example by drawing on frameworks such as item response theory (IRT).
- Create calibration and bias detection systems for automated evaluators, ensuring LLM-as-judge scores are interpretable, consistent, and free from systematic biases.
- Build robust statistical tools that help practitioners plan sample sizes, quantify uncertainty, control error rates, and visualize data.
- Establish measurement standards for evaluator transfer and generalization, including methods to quantify or predict when evaluators will maintain validity across domains, languages, or contexts.
- Validate novel evaluation methods in collaboration with ML researchers, ensuring intelligent search algorithms discover statistically meaningful patterns and synthetic data generation produces representative samples.
- Collaborate with platform engineers to productionize measurement methods into evaluation infrastructure, creating self-service tools for validity checking, reliability testing, and interpretable outputs (report cards, warnings, confidence metrics).
- Publish research at top measurement venues and/or ML conferences (NeurIPS, ICML, ICLR), advancing both measurement science and AI evaluation.
- Collaborate across disciplines with ML researchers developing novel methods, platform engineers building scalable infrastructure, and evaluation practitioners using these tools in production.
- PhD in Psychometrics, Educational Measurement, Quantitative Psychology, Statistics, or equivalent research/work experience.
- Deep expertise in modeling test data (IRT or related methods) and construct validation.
- Strong statistical foundation including experimental design, power…