Senior Platform Engineer, AI Evaluation
Listed on 2025-12-12
IT/Tech
Machine Learning / ML Engineer, AI Engineer
ABOUT KHAN ACADEMY
Khan Academy is a nonprofit with the mission to deliver a free, world‑class education to anyone, anywhere. Our proven learning platform offers free, high‑quality supplemental learning content and practice that cover Pre‑K - 12th grade and early college core academic subjects, focusing on math and science. We have over 181 million registered learners globally and are committed to improving learning outcomes for students worldwide, focusing on learners in historically under‑resourced communities.
OUR COMMUNITY
Our students, teachers, and parents come from all walks of life, and so do we. Our team includes people from academia, traditional/non‑traditional education, big tech companies, and tiny startups. We hire great people from diverse backgrounds and experiences because it makes our company stronger. We value diversity, equity, inclusion, and belonging as necessary to achieve our mission and impact the communities we serve.
We know that transforming education starts in‑house with learning about ourselves and our colleagues. We strive to be world‑class in investing in our people and commit to developing you as a professional.
We’re looking for an AI Platform Engineer to evolve and extend our internal evaluation framework for assessing the quality of our AI‑driven experiences at Khan Academy. This engineer will have worked with enough eval systems to quickly make sense of Khan’s internal eval framework and recognize opportunities for improvement. This is largely a software development role, but domain experience with AI eval is essential for appreciating the hill‑climbing and data science workflows we need to support.
Soft skills will be important for gathering internal requirements, getting buy‑in for changes, and then developing documentation and training materials. You’ll work closely with ML data engineers and platform developers to help internal teams adopt an eval‑driven development process incorporating offline benchmark tests and online experiments.
As a Platform Engineer focused on evaluation, you’ll be expected to:
- Be fluent in the range of offline and online evaluation strategies, and know when to apply each over the development lifecycle
- Have intuitions about how to specify eval pipelines succinctly using declarative syntax (see the sketch after this list)
- Understand the role of stratified datasets and ground‑truth labeling
- Appreciate the range of eval scoring schemes, from human raters to automated LLM‑as‑judge approaches
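To make the "declarative pipeline" and "LLM‑as‑judge" points above concrete, here is a minimal sketch in Go of what declaring and running an offline eval can look like. This is not Khan Academy's internal framework; the EvalSpec type, the exactMatch scorer, and the benchmark name are all hypothetical, and a real setup would likely replace the scorer with a call to a judge model and a rubric.

package main

import (
	"context"
	"fmt"
	"strings"
)

// EvalSpec is a hypothetical declarative description of an offline benchmark:
// a named dataset of prompt/reference pairs plus the scorer to apply.
type EvalSpec struct {
	Name    string
	Dataset []Example
	Scorer  Scorer
}

type Example struct {
	Prompt    string
	Reference string
}

// Scorer returns 1.0 for a passing response and 0.0 otherwise.
type Scorer func(ctx context.Context, ex Example, response string) float64

// exactMatch is a trivial automated scorer; an LLM-as-judge scorer would
// instead send the example, the response, and a rubric to a judge model
// and parse its verdict into a score.
func exactMatch(_ context.Context, ex Example, response string) float64 {
	if strings.TrimSpace(response) == strings.TrimSpace(ex.Reference) {
		return 1.0
	}
	return 0.0
}

// run executes the spec against a system-under-test and reports the mean score.
func run(ctx context.Context, spec EvalSpec, sut func(string) string) float64 {
	var total float64
	for _, ex := range spec.Dataset {
		total += spec.Scorer(ctx, ex, sut(ex.Prompt))
	}
	return total / float64(len(spec.Dataset))
}

func main() {
	spec := EvalSpec{
		Name: "arithmetic-smoke-test", // hypothetical benchmark name
		Dataset: []Example{
			{Prompt: "2+2", Reference: "4"},
			{Prompt: "3*3", Reference: "9"},
		},
		Scorer: exactMatch,
	}
	// Stand-in for the AI system being evaluated.
	sut := func(prompt string) string {
		answers := map[string]string{"2+2": "4", "3*3": "9"}
		return answers[prompt]
	}
	fmt.Printf("%s pass rate: %.2f\n", spec.Name, run(context.Background(), spec, sut))
}

Keeping the spec declarative (what to run, over which dataset, scored how) and the runner generic is what lets the same framework support both offline benchmark hill‑climbing and online experiment analysis.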
We are a remote‑first organization and we strive to build using technology that is best suited to solving problems for our learners. Currently, we build with Go, GraphQL, JavaScript, React & React Native, and Redux, and we adopt new technologies like LLMs when they’ll help us better achieve our goals. At Khan, one of our values is “Cultivate Learning Mindsets”, so it’s important to us that we work with every engineer to match the right opportunity to the right individual, ensuring each engineer is operating at their “learning edge”.
Currently, we are focused on providing equitable solutions to historically under‑resourced communities of learners and teachers, guided by our Engineering Principles. You can read about our latest work on our Engineering Blog. A few highlights:
- Incremental Rewrites with GraphQL
- Our Transition to React Native
- Go + Services = One Goliath Project
- How Engineering Principles Can Help You Scale
- How to upgrade hundreds of React components without breaking production
Required
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field
- 5 years of software engineering experience, with 2+ of those years working on the evaluation of generative AI systems
- Strong programming skills in Go, Python, SQL, and at least one data pipeline framework (e.g., Airflow, Dagster, Prefect)
- Familiarity with the architecture of large language models and their industry‑standard APIs
Preferred
- Experience with labeling platforms (e.g., Label Studio, Scale AI, Toloka) and human‑in‑the‑loop concerns such as rubric development and inter‑rater agreement
- Exposure to MLOps practices such as model registry, feature store, or continuous…