Research Scientist, Interpretability
Listed on 2026-03-01
Engineering
AI Engineer, Research Scientist, Systems Engineer
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role:
When you see what modern language models are capable of, do you wonder, "How do these things work? How can we trust them?"
The Interpretability team at Anthropic is working to reverse-engineer how trained models work because we believe that a mechanistic understanding is the most robust way to make advanced systems safe. We’re looking for researchers and engineers to join our efforts.
People mean many different things by "interpretability". We're focused on mechanistic interpretability, which aims to discover how neural network parameters map to meaningful algorithms. Some useful analogies might be to think of us as trying to do "biology" or "neuroscience" of neural networks, or as treating neural networks as binary computer programs we're trying to "reverse engineer".
We aim to create a solid foundation for mechanistically understanding neural networks and making them safe (see our vision post). In the short term, we have focused on resolving the issue of "superposition" (see Toy Models of Superposition, Superposition, Memorization, and Double Descent, and our May 2023 update), which causes the computational units of the models, like neurons and attention heads, to be individually uninterpretable, and on finding ways to decompose models into more interpretable components.
Our recent work finding millions of features on Sonnet, one of our production language models, represents progress in this direction. This is a stepping stone towards our overall goal of mechanistically understanding neural networks.
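For readers less familiar with this line of work, the sketch below illustrates the general idea behind decomposing model activations into features via a sparse autoencoder (dictionary learning). It is a minimal illustration only, not Anthropic's actual training setup; all sizes, names, and hyperparameters here are assumptions chosen for clarity.

```python
# Minimal sketch of dictionary learning with a sparse autoencoder:
# decompose model activations into an overcomplete set of features,
# each of which is hopefully more interpretable than a raw neuron.
# Dimensions and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)   # activation -> feature coefficients
        self.decoder = nn.Linear(d_features, d_model)   # feature coefficients -> reconstruction

    def forward(self, acts: torch.Tensor):
        features = torch.relu(self.encoder(acts))       # non-negative, sparse feature activations
        recon = self.decoder(features)
        return recon, features

def loss_fn(acts, recon, features, l1_coeff=1e-3):
    # Reconstruction error plus an L1 penalty that encourages sparsity,
    # so each activation vector is explained by only a few features.
    recon_loss = (recon - acts).pow(2).mean()
    sparsity_loss = features.abs().mean()
    return recon_loss + l1_coeff * sparsity_loss

# Usage: collect activations from a language model into `acts`
# (shape [batch, d_model]), then train the autoencoder on them.
sae = SparseAutoencoder(d_model=512, d_features=16384)
acts = torch.randn(64, 512)                              # placeholder activations
recon, features = sae(acts)
loss = loss_fn(acts, recon, features)
```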
A few places to learn more about our work and team at a high level are this introduction to Interpretability from our research lead, Chris Olah; a discussion of our work on the Hard Fork podcast produced by the New York Times; and this blog post (and accompanying video) sharing more about some of the engineering challenges we had to solve to get these results.
Some of our team's notable publications include A Mathematical Framework for Transformer Circuits, In-context Learning and Induction Heads, and Toy Models of Superposition. This work builds on ideas from members' work prior to Anthropic, such as the original circuits thread, Multimodal Neurons, Activation Atlases, and Building Blocks.
If you would be especially excited to work on a project that touches upon the intersection of Interpretability and another team, feel free to note down the specific team(s) you’d be interested in collaborating with.
Responsibilities:
- Develop methods for understanding LLMs by reverse engineering algorithms learned in their weights
- Design and run robust experiments, both quickly in toy scenarios and at scale in large models
- Build…