
Modeling/Interpretability Research Scientist

Job in Stanford, Santa Clara County, California, 94305, USA
Listing for: Stanford University
Seasonal/Temporary, Contract position
Listed on 2026-01-03
Job specializations:
  • IT/Tech
    Data Scientist, Machine Learning/ML Engineer
Job Description & How to Apply Below
Position: Modeling/Interpretability Research Scientist (1 Year Fixed Term)

Summary:

The Modeling/Interpretability Research Scientist role at Stanford University's Enigma Project focuses on advancing mechanistic interpretability methods for large neural networks by developing scalable analysis pipelines and visualization tools. The position involves collaboration between neuroscience and machine learning disciplines to understand brain computation and align AI models with human neural activity. This fixed-term research role requires expertise in machine learning, software engineering, and neural data analysis.

The Enigma Project (enigmaproject.ai) is a research organization based in the Department of Ophthalmology at Stanford University School of Medicine, dedicated to understanding the computational principles of natural intelligence using the tools of artificial intelligence. Leveraging recent advances in neurotechnology and machine learning, this project aims to create a foundation model of the brain, capturing the relationship between perception, cognition, behavior, and the activity dynamics of the brain.

This ambitious initiative promises to offer unprecedented insights into the algorithms of the brain while serving as a key resource for aligning artificial intelligence models with human-like neural representations.

As part of this project, we seek talented individuals specializing in mechanistic interpretability to develop and deploy scalable pipelines for analyzing and interpreting these models, helping us understand how the brain represents and processes information. The role combines rigorous engineering practices with cutting-edge research in model interpretability, working at the intersection of neuroscience and artificial intelligence.

Role & Responsibilities:

• Design and implement scalable pipelines for mechanistic interpretability analyses of large neural networks

• Develop and automate feature visualization techniques to understand neural representations

• Build tools for circuit discovery and geometric analysis of population activity

• Create efficient, reproducible analysis workflows that can handle large-scale neural data

• Collaborate with neuroscientists and ML researchers to design and implement novel interpretability methods

• Maintain and optimize distributed computing infrastructure for running interpretability analyses

• Document and share findings through technical reports and visualization tools
• Other duties may also be assigned

What we offer:

• An environment in which to pursue fundamental research questions in AI and neuroscience

• A vibrant team of engineers and scientists dedicated to a single mission, rooted in academia but inspired by industry-style science

• Access to unique datasets spanning artificial and biological neural networks

• State-of-the-art computing infrastructure

• Competitive salary and benefits package

• Collaborative environment at the intersection of multiple disciplines

• Location at Stanford University with access to its world-class research community

• Strong mentoring in career development

Application:
In addition to applying to the position, please send your CV and one-page interest statement to:

Desired Qualifications:

Key qualifications:
• Master's degree in Computer Science or a related field with 2+ years of relevant industry experience, OR Bachelor's degree with 4+ years of relevant industry experience
• Strong understanding of mechanistic interpretability techniques and research literature
• Expertise in implementing and scaling ML analysis pipelines
• Proficiency in Python and deep learning frameworks (e.g., PyTorch)
• Experience with distributed computing and high-performance computing clusters
• Strong software engineering practices, including version control, testing, and documentation
• Familiarity with visualization tools and techniques for high-dimensional data

Preferred qualifications:

• Experience with feature visualization techniques (e.g., activation maximization, attribution methods)
• Knowledge of geometric methods for analyzing neural population activity
• Familiarity with circuit discovery techniques in neural networks
• Experience with large-scale…
