Research Scientist
Listed on 2026-02-23
Research/Development
Research Scientist, Data Scientist, Artificial Intelligence
About Us
FAR.AI is a non-profit AI research institute dedicated to ensuring advanced AI is safe and beneficial for everyone. Our mission is to facilitate breakthrough AI safety research, advance global understanding of AI risks and solutions, and foster a coordinated global response. Founded in July 2022, we have grown quickly to 40+ staff. We are uniquely positioned to conduct technical research at a scale surpassing academia and leveraging the research freedom of being a non-profit.
Our work is published at top conferences (e.g. NeurIPS, ICLR, ICML) and cited by leading media outlets such as the Financial Times, Nature News and MIT Technology Review.
FAR.AI uses three prongs working together to improve AI safety:
- FAR.Research - we conduct cutting-edge AI safety research in-house and dispense grants to support the wider research community.
- FAR.Futures - we bring together key policy makers, researchers and companies to drive change, such as the San Diego Alignment Workshop or the Guaranteed Safe AI research roadmap written with Yoshua Bengio.
- FAR.Labs - we host a co-working space in Berkeley to help incubate other AI safety organizations, currently housing 40 members.
We explore promising research directions in AI safety and scale up only those showing a high potential for impact. Once the core research problems are solved, we work to scale the solutions into a minimum viable prototype, demonstrating their validity to AI companies and governments to drive adoption.
We are aiming to rapidly grow our team, especially in the following areas, at varying levels of seniority:
- Evals and red-teaming: Conducting pre- and post-release adversarial evaluations of frontier models (e.g. Claude 4 Opus, ChatGPT Agent, GPT-5); developing novel attacks to support this work; and exploring new threat models (e.g. persuasion, tampering risks).
- Infrastructure: Maintaining GPU compute infrastructure to support experiments with open-weight models, and developing new tooling to allow our research teams to scale their fine-tuning and post-training workflows to frontier open-weight models.
We are also seeking more senior candidates in the following research areas:
- Mitigating AI deception: Studying when lie detectors induce honesty or evasion, and developing mitigations for deception and sandbagging.
- Adversarial Robustness: Working to rigorously solve these security problems by building a science of security and robustness for AI, from demonstrating that superhuman systems can be vulnerable, to scaling laws for robustness and jailbreaking constitutional classifiers.
- Mechanistic Interpretability: Finding issues with Sparse Autoencoders, probing deception using Among Us, understanding learned planning in Sokoban, and interpretable data attribution.
FAR.AI is one of the largest independent AI safety research institutes and is rapidly growing, with the goal of diversifying and deepening our research portfolio. If you are a senior researcher with a strong vision for a new research direction, we would welcome the opportunity to hear your pitch.
About the Role
We organize our team as Members of Technical Staff, with significant overlap between scientist and engineer roles. As a scientist, you will take ownership of and accelerate existing AI alignment research agendas. You can publish research findings broadly and engage with the AI alignment community. If you are an experienced research scientist, then we would be excited to incubate your agenda at FAR using our existing infrastructure and world-class team.
You will receive engineering mentorship via code review, pair programming and regular 1-to-1s. Alongside the engineers, you will be involved in developing scalable implementations of machine learning algorithms and using them to run scientific experiments.
You are encouraged to develop your research taste, proposing novel directions and joining a research pod which suits your interests. You are welcome to take time to study and to attend conferences free of charge. Our technical team is organized into research pods to provide continuity of organizational structure, while each pod can pivot through varied research projects.
Beyond FAR.AI, you can work with national AI safety institutes, frontier model developers and top academics.
About You
We are excited by unconventional backgrounds. You may have the following:
- New and under-explored AI alignment idea(s).
- Experience leading and/or playing a senior role in research projects related to machine learning.
- Ability to effectively communicate novel methods and solutions to both technical and non-technical audiences.
- PhD or several years of research experience in computer science, artificial intelligence, machine learning or statistics.
If based in the USA, you will be an employee of FAR.AI, a 501(c)(3) research non-profit. Outside the USA, you will be an employee of an Employer of Record (EoR) organization on behalf of FAR.AI.
- Location: Both remote (global) and in-person (Berkeley, CA) are possible. We sponsor visas for…