Manager, Technical Governance Team
Listed on 2025-12-01
Research/Development
About MIRI
The Machine Intelligence Research Institute (MIRI) is a nonprofit based in Berkeley, California, focused on reducing existential risks from the transition to smarter-than-human AI. We've historically been very focused on technical alignment research. Since summer 2023, we have shifted our focus towards communication and AI governance. See our strategy update post for more details.
About the Technical Governance Team
We are looking to build a dynamic and versatile team that can quickly produce a large range of research outputs for the technical governance space. We focus on researching and designing technical aspects of regulations and policy that could lead to safe AI. The team works on:
- Inputs into regulations, requests for comments by policy bodies (e.g. NIST/US AISI, EU, UN)
- Technical research to improve international coordination
- Limitations of current AI safety proposals and policies
- Communicating with and consulting for policymakers and governance organizations
Our previous publications are available on our website if you would like to read them. See our research agenda for the kinds of future projects we are excited about.
About the Role
We are primarily hiring for researchers, but we are also interested in hiring a manager for the team.
In this role, you would manage a team working on the above areas, and have the opportunity to work on these areas directly. See here for the technical governance researcher job ad.
This role could involve the following, but we are open to candidates who want to focus on a subset of these responsibilities.
- External stakeholder management, e.g., building and maintaining relationships with policymakers and AI company employees (the target audience for much of our work)
- Internal stakeholder management, e.g., interfacing with the rest of MIRI, ensuring our work is consistent with broader MIRI goals, and conducting pre-publication review of the team's outputs
- Project management, e.g., tracking existing projects and motivating good work toward deadlines
- People management, e.g., running future hiring rounds and fellowships
Bonus:
- Research contributions, e.g., contributing to object-level work

In the above work, maintain a particular focus on what is needed for solutions to scale to smarter-than-human intelligence, and conduct research on which new challenges may emerge at that stage.
Most of the day-to-day work of the team is a combination of reading, writing, and meetings. Some example activities could include:
- Threat modeling: working out how AI systems could cause large-scale harm, and what actions could be taken to prevent this
- Responding to a US government agency's Request for Comment
- Learning about risk management practices in other industries, and applying these to AI
- Designing and implementing evaluations of AI models, for example to demonstrate failure modes of current policy
- Preparing and presenting informative briefings to policymakers, such as explaining the basics and current state of AI evaluations
- Reading a government or AI developer's AI policy document, and writing a report on its limitations
- Designing new AI policies and standards which address the limitations of current approaches
There are no formal degree requirements to work on the team; however, we are especially excited about applicants who have a strong background in AI safety and have previous experience with, or familiarity working in (or as), one or more of the following:
- Compute governance. Technical knowledge of AI hardware / chips manufacturing and related governance proposals.
- Policy (including AI policy). Experience here could involve writing legislation or white papers, engaging with policymakers, or doing other research in AI policy and governance.
- Strong AI safety generalist. For example, you have produced good AI safety research and have an overview-level understanding of empirical, theoretical, and conceptual approaches, or you otherwise have a demonstrated ability to think clearly and carefully about AI safety.
Bonus:
Research or engineering focused on frontier AI models or the AI tech stack. The role may involve creating or running model evaluations, benchmarking AI…