Research Scientist, Frontier Red Team; Emerging Risks
San Francisco, CA
Listed on 2026-02-05
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the Team
The Frontier Red Team (FRT) is a technical research team within Anthropic’s Policy organization. Our goal is to make the entire world safer in this era of advanced AI by understanding what these systems can do and building the defenses that matter.
In 2026, we’re focused on researching and ensuring the safety of self-improving, highly autonomous AI systems, especially ones with cyber-physical capabilities. See our previous related work on cyberdefense, robotics, and Project Vend. This is early-stage, high-conviction research with the potential for outsized impact.
About the Role
This Research Scientist will focus on scoping, evaluating, red teaming, and defending against societal risks posed by the advanced models that emerge over the next few years. Powerful AI models may have major implications for national security, running a business, power and privacy, infrastructure, social relationships, and more. These risks may arise from the increasing integration of powerful models into our economy and social sphere.
As an independent Research Scientist, you’ll build a research program to understand these Emerging Risks. You’ll build evals, run experiments, and look for real-world signals to understand how these risks may come about. You’ll turn this work into insights we can use to steer the development and use of the technology in a more positive direction. Compared to the team’s other focuses, you will focus less on acute catastrophic risks and more on risks that emerge from AI’s increasing integration into our world.
What You’ll Do:
- Design and run research experiments to understand the emerging risks models may create.
- Produce internal & external artifacts (research, products, demos, dashboards, tools) that communicate the state of model capabilities.
- Shape product, safeguards, and training decisions based on what you find.
- Work closely with Societal Impacts (SI) and Safeguards teams.
You May Be a Good Fit If You:
- Are a fast experimentalist who ships research quickly.
- Have experience creating a research program from scratch.
- Are thoughtful about humanity’s adaptation to powerful AI systems in our economy and society.
- Can communicate thoughtfully in written and spoken form with a wide range of stakeholders.
- Can scope ambiguous research questions into tractable first projects.
Strong Candidates May Also Have Experience:
- Building & maintaining large, foundational infrastructure.
- Building simple interfaces that allow non‑technical collaborators to evaluate AI systems.
- Working with and prioritizing requests from a wide variety of stakeholders, including research and product teams.
The expected salary range for this position is:
$350,000 – $850,000 USD
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location‑based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every qualification listed.
Safety notice: To protect yourself from potential scams, remember that Anthropic recruiters will only contact you from an official Anthropic email address. Legitimate recruiters never ask for money, fees, or banking information before your first day. If you’re ever unsure about a communication, don’t click any links; visit our careers page directly to confirm open positions.
Equal Employment Opportunity
As set forth in Anthropic’s Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.