
Research Scientist, Frontier Red Team; Emerging Risks San Francisco, CA

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: Anthropic
Full Time position
Listed on 2026-02-05
Job specializations:
  • Research/Development
    Data Scientist
  • IT/Tech
    Data Scientist, AI Engineer
Salary/Wage Range or Industry Benchmark: $350,000 USD per year

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the Team

The Frontier Red Team (FRT) is a technical research team within Anthropic’s Policy organization. Our goal is to make the entire world safer in this era of advanced AI by understanding what these systems can do and building the defenses that matter.

In 2026, we’re focused on researching and ensuring safety with self-improving, highly autonomous AI systems—especially ones with cyber-physical capabilities. See our previous related work on cyberdefense, robotics, and Project Vend. This is early‑stage, high‑conviction research with the potential for outsized impact.

About the Role

This Research Scientist will focus on scoping, evaluating, red teaming, and defending against societal risks caused by advanced models that emerge over the next few years. Powerful AI models may have major implications for national security, running a business, power and privacy, infrastructure, social relationships, and more. These risks may arise from the increasing integration of powerful models into our economy and social sphere.

As an independent Research Scientist, you’ll build a research program to understand these Emerging Risks. You’ll build evals, run experiments, and look for real‑world signals to understand how these may come about. You’ll turn this into insights we can use to steer the development and use of the technology more positively. Compared to the team's other focuses, you will focus less on acute catastrophic risks and more on risks that emerge from increasing integration into our world.

What You’ll Do:
  • Design and run research experiments to understand the emerging risks models may create.
  • Produce internal & external artifacts (research, products, demos, dashboards, tools) that communicate the state of model capabilities.
  • Shape product, safeguards, and training decisions based on what you find.
  • Work closely with Societal Impacts (SI) and Safeguards teams.
Sample Projects:
  • Build, run, and study an autonomous AI‑powered business (e.g., Project Vend), then identify the growth of real autonomous businesses in the wild using Clio and other tools.
  • Build a benchmark for a model’s national security capabilities.
  • Red team unsafeguarded models for their potential to be used for control.
  • Identify indicators of models being used to scale movements that rely on social control.
You May Be a Good Fit If You:
  • Are a fast experimentalist who ships research quickly.
  • Have experience creating a research program from scratch.
  • Are thoughtful about humanity’s adaptation to powerful AI systems in our economy and society.
  • Can communicate thoughtfully in written and spoken form with a wide range of stakeholders.
  • Can scope ambiguous research questions into tractable first projects.
Strong candidates may also have experience with:
  • Building & maintaining large, foundational infrastructure.
  • Building simple interfaces that allow non‑technical collaborators to evaluate AI systems.
  • Working with and prioritizing requests from a wide variety of stakeholders, including research and product teams.
Compensation Range

$350,000 – $850,000 USD

Logistics

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location‑based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.

Safety notice: To protect yourself from potential scams, remember that Anthropic recruiters only contact you from  email addresses. Legitimate recruiters never ask for money, fees, or banking information before your first day. If you’re ever unsure about a communication, don’t click any links—visit  directly for confirmed position openings.

Equal Employment Opportunity

As set forth in Anthropic’s Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.