
Security Engineer, Offensive Security

Remote / Online - candidates ideally in San Francisco, San Francisco County, California, 94199, USA
Listing for: Aisafety
Remote/work-from-home position
Listed on 2026-03-01
Job specializations:
  • IT/Tech: Cybersecurity, AI Engineer, Systems Engineer
Salary/Wage Range: USD 100,000 to 125,000 per year

Job Description & How to Apply Below

About Anthropic

Remote-Friendly (Travel-Required) | San Francisco, CA | Seattle, WA

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the Team

The Security Engineering team’s mission is to safeguard our AI systems and maintain the trust of our users and society. Whether we’re developing critical security infrastructure, building secure development practices, or partnering with our research and product teams, we are committed to operating as a world-class security organization and keeping the safety and trust of our users at the forefront of everything we do.

What You’ll Do:
  • Conduct red and purple team engagements simulating advanced threat actors across our cloud infrastructure, endpoints, and bare-metal deployments.
  • Penetration-test specific, high-value deployments.
  • Contribute to AI-assisted security testing tooling and workflows.
  • Work cross-functionally with other security and engineering teams, particularly on AI-specific attack scenarios.
  • Document and present findings to technical and executive audiences, translating attack narratives into actionable risk insights that inform security roadmaps.
Who You Are:
  • 5+ years of hands-on experience in red teaming and offensive security operations
  • Deep expertise in at least two of: macOS security, Linux Security, Cloud security (GCP/AWS/Azure), Kubernetes, CI/CD pipelines
  • Track record of discovering novel attack vectors and chaining vulnerabilities creatively
  • Experience conducting adversarial simulations against well-defended environments
  • Strong engineering skills (Python, Go, or similar)
  • Ability to write clear findings that drive action, helping teams understand risk and prioritize fixes
  • Collaborative approach, working closely with the blue team
Strong candidates may also have experience with:
  • Prior work at organizations with state actor threat models
  • Interest in AI safety and how security engineering contributes to responsible AI development
  • Background testing AI/ML systems or agentic workflows
  • Familiarity with detection engineering and SIEM/EDR platforms from the defensive side
  • Experience with data center security or hardware-based attacks

Final date to receive applications: None. Applications will be received on a rolling basis.

Logistics

Education requirements: We require at least a Bachelor’s degree in a related field or equivalent experience.

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas. However, we aren’t able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. We think AI systems with social and ethical implications make representation important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from emails at  In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate recruiters will never ask for money, fees, or banking information before your first day.

If unsure about a communication, don’t click any links—visit  directly for confirmed openings.

How we’re different

We believe that the highest-impact AI research will be big science. We work as a single cohesive team on a few large-scale research efforts and value impact over narrow results. We aim to be an empirical science, with emphasis on collaboration and communication skills.
