×
Register Here to Apply for Jobs or Post Jobs. X

Research Engineer​/Research Scientist - Red Team; Alignment

Job in Greater London, London, Greater London, W1B, England, UK
Listing for: AI Security Institute
Full Time position
Listed on 2026-02-21
Job specializations:
  • Engineering
    AI Engineer
  • Research/Development
    Data Scientist
Salary/Wage Range or Industry Benchmark: 80000 - 100000 GBP Yearly GBP 80000.00 100000.00 YEAR
Job Description & How to Apply Below
Position: Research Engineer/Research Scientist - Red Team (Alignment)
Location: Greater London

Research Engineer/Research Scientist - Red Team (Alignment)

London, UK

About the AI Security Institute

The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.

We’re here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.

Team Description

Risks from misaligned AI systems will grow in importance as AI systems become more capable, autonomous, and integrated into society. Understanding these risks and stress-testing mitigations is essential to ensuring advanced AI systems are developed and deployed safely and beneficially in the future.

The Alignment Red Team is a specialised sub-team within AISI's wider Red Team focused on detecting and evaluating misalignment in frontier AI systems. We perform novel research to develop techniques for finding misalignment, and pre- and post-deployment evaluations of frontier AI systems to understand loss‑of‑control risks associated with models, such as deceptive alignment, research sabotage, and self‑exfiltration attempts. We share our findings with frontier AI companies, the UK and allied governments, to inform their respective deployments, research, and policy‑making.

We also work directly with safety teams at frontier labs, sharing our evaluation findings to help improve their model alignment training and monitoring methodology.

  • Researching methods to automatically search for misalignment in frontier models, including misalignment related to loss‑of‑control risks such as research sabotage and self‑exfiltration.
  • Building and running alignment evaluations relevant for loss‑of‑control risks that current benchmarks don’t capture, such as research and decision sabotage, power‑seeking behaviour and deception.
  • Running pre‑deployment evaluations to test the alignment of AI systems, and analysing and reporting results to frontier AI companies and UK and allied governments.
  • Contributing to public‑facing research publications (like our published alignment evaluation case study) and technical reports that advance the field's understanding of alignment risks.
  • Designing and building software and tooling, including open‑source software, for better alignment evaluations, improving efficiency, realism, and usability.

The work could also involve:

  • Conducting threat modelling, analysis, and conceptual thinking to understand crucial model behaviours that could lead to loss of control (e.g. AI research assistants at frontier labs), translating abstract risk concepts into concrete, testable hypotheses.
  • Coordinating and producing holistic assessments of loss‑of‑control risk from the deployment of AI systems, or analysis of such assessments by frontier AI companies.
  • Mentoring and advising external collaborators and researchers to do work relevant to the team’s goals and alignment testing more broadly.
What We're Looking For

We're seeking Research Engineers and Research Scientists to join our Alignment Red Team. We are open to hires at junior, senior, staff and principal research scientist/engineer levels.

  • Ability to work autonomously on complex research projects involving substantial engineering. Have completed at least one substantial research project in AI safety, security or alignment involving substantial engineering, experiment design and analysis on frontier LLMs.
  • Strong software engineering and ML experience writing complex projects involving language models and ML, beyond just research code. 1+ years professional experience programming in Python for ML or SWE work.
  • Ability and experience writing clean, documented research code for machine learning experiments, including experience with ML frameworks like PyTorch or evaluation frameworks like Inspect. At least one substantial research or engineering project completed.
  • Proven ability in…
Note that applications are not being accepted from your jurisdiction for this job currently via this jobsite. Candidate preferences are the decision of the Employer or Recruiting Agent, and are controlled by them alone.
To Search, View & Apply for jobs on this site that accept applications from your location or country, tap here to make a Search:
 
 
 
Search for further Jobs Here:
(Try combinations for better Results! Or enter less keywords for broader Results)
Location
Increase/decrease your Search Radius (miles)

Job Posting Language
Employment Category
Education (minimum level)
Filters
Education Level
Experience Level (years)
Posted in last:
Salary