Security Engineer, AI Agent Security
Listed on 2026-02-21
Engineering - AI Engineer, Cybersecurity
IT/Tech - AI Engineer, Cybersecurity
Overview
Google's Secure AI Framework (SAIF) team is at the forefront of AI Agent Security. You'll pioneer defenses for systems like Gemini and Workspace AI, addressing novel threats unique to autonomous agents and Large Language Models (LLMs), such as advanced prompt injection and adversarial manipulation. In this role, your responsibilities include researching vulnerabilities, designing innovative security architectures, prototyping mitigations, and collaborating to implement solutions.
This role requires security research/engineering skills, an attacker mindset, and systems security proficiency. You will help define secure development practices for AI agents within Google and influence the broader industry in this evolving field.
Responsibilities:
- Conduct research to identify, analyze, and understand novel security threats, vulnerabilities, and attack vectors targeting AI agents and underlying LLMs (e.g., advanced prompt injection, data exfiltration, adversarial manipulation, attacks on reasoning/planning).
- Design, prototype, evaluate, and refine innovative defense mechanisms and mitigation strategies against identified threats, spanning model-based defenses, runtime controls, and detection techniques.
- Develop proof-of-concept exploits and testing methodologies to validate vulnerabilities and assess the effectiveness of proposed defenses.
- Stay current with AI security, adversarial ML, and related security fields through literature review, conference attendance, and community engagement.
- Collaborate with engineering and research teams to translate research findings into practical security solutions deployable across Google's agent ecosystem.
- Document research findings and contribute to internal knowledge sharing, security guidelines, and potentially external publications or presentations.
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 2 years of experience with security assessments, security design reviews, or threat modeling.
- 2 years of experience with security engineering, computer and network security, and security protocols.
- 2 years of coding experience in one or more general purpose languages.
Preferred:
- Master's or PhD in Computer Science or a related field with specialization in Security, AI/ML, or a related area.
- Experience in AI/ML security research, including adversarial ML, prompt injection, model extraction, or privacy-preserving ML.
- Track record of security research contributions (e.g., publications, CVEs, conference talks, open-source tools).
- Familiarity with architecture and failure modes of LLMs and AI agent systems.
Google is proud to be an equal opportunity and affirmative action employer. We are committed to building a workforce that is representative of the users we serve, creating a culture of belonging, and providing an equal employment opportunity regardless of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition, or any other basis protected by law.
See also Google's EEO Policy, Know your rights: workplace discrimination is illegal, Belonging at Google, and How we hire.
Google is a global company and, in order to facilitate efficient collaboration and communication globally, English proficiency is a requirement for all roles unless stated otherwise in the job posting.
To all recruitment agencies:
Google does not accept agency resumes. Please do not forward resumes to our jobs alias, Google employees, or any other organization location. Google is not responsible for any fees related to unsolicited resumes.