
Senior AI Security Assurance Engineer

Job in Augusta, Kennebec County, Maine, 04338, USA
Listing for: Zoom
Full Time position
Listed on 2026-02-21
Job specializations:
  • IT/Tech
    Cybersecurity, AI Engineer, Security Manager, Systems Engineer
Salary/Wage Range: 80,000 – 100,000 USD yearly
Job Description & How to Apply Below

What you can expect

We are seeking a Senior AI Security Assurance Engineer to lead the offensive verification of our AI systems and pipelines. This role will serve as the dedicated AI security lead within the Security Assurance organization, reporting directly to the Head of Security Assurance. While not a traditional covert red team position, the role requires similar deep adversarial thinking and the ability to evaluate AI systems with the mindset of a determined skeptic.

Your mission: verify and innovate. You’ll be responsible for independently evaluating, challenging, and validating the security, safety, and integrity of all AI initiatives across the company, including AI embedded in products, internal AI use cases, training pipelines, model lifecycle management, and supporting infrastructure. This is not a compliance role; it’s a hands‑on, experimental one.

At Zoom, Security Assurance encompasses Offensive Security (product, infrastructure, hardware, red team), PSIRT, Product Vulnerability Management, and Bug Bounty. This role will operate across all of these domains as the organization’s primary authority on AI‑related risk, capabilities, and implementation. You will be the AI expert in efforts to develop scalable, intelligent systems that automate and amplify Security Assurance. You’ll both break and build, challenging assumptions in our AI infrastructure, features, and tools while creating tools to continuously expose and mitigate critical risk.

About the Team

The Security Assurance team at Zoom is an adversarial, high‑leverage group focused on finding and reducing the company’s most critical security risks. We work from an attacker’s mindset and operate well beyond checklists, audits, and standard SDLC gates, targeting the vulnerabilities and systemic failures that escape existing controls. The team covers offensive security (vulnerability research, red teaming, hardware security), PSIRT, product vulnerability management, bug bounty, and emerging AI security.

We apply deep technical rigor and clear risk judgment to drive concrete product and platform changes. We value evidence over assumptions and curiosity over comfort. This team is for truth‑seekers who want their work to measurably reduce risk at global scale.

Responsibilities
  • Leading adversarial verification of AI systems:
    Design and execute deep, unconstrained assessments of AI models, pipelines, and agents, testing guardrails, safety layers, and data boundaries through offensive experimentation.
  • Uncovering gaps between promise and practice:
    Identify where AI security, safety, or privacy controls fail under pressure. Surface the mismatch between claims and reality.
  • Assessing the full AI lifecycle:
    Evaluate data, training, and deployment pipelines for risks like model poisoning, prompt injection, or fine‑tuning abuse.
  • Developing AI‑powered security discovery systems:
    Research, prototype, and operationalize machine learning–driven approaches to automatically detect, predict, and prioritize vulnerabilities and behavioral deviations in Zoom’s products and platform.
  • Automating and scaling offensive operations:
    Build AI‑based frameworks to scale red teaming, vulnerability discovery, and bug bounty triage. Use LLMs, anomaly detection, and pattern learning to enhance automation and coverage.
  • Adapting cutting‑edge research:
    Integrate the latest findings from offensive security research, autonomous agents, and AI‑driven vulnerability analysis into Zoom’s security assurance programs.
  • Shaping AI security methodologies:
    Build frameworks for continuous AI‑driven adversarial testing, automated validation, and system monitoring that scale across teams and products.
  • Translating findings into impact:
    Communicate verified risks and systemic weaknesses clearly to engineering and leadership, pairing technical insight with strategic direction.
  • Staying ahead of the curve:
    Track evolving AI architectures, attack vectors, and defenses, turning new research into offensive and defensive capability.
What we’re looking for
  • Have a deep understanding of generative AI systems (transformers, diffusion models, multi‑agent frameworks) and their security failure modes.
  • Have…
Position Requirements
10+ Years work experience