Research Scientist, Applied Machine Learning Security; Agent Systems
Listed on 2026-02-28
Staff Research Scientist, Applied Machine Learning Security (Agent Systems)
Cupertino, California, United States | Software and Services
At Apple, we believe privacy is a fundamental human right. Our Security Engineering & Architecture (SEAR) organization is at the forefront of protecting billions of users worldwide, building security into every product, service, and experience we create. The SEAR ML Security Engineering team combines cutting-edge machine learning with world-class security engineering to defend against evolving threats at unprecedented scale. We're responsible for developing intelligent security systems for Apple Intelligence that protect Apple's ecosystem while preserving the privacy our users expect and deserve.
We're seeking a staff-level ML Security Research Scientist who operates at the intersection of applied research and production impact. You'll lead original security research on agentic ML systems deployed at scale, driving secure agentic design directly into shipping products, identifying real vulnerabilities in tool-using models, and designing adversarial evaluations that reflect actual attacker behavior. You'll work at the boundary between research, platform engineering, and product security, translating findings into architectural decisions, launch requirements, and long-term hardening strategies that protect billions of users.
Your impact will be measured by risk reduction in production systems that ship.
This role focuses on applied security research for production ML systems, with an emphasis on agentic and tool-using models deployed at scale. You will lead research efforts that surface real security risks in shipped or near-shipped systems, and you will drive mitigations that integrate cleanly into Apple's ML platforms and products. You will operate at the boundary between research, platform engineering, and product security, conducting original research grounded in real system behavior and translating it into concrete design changes, launch requirements, and long-term hardening strategies.
Impact is measured by risk reduction in production, not theoretical results alone.
- Lead applied research on production agent systems: Conduct original security research on deployed agentic ML systems that interact with tools, APIs, memory, workflows, and sensitive data. Identify and characterize vulnerabilities such as indirect prompt injection, tool misuse, privilege escalation, goal hijacking, and cross-context data leakage, and develop defenses validated under production constraints.
- Design realistic adversarial evaluations: Build and maintain adversarial testing frameworks that reflect real attacker incentives and system complexity, including multi-step, cross-tool, and persistence-based attacks that surface failure modes missed by standard evaluations.
- Drive defenses into shipping systems: Develop mitigations that are compatible with production requirements around latency, reliability, debuggability, and privacy. Influence architectural choices such as capability scoping, isolation boundaries, execution control, and runtime enforcement.
- Own threat models for agent deployments: Define trust boundaries and threat models for agentic ML across Apple platforms and services, and translate them into actionable security requirements and release criteria.
- Bridge research and engineering: Partner deeply with ML platform teams, product engineering, and product security to ensure research insights become design guidance, test infrastructure, and launch blockers where appropriate.
- Provide technical leadership: Set standards for applied ML security research, mentor other researchers, and influence how agent systems are reviewed, built, and released across the organization.
- Ph.D. or equivalent experience in machine learning, security, systems, or a related field.
- Demonstrated experience in applied ML security, adversarial ML, or systems security with real-world impact.
- Strong experimental and engineering skills, with an emphasis on reproducibility and operational relevance.
- Experience researching or securing LLM-based or…