AI Security Systems Architect
Listed on 2026-02-19
IT/Tech
AI Engineer, Cybersecurity
We are seeking an AI Security Systems Architect to design and develop state‑of‑the‑art systems for security testing and evaluation of artificial intelligence technologies. This role involves creating scalable infrastructure to support cutting‑edge adversarial testing methodologies, such as red team vs. blue team exercises and AI‑on‑AI evaluation frameworks.
The ideal candidate will bring a strong foundation in systems architecture, working knowledge of cluster computing and scaling, and a passion for advancing the security of AI systems under real-world and simulated conditions. This position is critical for ensuring that AI systems remain resilient, robust, and secure against evolving threats. This person will play a key role within ORNL's Center for AI Security Research (CAISER), working to advance the state of the art in automated, agentic workflows for AI security research, testing, and evaluation.
Key Responsibilities
Design and Development for Security Testing
- Architect and implement scalable systems tailored specifically for security testing and evaluation of AI systems.
- Develop frameworks to support red/blue team exercises in simulated environments, enabling manual and automated adversarial testing at scale.
- Build and integrate AI‑on‑AI testing infrastructures, where AI models can actively challenge each other in adversarial contexts to detect vulnerabilities or weaknesses.
Scalability and Cluster Computing
- Design distributed systems that support high‑throughput simulations and stress‑testing of AI systems under adversarial conditions.
- Implement cluster computing solutions to efficiently scale testing environments supporting large datasets and high‑performance AI workloads.
- Optimize resource allocation for simultaneous testing tasks and real‑time tracking of security metrics.
Adversarial and Threat Modeling Infrastructure
- Develop systems to automate the generation and execution of diverse adversarial testing scenarios, including techniques for perturbation, poisoning, and evasion attacks.
- Design platforms for threat modeling in AI systems, enabling comprehensive vulnerability assessments tailored to diverse use cases, from cloud‑hosted models to edge deployments.
- Enable rapid prototyping and iteration for adversarial defenses integrated into the architectural design.
Collaboration and Security Validation
- Work closely with security specialists, AI researchers, and DevSecOps teams to evaluate and validate the security of AI systems in alignment with organizational security standards.
- Partner with stakeholders to design customized testing environments that simulate real‑world attack and defense scenarios in production‑like conditions.
Leadership and Innovation
- Lead cross‑functional initiatives focused on advancing the security testing capabilities for next‑generation AI systems.
- Stay informed of emerging adversarial AI threats, testing methodologies, and scaling innovations to foster continuous improvement in security testing architectures.
- Mentor junior engineers and provide technical leadership in AI security evaluation mechanisms.
Required Qualifications
- Master's degree in Computer Science, Computer Engineering, Cybersecurity, or a related field with 7–10 years of experience, or a PhD in Computer Science, Computer Engineering, Cybersecurity, or a related field with 2–4 years of experience.
- Proven experience architecting and implementing complex distributed systems tailored for security testing or evaluation at scale.
- Demonstrated expertise in cluster computing and scaling for high‑performance environments, with hands‑on experience in frameworks such as Hadoop, Spark, or Kubernetes.
- Familiarity with techniques for AI‑on‑AI adversarial evaluation, including reinforcement learning‑based adversarial testing setups.
- Expertise in designing systems that support red/blue team operations alongside DevSecOps integrations.
- Knowledge of privacy‑preserving AI methods, secure federated learning, and cryptographic protections.
- Research or publication experience in adversarial testing, distributed systems, and AI system security.
- Experience in supporting continuous integration…