More jobs:
Job Description & How to Apply Below
We are seeking a Senior/Principal AI Security Expert with exceptional expertise in securing artificial intelligence systems across the entire development lifecycle. The ideal candidate will combine deep knowledge of classical machine learning security with advanced understanding of Transformer-based model vulnerabilities and defenses. This position requires someone who can think like an attacker while building defensive systems, and who is passionate about creating secure, responsible AI systems.
This is not a purely academic or managerial role. You will design experiments, write code, review models, evaluate risks, and ship systems, while also guiding technical direction and elevating the team’s capabilities.
Key ResponsibilitiesAI Security Strategy & Architecture
- Design security architectures for AI systems including threat modeling, vulnerability assessment, and risk mitigation frameworks
- Lead security reviews and audits of AI models and systems throughout the development lifecycle
- Collaborate with product, engineering, and data science teams to integrate security into AI development processes
- Develop and maintain adversarial attack methodologies and tools for proactive security testing
- Lead red teaming exercises to identify and exploit vulnerabilities in AI models and systems
- Conduct adversarial research to discover novel attack vectors against classical ML and Transformer models
- Research and implement techniques to enhance the robustness of AI models against adversarial attacks
- Evaluate and implement defenses against common attack types (evasion, poisoning, extraction, membership inference)
- Design and implement guardrails for AI systems to prevent harmful outputs and unsafe behaviors
- Design context injection prevention mechanisms and input validation frameworks
- Research and implement defenses against prompt injection, context confusion, and semantic attacks
- Implement monitoring systems to detect anomalous model behavior and potential security incidents
- PhD in Computer Science, Cybersecurity, AI, or related field with demonstrated research in AI security, adversarial ML, or related areas (or Master’s degree with 10+ years of professional experience)
- Minimum 10+ years of professional experience in cybersecurity, machine learning, or AI security
- At least 8+ years in a senior or lead position with demonstrated technical leadership in security
- Proven track record of identifying and remediating critical security vulnerabilities in AI systems
- Published research or significant contributions to AI security field
- Deep expertise in adversarial machine learning and attack methodologies against classical ML models, advanced knowledge of Transformer architecture vulnerabilities and attack vectors
- Hands‑on experience with adversarial attack frameworks and tools (FGSM, PGD, C&W, etc.)
- Proficiency in Python and deep learning frameworks (PyTorch, Tensor Flow), strong understanding of cryptography, secure computation, and privacy‑preserving techniques, prompt injection, jailbreak, and context confusion attack vectors
- Proficiency with security testing tools and vulnerability assessment methodologies
Position Requirements
10+ Years
work experience
To View & Apply for jobs on this site that accept applications from your location or country, tap the button below to make a Search.
(If this job is in fact in your jurisdiction, then you may be using a Proxy or VPN to access this site, and to progress further, you should change your connectivity to another mobile device or PC).
(If this job is in fact in your jurisdiction, then you may be using a Proxy or VPN to access this site, and to progress further, you should change your connectivity to another mobile device or PC).
Search for further Jobs Here:
×