×
Register Here to Apply for Jobs or Post Jobs. X

Lead Security Engineer - AI​/ML

Job in Bowling Green, Warren County, Kentucky, 42103, USA
Listing for: J.P. Morgan
Full Time position
Listed on 2026-02-19
Job specializations:
  • IT/Tech
    AI Engineer, Cybersecurity, Systems Engineer
Salary/Wage Range or Industry Benchmark: 80000 - 100000 USD Yearly USD 80000.00 100000.00 YEAR
Job Description & How to Apply Below

Take on a crucial role whereyou'llbe a key part of a high-performing team delivering secure software solutions. Make a real impact as you help shape the future of software security at one of the world's largest and most influential companies.

As a Lead Security Engineer at JPMorgan Chase within the Cybersecurity & Technology Controls for AI/ML, you are an integral part of team that works to deliver software solutions that satisfy pre-defined functional and user requirements with the added dimension of preventing misuse, circumvention, and malicious behavior. As a core technical contributor, you are responsible for carrying out critical technology solutions with tamper-proof, audit defensible methods across multiple technical areas within various business functions.

Job Responsibilities
  • Develop and enhance security strategies, red teaming programs, and solution designs, while troubleshooting technical issues and creating scalable solutions.
  • Design secure, high-quality AI and software architectures, reviewing and challenging designs and code to ensure adversarial resilience.
  • Reduce AI and LLM security vulnerabilities by adhering to industry standards and emerging AI safety research, evolving policies, testing protocols, and controls.
  • Collaborate with stakeholders across product, data science, cyber, legal, and risk to understand AI use cases and recommend modifications during periods of heightened vulnerability or regulatory change.
  • Conduct discovery, threat modeling, and adversarial testing on generative AI, RAG pipelines, and ML systems to identify vulnerabilities such as prompt injection, jail breaking, and data poisoning.
  • Provide guidance on secure design, logging, monitoring, and compensating controls for AI applications and platforms.
  • Define and implement AI red teaming methodologies, playbooks, and success metrics, establishing mechanisms for continuous testing and safe rollout of new AI models and features.
  • Work with platform and cloud security teams to ensure secure infrastructure configuration and alignment with enterprise security architecture.
  • Engage with external researchers, vendors, and standards bodies to track emerging AI threats and bring best practices into the organization.
  • Foster a team culture of diversity, equity, inclusion, and respect.
  • Collaborate within a cross-functional team to develop relationships, influence senior stakeholders, and drive alignment on AI risk tolerance and mitigation priorities.
Required Qualifications , Capabilities, and Skills
  • Formal training or certification in Public Cloud environment concepts and advanced hands-on experience with cloud-native AI services (e.g., Bedrock).
  • Experience with threat modeling, discovery, vulnerability, and penetration testing (e.g., MITRE ATLAS, OWASP Top 10 for LLMs) and foundational cybersecurity concepts such as IAM, Authentication, OIDC, SAML.
  • Practical experience with Infrastructure as Code (IaC) solutions like Terraform and Cloud Formation.
  • Proficiency in Python scripting.
  • Strong understanding of AI/ML concepts and trends, with knowledge of AI red teaming foundational concepts to design and implement exercises for complex AI architectures.
  • Ability to conceptualize, design, validate, and communicate creative technical solutions to enterprise-level security problems, including building internal tools, dashboards, and automation for red teaming activities.
Preferred Qualifications , Capabilities, and Skills
  • Expertise in planning, designing, and implementing AI red teaming exercises and enterprise-level security solutions for generative AI, LLMs, and ML systems.
  • Experience with specialized AI security/red teaming tools and frameworks (e.g., PyRIT, Garak, custom LLM evaluation harnesses) and contributions to AI security or open-source security projects.
#J-18808-Ljbffr
To View & Apply for jobs on this site that accept applications from your location or country, tap the button below to make a Search.
(If this job is in fact in your jurisdiction, then you may be using a Proxy or VPN to access this site, and to progress further, you should change your connectivity to another mobile device or PC).
 
 
 
Search for further Jobs Here:
(Try combinations for better Results! Or enter less keywords for broader Results)
Location
Increase/decrease your Search Radius (miles)

Job Posting Language
Employment Category
Education (minimum level)
Filters
Education Level
Experience Level (years)
Posted in last:
Salary