Security Researcher Team Lead
Listed on 2026-02-12
IT/Tech
AI Engineer, Data Scientist, Data Science Manager
Lasso is on a mission to secure the use of LLMs in the real world, protecting data, privacy, and businesses from AI threats. From the first line of code to protecting real-world data, every decision matters. If you're ready to shape the future of AI security, we would love to hear from you!
In this role, you will lead, manage, and mentor a team of security researchers while also designing and conducting technical research on LLM Security. You will collaborate with internal engineering, product, and ML teams to integrate innovative security capabilities into our platform. Furthermore, you will be responsible for setting the team’s strategic research agenda, managing project execution, and researching emerging cybersecurity threats and trends, influencing the company’s strategic direction with your expertise and insights in a dynamic environment.
Responsibilities:
- Lead, manage, and mentor a team of high-performing security researchers in the field of LLMs.
- Define and set the long-term research directions, strategies, and roadmaps to make our AI systems safer, more aligned, and more robust.
- Manage the execution of the research pipeline, ensuring timely delivery and high-quality results.
- Conduct in-depth research on AI-specific security threats, including adversarial attacks, model tampering, and data privacy issues.
- Collaborate with cross-functional teams (Engineering, Product, ML) to integrate AI security measures and innovative research outcomes into existing and new products.
- Work with the research team and ML experts to monitor the latest developments and industry best practices in red-teaming, and advise on long-term strategy.
- Develop research tools and frameworks to perform automatic analysis of LLMs and security products.
- Spearhead Lasso's AI Security research, initiating and leading projects, writing and publishing materials. A key part of this role involves presenting at major conferences and events.
Requirements:
- 5+ years of experience in security research (e.g., AI/Machine Learning, AppSec, Cloud Security, Supply Chain, Red-Teaming) - a must.
- 2+ years of experience leading, managing, or mentoring a team of security researchers or engineers - a must.
- Strong coding skills in Python, with the ability to develop end-to-end POCs for new security capabilities.
- Deep knowledge of security mechanisms, products, and detection techniques.
- Excellent written and verbal communication skills (Hebrew and English).
- Passion for research, vulnerabilities, and AI.
- Team player, proactive, responsible, and well‑organized.
Nice to have:
- Demonstrated experience in AI security research, specifically with LLM (Large Language Model) adversarial attacks or jailbreaking methods.
- Established track record of publishing papers or presenting at esteemed conferences and forums.
- Background in collaborating with Machine Learning and Data Science teams.
- Proficiency with SQL/NoSQL databases.
Let's explore how we can lead to incredible things together. Submit your application today and let's make magic happen!