Generative AI Associate - Red Teaming Specialist
Atlanta, Fulton County, Georgia, 30383, USA
Listed on 2026-01-02
IT/Tech
AI Engineer, Machine Learning / ML Engineer
Job Title: Generative AI Associate - Red Teaming (English)
Location: Fully Remote within the U.S. (excluding Alaska, California, Colorado, Nevada and Puerto Rico)
Employment Type: Full-time, 1-month contract role (40 hours weekly for 4 weeks)
Who we are:
Innodata (NASDAQ: INOD) is a leading data engineering company. With more than 2,000 customers and operations in 13 cities around the world, we are an AI technology solutions provider of choice for 4 out of 5 of the world’s biggest technology companies, as well as leading companies across financial services, insurance, technology, law, and medicine.
By combining advanced machine learning and artificial intelligence (ML/AI) technologies, a global workforce of subject matter experts, and a high‑security infrastructure, we’re helping usher in the promise of AI. Innodata offers a powerful combination of both digital data solutions and easy‑to‑use, high‑quality platforms.
Our global workforce includes over 5,000 employees in the United States, Canada, United Kingdom, the Philippines, India, Sri Lanka, Israel and Germany. We’re poised for a period of explosive growth over the next few years.
About the Role
At Innodata, we’re working with the world’s largest technology companies on the next generation of generative AI and large language models (LLMs). We’re looking for smart, savvy, and curious Red Teaming Specialists to join our team.
This is the role that writers and hackers dream about: you’ll be challenging the next generation of LLMs to ensure their robustness and reliability. We’re testing generative AI’s ability to think critically and act safely, not just to generate content.
This isn’t just a job: it’s a once‑in‑a‑lifetime opportunity to work on the front lines of AI safety and security. There’s nothing more cutting‑edge than this. Joining us means becoming an integral member of a global team dedicated to identifying vulnerabilities and improving the resilience of AI systems. You’ll be creatively crafting scenarios and prompts to test the limits of AI behavior, uncovering potential weaknesses and ensuring robust safeguards.
You’ll be shaping the future of secure AI‑powered platforms, pushing the boundaries of what’s possible. Keen to learn more?
As a Red Teaming Specialist on our AI Large Language Models (LLMs) team, you will be joining a truly global team of subject matter experts across a wide variety of disciplines and will be entrusted with a range of responsibilities. We’re seeking self‑motivated, clever, and creative specialists who can handle the speed required to be on the front lines of AI security.
In return, we’ll be training you in cutting‑edge methods of identifying and addressing vulnerabilities in generative AI. Below are some responsibilities and tasks of our Red Teaming Specialist role:
- Complete extensive training on AI/ML, LLMs, Red Teaming, and jailbreaking, as well as specific project guidelines and requirements
- Craft clever and sneaky prompts to attempt to bypass the filters and guardrails on LLMs, targeting specific vulnerabilities defined by our clients
- Collaborate closely with language specialists, team leads, and QA leads to produce the best possible work
- Assist our data scientists in conducting automated model attacks
- Adapt to the dynamic needs of different projects and clients, navigating shifting guidelines and requirements
- Keep up with the evolving capabilities and vulnerabilities of LLMs and help your team’s methods evolve with them
- Hit productivity targets, including targets for the number of prompts written and the average handling time per prompt
Requirements:
- A Bachelor’s degree, or an Associate’s degree with a minimum of 1 year of relevant industry experience; advanced degrees (Master’s or PhD) are strongly preferred
- Professional- or expert-level proficiency (C1/C2) in English
- Strong understanding of grammar, syntax, and semantics – knowing the rules of "proper" English, as well as when to break them to better test AI responses
Please note:
As a Red Teaming Specialist, you’ll push the boundaries of large language models and seek to expose their vulnerabilities. In this work, you may be dealing with material that is toxic…