Remote AI Red-Teamer (Advanced Adversarial AI Testing; English & Arabic) - AI Trainer
Beavercreek, Greene County, Ohio, USA
Listed on 2026-02-28
IT/Tech
AI Engineer, Cybersecurity
Location
Remote; geography restricted to USA, Egypt, Saudi Arabia, UAE.
Employment Type
Full-time or part-time contract work.
Language Requirements
Native-level fluency in English & Arabic.
Role Overview
At Mercor, we believe the safest AI is the one that has already been attacked - by us. We are assembling a red team for this project: human data experts who probe AI models with adversarial inputs, surface vulnerabilities, and generate the red-team data that makes AI safer for our customers. This project involves reviewing AI outputs that touch on sensitive topics such as bias, misinformation, or harmful behaviors.
All work is text-based, and participation in higher-sensitivity projects is optional and supported by clear guidelines and wellness resources. Topics will be clearly communicated before you are exposed to any content.
Responsibilities
- Red-team conversational AI models and agents: jailbreaks, prompt injections, misuse cases, bias exploitation, multi-turn manipulation.
- Generate high-quality human data: annotate failures, classify vulnerabilities, and flag systemic risks.
- Apply structure: follow taxonomies, benchmarks, and playbooks to keep testing consistent.
- Document reproducibly: produce reports, datasets, and attack cases customers can act on.
What We Look For
- Prior red-teaming experience (AI adversarial work, cybersecurity, socio-technical probing).
- Curiosity and adversarial mindset: instinctively push systems to breaking points.
- Structured approach: use frameworks or benchmarks, not just random hacks.
- Communicative: explain risks clearly to technical and non-technical stakeholders.
- Adaptable: thrive on moving across projects and customers.
Relevant Backgrounds
- Adversarial ML: jailbreak datasets, prompt injection, RLHF/DPO attacks, model extraction.
- Cybersecurity: penetration testing, exploit development, reverse engineering.
- Socio-technical risk: harassment/disinformation probing, abuse analysis, conversational AI testing.
- Creative probing: psychology, acting, writing for unconventional adversarial thinking.
Impact
- Uncover vulnerabilities that automated tests miss.
- Deliver reproducible artifacts that strengthen customer AI systems.
- Expand evaluation coverage: more scenarios tested, fewer surprises in production.
- Build Mercor customers' trust in AI safety through adversarial probing.
Build experience in human-data-driven AI red teaming at the frontier of safety, and play a direct role in making AI systems more robust, safe, and trustworthy.
Compensation
Competitive rates, aligned with level of expertise, sensitivity of the material, and scope of work.