Title: AI Content Reviewer
Industry & Sector: A technology company in AI-powered content moderation, conversational intelligence, and trust & safety, providing high-quality training data and moderation services that improve generative AI, chatbots, and customer experience platforms. We deliver scalable, policy-driven review workflows and annotation pipelines to ensure safe, compliant, and reliable AI behavior.
Location: Remote (India)
Role & Responsibilities
Review and classify user-generated and model-generated content against established moderation policies and safety guidelines to ensure compliance and reduce risk.
Apply nuanced judgment to edge cases, including toxicity, hate speech, harassment, sexual content, and misinformation, with a consistent, documented rationale.
Annotate and label content for training datasets using internal tools, following detailed instructions to improve model accuracy and reduce bias.
Perform quality checks and calibration tasks; provide feedback to annotation teams and escalate ambiguous cases to subject-matter experts.
Document policy gaps, propose refinements, and contribute to updating moderation playbooks and decision trees.
Maintain productivity and accuracy targets while adapting to evolving policy changes and new content formats (text, images, audio).
Skills & Qualifications
Must-Have
Content moderation
Policy enforcement
Toxicity classification
Hate speech detection
Misinformation detection
Data annotation
Preferred
Quality assurance
Annotation tools
Google Sheets
Benefits & Culture Highlights
Fully remote role with flexible hours to support work-life balance across India.
Opportunity to shape AI safety and content policy for production-grade generative systems.
Collaborative, fast-paced environment with ongoing training and cross-functional exposure to ML, product, and trust teams.
About the Employer:
Mindtel is hiring dedicated reviewers who care about safe and ethical AI. If you bring strong attention to detail, an ability to apply complex policies consistently, and a desire to improve AI outcomes, this role offers direct impact on model behavior and user safety.
How to apply:
Submit your resume and a brief note describing any moderation or annotation experience and why you're interested in AI content safety.
Skills:
content moderation, Google Sheets, training, quality assurance, calibration, annotation, speech, data