Rate per hour: INR 450
Minimum hour commitment: 5-6 hours per day
Key Responsibilities:
Evaluate AI model outputs in Malayalam
Identify and flag toxic, harmful or hate-based content, including subtle or context-dependent cases
Compare model responses and provide performance assessments based on predefined criteria
Classify the type and severity of toxicity, e.g. hate speech, harassment, abusive language
Provide brief explanations for flagged items where required
Ensure consistency, accuracy and adherence to project guidelines
Qualifications / Required Skills:
Proficient in English and Malayalam
Minimum 1 year of experience in content writing, content moderation, linguistic evaluation or a similar domain
Strong understanding of cultural nuances, slang and context-dependent expressions
Ability to identify toxicity in both native script and transliterated formats
Good analytical and evaluative skills
Prior experience in data annotation is a plus
Education
Minimum of a bachelor's degree in any field, e.g. Humanities, Linguistics, Mass Communication or related disciplines