AI Agent Evaluation Analyst (Freelance)
Listed on 2026-02-12
IT/Tech
Data Analyst, Data Scientist, AI Engineer
Overview
At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI. The Mindrift platform, launched and powered by Toloka, connects domain experts with cutting‑edge AI projects from innovative tech clients. Our mission is to unlock the potential of GenAI by tapping into real‑world expertise from across the globe.
Who we’re looking for
Curious, intellectually proactive contributors who double‑check assumptions and play devil’s advocate. You should be comfortable with ambiguity and complexity, and open to a flexible, remote, async engagement. This role is ideal for analysts, researchers, consultants, and students (senior undergraduates or graduate students) seeking an intellectually interesting, part‑time, non‑permanent role.
About the project
We’re looking for QA specialists for autonomous AI agents on a new project focused on validating and improving complex task structures, policy logic, and agent evaluation frameworks. Throughout the project you’ll balance quality assurance, research, and logical problem‑solving. This opportunity is ideal for people who enjoy looking at systems holistically and thinking through scenarios, implications, and edge cases.
Responsibilities:
You do not need a coding background, but you must be curious, intellectually rigorous, and capable of evaluating the soundness and consistency of complex setups. Experience in consulting, case solving, or systems thinking is a plus.
- Review evaluation tasks and scenarios for logic, completeness, and realism
- Identify inconsistencies, missing assumptions, or unclear decision points
- Help define clear expected behaviors (gold standards) for AI agents
- Annotate cause‑effect relationships, reasoning paths, and plausible alternatives
- Think through complex systems and policies as a human would to ensure agents are tested properly
- Work closely with QA, writers, or developers to suggest refinements or edge case coverage
Qualifications:
- Excellent analytical thinking: reasoning about complex systems, scenarios, and logical implications
- Strong attention to detail: spotting contradictions, ambiguities, and vague requirements
- Familiarity with structured data formats: reading JSON/YAML
- Ability to assess scenarios holistically: identify missing, unrealistic, or potentially breaking elements
- Good communication and clear writing in English to document findings
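The JSON/YAML familiarity listed above might look like this in practice. A minimal sketch, assuming hypothetical scenario fields (`task_id`, `goal`, `steps`, `expected`) that are not from any specific project: load an evaluation scenario and flag steps whose expected behavior (gold standard) is missing.

```python
import json

# Hypothetical evaluation-scenario record; field names are illustrative only.
scenario = json.loads("""
{
  "task_id": "t-001",
  "goal": "refund a duplicate charge",
  "steps": [
    {"action": "verify_identity", "expected": "pass"},
    {"action": "check_policy", "expected": null}
  ]
}
""")

# Flag steps with no defined expected behavior -- the kind of gap a
# reviewer would surface as an unclear decision point.
missing = [s["action"] for s in scenario["steps"] if s["expected"] is None]
print(missing)  # prints ['check_policy']
```

The same check works for YAML scenario files by swapping `json.loads` for a YAML parser.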
We also value applicants who have:
- Experience with policy evaluation, logic puzzles, case studies, or structured scenario design
- Background in consulting, academia, olympiads (logic/math/informatics), or research
- Exposure to LLMs, prompt engineering, or AI‑generated content
- Familiarity with QA or test‑case thinking (edge cases, failure modes, “what could go wrong”)
- Some understanding of how scoring or evaluation works in agent testing (precision, coverage, etc.)
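The scoring concepts mentioned above (precision, coverage) can be sketched with simple set arithmetic. This is an illustrative example with made-up issue labels, not any project's actual scoring scheme: precision is the fraction of flagged issues that are real, and coverage (recall) is the fraction of real issues that were flagged.

```python
# Hypothetical reviewer output vs. ground truth for one evaluation task.
flagged = {"missing_assumption", "vague_policy", "ok_case"}
true_issues = {"missing_assumption", "vague_policy", "edge_gap"}

hits = flagged & true_issues
precision = len(hits) / len(flagged)      # correct flags / all flags
coverage = len(hits) / len(true_issues)   # correct flags / all real issues

print(round(precision, 2), round(coverage, 2))  # prints 0.67 0.67
```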
What we offer:
- Competitive pay of up to $80/hr, depending on skills, experience, and project needs
- Flexible, remote, freelance project that fits around your primary commitments
- Participation in an advanced AI project and valuable experience for your portfolio
- Influence how future AI models understand and communicate in your field of expertise
Location:
Mississippi, United States (Remote)
Compensation: $55,000.00–$75,000.00 annually
Type: Part‑time (freelance)
Duration: Project‑based, flexible schedule