
Staff Engineer, AI Evals

Job in Atlanta, Fulton County, Georgia, 30383, USA
Listing for: Sema4.ai, Inc.
Full Time position
Listed on 2026-02-16
Job specializations:
  • Software Development
    AI Engineer, Machine Learning / ML Engineer
Salary/Wage Range or Industry Benchmark: USD 80,000 – 100,000 per year
Job Description & How to Apply Below

The Opportunity

At Sema4.ai, we’re building an Enterprise AI Agent platform that fundamentally changes how knowledge work gets done by enabling people and AI agents to collaborate in durable, trustworthy ways.

As a Staff Engineer, AI Evals, you’ll design and own the evaluation systems that determine whether our agents are actually good: correct, reliable, efficient, and improving over time. You’ll build the measurement backbone that guides model choice, agent design, product decisions, and customer trust.

This is an early, high-impact role. You’ll be defining how we measure success for AI agents in production, where ambiguity is real and ground truth can be messy. We’re looking for an engineer who brings rigor, judgment, and strong opinions about what “good” looks like, and who knows how to operationalize it.

Who You Are

AI Systems & Evaluation Expert

You understand that AI systems are only as good as the way they’re measured. You’ve worked with LLMs and agentic systems in production and have seen how offline benchmarks, synthetic data, and human judgment can all fail in different ways. You know how to design evaluations that are meaningful, repeatable, and decision-useful, not just theoretically impressive.

You’re familiar with the sharp edges: non-determinism, prompt drift, regression risk, overfitting, data leakage, and the tension between fast iteration and statistical rigor.

In-Depth Technologist

You stay close to research and industry practice in evaluation, alignment, and reliability. You understand where automated metrics work, where they break down, and how to combine them with human review, golden datasets, and production signals. You bring creativity to building evaluation sets and scenarios, and in sourcing (or synthesizing) the data you need.

Builder With High Standards

You care deeply about correctness, clarity, and operational behavior. You can move fast, but you don’t confuse speed with rigor. You design eval systems that engineers trust, product relies on, and leadership uses to make decisions. You know when to build custom infrastructure and when to leverage existing tools without outsourcing critical thinking.

What You’ll Do

Build and Own the Evaluation Platform

Design, build, and operate Sema4.ai’s core evaluation infrastructure for LLMs and agents: offline benchmarks, regression tests, task-level metrics, and production feedback loops. These systems will directly inform product launches, model upgrades, and customer requirements.

Define “Good” for Agents in Production

Work closely with agent, product, and field engineering teams to translate fuzzy goals around correctness, reliability, and usefulness into concrete, measurable signals. You’ll help define success criteria for new capabilities and ensure we can detect regressions before customers do.

Tackle Ambiguous, High-Leverage Problems

Solve hard problems where the answer isn’t obvious:

  • How to evaluate long-running, multi-step agents
  • How to balance automated scoring with human judgment
  • How to measure improvement when tasks evolve
  • How to compare models under cost and latency constraints

Influence Technical and Product Direction

Use evaluation results to guide architectural decisions, model selection, and roadmap tradeoffs. You’ll participate in design reviews, set technical standards for eval rigor, mentor other engineers, and help interview senior technical candidates.

What You Bring
  • 7+ years of software engineering experience, including 2+ years building AI/ML systems in production
  • Deep experience with backend systems in Python, including data pipelines, observability, and reliability
  • Hands‑on experience evaluating LLM-based systems (agents, retrieval, tool use, workflows, etc.)
  • Strong intuition for metrics, experimentation, and failure analysis in non‑deterministic systems
  • Strong communication skills: whether you’re talking to colleagues, customers, or machines, you communicate clearly, concisely, and collaboratively
  • A high‑ownership mindset: you care deeply about the integrity of the systems you build and the decisions they inform