Evaluation & Insights Engineer
Listed on 2025-12-18
IT/Tech
Data Scientist, Data Analyst
Cupertino, California, United States | Software and Services
Imagine what you could do here. At Apple, great new ideas have a way of becoming extraordinary products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish!
Are you passionate about music, movies, and the world of Artificial Intelligence and Machine Learning? So are we! Join our Human-Centered AI team for Apple Products. In this role, you'll represent the user perspective on new features, review and analyze data, and evaluate AI models powering everything from search and recommendations to other innovative features. Collaborate with Data Scientists, Researchers, and Engineers to drive improvements across our platforms.
We are looking for an Evaluation & Insights Engineer for the Human-Centered AI team to help evaluate and improve AI systems by combining data science, model behavior analysis, and qualitative insights. In this role, you will analyze AI outputs, develop evaluation frameworks, design qualitative evaluations, and translate findings into actionable improvements for product and engineering teams. This role blends deep technical expertise with strong analytical judgment to assess, interpret, and improve the behavior of advanced AI models.
You will work cross-functionally with Engineering, Project Management, Product, and Research teams to ensure that AI experiences are reliable, safe, and aligned with human expectations.
- AI Evaluation & Data Analysis
- Lead complex evaluations of model behavior, identifying issues in reasoning, factuality, interaction quality, safety, fairness, and user alignment.
- Build evaluation datasets, annotation schemas, and guidelines for qualitative assessments.
- Develop qualitative + semi-quantitative scoring rubrics for measuring human‑perceived quality (e.g., helpfulness, factuality, clarity, trustworthiness).
- Run structured evaluations of model iterations and summarize strengths/weaknesses based on qualitative evidence.
- Data Science & Modeling
- Collaborate with model developers to refine model behavior using findings from qualitative evaluations of model outputs.
- Use statistical and computational methods to identify patterns in qualitative data (e.g., assigning loss patterns, error taxonomies, thematic categorization).
- Build dashboards, scripts, or workflows that codify evaluation metrics and automate portions of qualitative assessments.
- Integrate qualitative evaluations with quantitative metrics (e.g., Precision@k, MRR, perplexity, accuracy, performance KPIs); see the illustrative sketch after this list.
- Create scalable pipelines for reviewing, annotating, and analyzing model outputs.
- Define evaluation frameworks that capture nuanced human factors (e.g., uncertainty, trust calibration, conversational quality, interpretability).
- Develop processes to track feature quality and model performance over time and flag regressions.
- Cross‑Functional Collaboration
- Communicate evaluation results clearly to data scientists, engineers, and PMs.
- Translate qualitative findings into clear loss patterns and actionable insights.
- Work with product teams to ensure AI behaviors align with real‑world user expectations.
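For illustration only: the sketch below shows one way qualitative relevance judgments could feed the ranking metrics named above (Precision@k, MRR). The column names (query_id, rank, relevant) and the toy data are assumptions made for this example and are not drawn from the posting or from any Apple system.

```python
"""Minimal sketch: blend qualitative relevance labels with Precision@k and MRR.
Schema and data are illustrative assumptions only."""

import pandas as pd


def precision_at_k(ranked_labels: list[int], k: int) -> float:
    """Fraction of the top-k results judged relevant (label == 1)."""
    top_k = ranked_labels[:k]
    return sum(top_k) / k if k else 0.0


def reciprocal_rank(ranked_labels: list[int]) -> float:
    """1 / rank of the first relevant result, or 0.0 if none is relevant."""
    for rank, label in enumerate(ranked_labels, start=1):
        if label == 1:
            return 1.0 / rank
    return 0.0


# Toy evaluation set: one row per (query, result) pair, with a binary
# relevance label produced by a qualitative annotation pass.
df = pd.DataFrame(
    {
        "query_id": ["q1", "q1", "q1", "q2", "q2", "q2"],
        "rank":     [1, 2, 3, 1, 2, 3],
        "relevant": [0, 1, 1, 1, 0, 0],
    }
)

# Collect each query's labels in rank order, then score per query.
per_query = (
    df.sort_values(["query_id", "rank"])
      .groupby("query_id")["relevant"]
      .apply(list)
)

report = pd.DataFrame(
    {
        "precision@3": per_query.map(lambda labels: precision_at_k(labels, 3)),
        "mrr": per_query.map(reciprocal_rank),
    }
)

print(report)
print(report.mean())  # corpus-level Precision@3 and MRR
```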
- Education: Bachelor’s or Master’s degree in Data Science, Computer Science, Linguistics, Cognitive Science, HCI, Psychology, or a related field.
- Experience: 5+ years in data science, machine learning evaluation, MLOps, annotation quality, safety evaluation, or a similar applied role.
- Technical Skills:
- Proficiency in Python for data analysis (pandas, NumPy, Jupyter, etc.).
- Experience working with large datasets, annotation tools, or model‑evaluation pipelines.
- Ability to design taxonomies, categorization schemes, or structured rating frameworks.
- Analytical Strength: Ability to interpret unstructured data (text, transcripts, user sessions) and derive meaningful insights.
- Communication: Strong ability to stitch together qualitative and quantitative findings into actionable guidance.
- Experience working directly with LLMs, generative AI systems, or NLP models.
- Familiarity with evaluations specific to AI safety, hallucination detection, or model alignment.
- Experience…