
Agentic AI​/ML Engineer - Multimodal

Job in Irvine, Orange County, California, 92713, USA
Listing for: Medium
Full Time position
Listed on 2025-10-30
Job specializations:
  • IT/Tech: AI Engineer, Machine Learning/ML Engineer
Salary/Wage Range or Industry Benchmark: $70,000 – $200,000 USD per year
Job Description & How to Apply Below
Position: Agentic AI/ML Engineer - Multimodal

Who are We?

Field AI is transforming how robots interact with the real world. We build risk‑aware, reliable, and field‑ready AI systems that address the most complex challenges in robotics, unlocking the full potential of embodied intelligence. We go beyond typical data‑driven approaches and pure transformer‑based architectures, charting a new course: our solutions are already deployed globally, delivering real‑world results and rapidly improving our models through real‑field applications.

Learn more at

About the Job

Our Field Foundation Model (FFM) powers a global fleet of autonomous robots that capture massive streams of multimodal data across diverse, dynamic environments every day. On the Insight Team, our mission is to turn this raw multimodal data into actionable insights that help our customers and engineers deliver value. The Field‑insight Foundation Model (FiFM) sits at the core of that transformation.

As an AI/ML Engineer on the FiFM team, you will drive research and model development for one of Field AI’s most ambitious initiatives. Your work will span computer vision, vision‑language models (VLMs), multimodal scene understanding, and long‑memory video analysis and search, with a strong emphasis on agentic AI (tool use, memory, multimodal retrieval‑augmented generation). This is a full‑cycle ML role: you’ll curate datasets, fine‑tune and evaluate models, optimize inference, and deploy them into production.

It’s a blend of applied research and engineering, requiring creativity, rapid experimentation, and rigorous problem‑solving. While FiFM is your primary focus, you’ll also contribute to broader perception and insight‑generation initiatives across Field AI.

What You’ll Get To Do:
  • Train and fine‑tune million‑to‑billion‑parameter multimodal models, with a focus on computer vision, video understanding, and vision‑language integration.
  • Track state‑of‑the‑art research, adapt novel algorithms, and integrate them into FiFM.
  • Curate datasets and develop tools to improve model interpretability.
  • Build scalable evaluation pipelines for vision and multimodal models.
  • Contribute to model observability, drift detection, and error classification.
  • Fine‑tune and optimize open‑source VLMs and multimodal embedding models for efficiency and robustness.
  • Build and optimize multi‑vector RAG pipelines with vector DBs and knowledge graphs.
  • Create embedding‑based memory and retrieval chains with token‑efficient chunking strategies (a minimal retrieval sketch follows this list).
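
The bullets above name concrete retrieval techniques. Purely as an illustration (not Field AI's actual FiFM pipeline), here is a minimal sketch of an embedding‑based retrieval step over chunked text, assuming the sentence‑transformers and faiss libraries; the model name, chunk size, and corpus are placeholders.

```python
# Minimal sketch: chunk documents, embed them, and retrieve with FAISS.
# Model name, chunk size, and corpus are illustrative assumptions, not the FiFM stack.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

def chunk(text: str, max_words: int = 128) -> list[str]:
    """Naive word-window chunking; a real pipeline would be token-aware."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

docs = ["...robot inspection log...", "...site survey notes..."]  # placeholder corpus
chunks = [c for d in docs for c in chunk(d)]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small open embedding model
embeddings = encoder.encode(chunks, normalize_embeddings=True)

index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product == cosine on normalized vectors
index.add(np.asarray(embeddings, dtype="float32"))

query_vec = encoder.encode(["Which areas showed corrosion?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), k=3)
retrieved = [chunks[i] for i in ids[0]]  # context handed to a downstream VLM/LLM
```

A production multi‑vector pipeline would layer per‑chunk metadata, multiple embeddings per document, reranking, and a knowledge‑graph lookup on top of this skeleton.
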
What You Have:
  • Master’s/Ph.D. in Computer Science, AI/ML, Robotics, or equivalent industry experience.
  • 2+ years of industry experience or relevant publications in CV/ML/AI.
  • Strong expertise in computer vision, video understanding, temporal modeling, and VLMs.
  • Proficiency in Python and PyTorch with production‑level coding skills.
  • Experience building pipelines for large‑scale video/image datasets.
  • Familiarity with AWS or other cloud platforms for ML training and deployment.
  • Understanding of MLOps best practices (CI/CD, experiment tracking).
  • Hands‑on experience fine‑tuning open‑source multimodal models using Hugging Face, DeepSpeed, vLLM, FSDP, and LoRA/QLoRA (a minimal LoRA sketch follows this list).
  • Knowledge of precision tradeoffs (FP16, bfloat16, quantization) and multi‑GPU optimization.
  • Ability to design scalable evaluation pipelines for vision/VLMs and agent performance.
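
As a hedged illustration of the fine‑tuning skills listed above (a sketch under assumed tooling, not Field AI's training code), attaching LoRA adapters to an open checkpoint in bfloat16 with Hugging Face Transformers and PEFT could look like this; the base model and hyperparameters are assumptions.

```python
# Minimal sketch: attach LoRA adapters to an open model loaded in bfloat16.
# Base model, target modules, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "mistralai/Mistral-7B-v0.1"  # placeholder open checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # bf16 keeps FP32-like dynamic range at half the memory
    device_map="auto",
)

lora_cfg = LoraConfig(
    r=16,                                 # adapter rank: capacity vs. memory tradeoff
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; varies per architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of weights are trainable
# From here, training proceeds with the usual Trainer or a custom PyTorch loop.
```

The rank r and the choice of target modules set the memory/quality tradeoff; QLoRA would additionally load the frozen base weights quantized.
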
The Extras That Set You Apart:
  • Experience with agentic/RAG pipelines and knowledge graphs (LangChain, LangGraph, LlamaIndex, OpenSearch, FAISS, Pinecone).
  • Familiarity with agent operations logging and evaluation frameworks.
  • Background in optimization: token cost reduction, chunking strategies, reranking, and retrieval latency tuning.
  • Experience deploying models with quantized (int4/int8) and distributed multi‑GPU inference (a quantized‑loading sketch follows this list).
  • Exposure to open‑vocabulary detection, zero/few‑shot learning, multimodal RAG.
  • Knowledge of temporal‑spatial modeling (event/scene graphs).
  • Experience deploying AI in edge or resource‑constrained environments.
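
To make the quantized‑inference point concrete (again a sketch under assumed libraries, not a description of Field AI's deployment), loading an open checkpoint in 4‑bit NF4 with Transformers and bitsandbytes might look like this; the model name and prompt are placeholders.

```python
# Minimal sketch: 4-bit (NF4) quantized loading for memory-constrained inference.
# Model name and settings are illustrative; device_map handles layer sharding.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # matmuls still run in bf16
)

model_name = "mistralai/Mistral-7B-v0.1"  # placeholder open checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_cfg,
    device_map="auto",  # shards layers across available GPUs
)

inputs = tokenizer("Summarize today's inspection findings:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

int8 loading follows the same pattern with load_in_8bit=True; heavier multi‑GPU serving would typically move to a dedicated engine such as vLLM.
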
Compensation and Benefits

Our salary range is generous ($70,000 – $200,000 annually), but we take into consideration an individual’s background and experience in determining…
