Research Scientist/Engineer – Multimodal Capabilities
Listed on 2025-12-05
IT/Tech
Artificial Intelligence
Location: Iowa
Luma's mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, the next step function change will come from vision. We are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.
Where You Come In
This is a high-impact opportunity to define the future of what our models can do. As a first-principles researcher, you will tackle the most ambitious questions at the heart of our mission: how can the fusion of vision, audio, and language unlock entirely new, magical behaviors in AI? You will not just be improving existing systems; you will be charting the course for the next generation of model capabilities, designing the core experiments that will shape the future of our technology and products.
What You'll Do
- Research and define the next frontier of multimodal capabilities, identifying key gaps in our current models and designing experiments to solve them.
- Design and execute novel experiments, datasets, and methodologies to systematically improve model performance across vision, audio, and language.
- Develop and pioneer new evaluation frameworks and benchmarking approaches to precisely measure novel multimodal behaviors and capabilities.
- Collaborate deeply with other research teams to translate your findings into our core training recipes and unlock new product experiences.
- Build and prototype compelling demonstrations that showcase the groundbreaking multimodal capabilities you have unlocked.
What You'll Bring
- You have a PhD or equivalent research experience in AI, machine learning, computer science, or a related field.
- You have strong programming skills in Python and deep, hands‑on experience with PyTorch.
- You have a proven track record of working with multimodal data pipelines and curating large-scale datasets for research.
- You possess a deep, fundamental understanding of at least one of the core modalities: computer vision, audio processing, or natural language processing.
- You thrive on tackling the most ambitious, open-ended research challenges in a fast‑paced, collaborative environment.
Nice to Have
- Direct expertise working with complex, interleaved multimodal data (video, audio, text).
- Hands‑on experience training or fine‑tuning Vision Language Models (VLMs), Audio Language Models, or large‑scale generative video models from scratch.
- A strong publication record in top‑tier AI conferences (e.g., NeurIPS, ICML, CVPR, ICLR).
- Experience leading ambitious, open-ended research projects from ideation to tangible results.
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Other
Industries: Software Development