Research Scientist/Engineer - Training Infrastructure
Location: Palo Alto, Santa Clara County, California, 94306, USA
Listing for: Luma AI, Inc.
Position type: Apprenticeship/Internship
Listed on: 2026-02-17
Job specializations:
- IT/Tech: AI Engineer, Machine Learning/ML Engineer, Systems Engineer, Cloud Computing
Job Description
About Luma AI
Luma's mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable and useful systems, the next step function change will come from vision. So, we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.
About the Role
The Training Infrastructure team at Luma is responsible for building and maintaining the distributed systems that enable training of our large-scale multimodal models across thousands of GPUs. This team ensures our researchers can focus on innovation while having access to reliable, efficient, and scalable training infrastructure that pushes the boundaries of what's possible in AI model development. We are looking for engineers with significant experience solving hard problems in PyTorch, CUDA and distributed systems.
You will work alongside the rest of the research team to build and train cutting-edge foundation models on thousands of GPUs, using infrastructure designed to scale from the ground up.
Responsibilities
* Design, implement, and optimize efficient distributed systems for training models across thousands of GPUs
* Research and implement advanced parallelization techniques (FSDP, Tensor Parallel, Pipeline Parallel, Expert Parallel); a minimal FSDP sketch follows this list
* Build monitoring, visualization, and debugging tools for large-scale training runs
* Optimize training stability, convergence, and resource utilization across massive clusters
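For context on the kind of parallelization work this role touches, below is a minimal sketch of sharded data parallelism using PyTorch's FullyShardedDataParallel (FSDP). It is illustrative only, not Luma's training stack; the toy model, dimensions, and launch setup (one process per GPU via torchrun with an NCCL backend) are assumptions.

```python
# Minimal FSDP sketch (hypothetical example, not Luma's codebase).
# Launch with: torchrun --nproc_per_node=<num_gpus> fsdp_sketch.py
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    dist.init_process_group(backend="nccl")      # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Stand-in for a real foundation model.
    model = torch.nn.Sequential(
        torch.nn.Linear(4096, 4096),
        torch.nn.GELU(),
        torch.nn.Linear(4096, 4096),
    ).cuda()

    # FSDP shards parameters, gradients, and optimizer state across ranks,
    # gathering full parameters only around each wrapped module's forward/backward.
    model = FSDP(model)
    optim = torch.optim.AdamW(model.parameters(), lr=1e-4)

    x = torch.randn(8, 4096, device="cuda")
    loss = model(x).pow(2).mean()
    loss.backward()
    optim.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```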
Experience
* Extensive experience with distributed PyTorch training and parallelism strategies for foundation models
* Deep understanding of GPU clusters, networking, and storage systems
* Familiarity with communication libraries (NCCL, MPI) and distributed systems optimization; see the all-reduce sketch after this list
* (Preferred) Strong Linux systems administration and scripting capabilities
* (Preferred) Experience managing training runs across >100 GPUs
* (Preferred) Experience with containerization, orchestration, and cloud infrastructure
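As a small illustration of the communication primitives mentioned above, here is a sketch of an NCCL all-reduce through torch.distributed, the collective that underlies data-parallel gradient synchronization. It assumes a torchrun launch with one process per GPU and is not tied to any specific cluster setup at Luma.

```python
# NCCL all-reduce sketch (hypothetical example).
# Launch with: torchrun --nproc_per_node=<num_gpus> allreduce_sketch.py
import os
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

t = torch.ones(4, device="cuda") * dist.get_rank()   # each rank contributes its rank id
dist.all_reduce(t, op=dist.ReduceOp.SUM)              # in-place sum across all ranks
print(f"rank {dist.get_rank()}: {t.tolist()}")        # every rank prints the same summed tensor

dist.destroy_process_group()
```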