
3D Sparse Diffusion Specialist (San Francisco)

Job in Mission, Johnson County, Kansas, 66201, USA
Listing for: World Labs
Full Time position
Listed on 2026-02-16
Job specializations:
  • Research/Development
    Data Scientist, Artificial Intelligence
Salary/Wage Range: 80,000 to 100,000 USD per year
Job Description & How to Apply Below
Position: 3D Sparse Diffusion Specialist (San Francisco)

At World Labs, we’re building Large World Models—AI systems that understand, reason about, and interact with the physical world. Our work sits at the frontier of spatial intelligence, robotics, and multimodal AI, with the goal of enabling machines to perceive and operate in complex real‑world environments.

We’re assembling a global team of researchers, engineers, and builders to push beyond today’s limitations in artificial intelligence. If you’re excited to work on foundational technology that will redefine how machines understand the world—and how people interact with AI—this role is for you.

About World Labs:

World Labs is an AI research and development company focused on creating spatially intelligent systems that can model, reason, and act in the real world. We believe the next generation of AI will not live only in text or pixels, but in three‑dimensional, dynamic environments—and we are building the core models to make that possible.

Our team brings together expertise across machine learning, robotics, computer vision, simulation, and systems engineering. We operate with the urgency of a startup and the ambition of a research lab, tackling long‑horizon problems that demand creativity, rigor, and resilience.

Everything we do is in service of building the most capable world models possible—and using them to empower people, industries, and society.

Role Overview

We’re looking for a Research Scientist focused on 3D & Sparse Diffusion to develop next‑generation generative models that operate natively in 3D or over sparse, structured representations. This role is for someone excited about pushing the frontier of diffusion‑based generative modeling beyond dense grids—into point clouds, implicit representations, multi‑view observations, and hybrid 2D/3D formulations.

This is a research‑forward, hands‑on role at the intersection of generative modeling, 3D representations, and scalable learning systems. You’ll work closely with other research scientists and engineers to invent, evaluate, and deploy diffusion models that power high‑fidelity 3D generation, reconstruction, and editing in real‑world product settings.

What You Will Do:
  • Research and develop 3D‑native and sparse diffusion models for generating and refining geometry, appearance, and scene structure.
  • Design diffusion processes over sparse or structured domains (e.g., point clouds, implicit fields, multi‑view features, hybrid representations) with an emphasis on efficiency and fidelity.
  • Explore novel noise schedules, conditioning strategies, and sampling algorithms tailored to 3D and sparse data.
  • Build end‑to‑end training pipelines for large‑scale diffusion models, including data preparation, supervision strategies, and evaluation metrics.
  • Collaborate with 3D reconstruction and modeling teams to integrate diffusion‑based components into broader systems for generation, reconstruction, and editing.
  • Analyze model behavior and failure modes specific to sparse and 3D settings, and propose principled improvements to robustness and controllability.
  • Optimize training and inference performance, balancing sample quality, compute efficiency, and scalability.
  • Contribute to the team’s research output through publications, technical reports, and internal knowledge sharing.
  • Stay current with—and help shape—emerging research directions in generative modeling, diffusion, and 3D learning.
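For context on the noise‑schedule and sampling work listed above (an illustrative sketch only, not World Labs code): diffusion models, whether over dense grids or point clouds, share the same standard DDPM forward process, in which clean data is progressively noised under a beta schedule. A minimal NumPy example applied to point‑cloud coordinates — all function names and parameter values here are assumptions for illustration:

```python
import numpy as np

def linear_beta_schedule(timesteps, beta_start=1e-4, beta_end=0.02):
    """Standard linear beta schedule from DDPM (values are common defaults)."""
    return np.linspace(beta_start, beta_end, timesteps)

def q_sample(x0, t, alphas_cumprod, noise):
    """Sample x_t ~ q(x_t | x_0) for a point cloud x0 of shape (N, 3).

    Uses the closed form: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps.
    """
    a_bar = alphas_cumprod[t]
    return np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * noise

timesteps = 1000
betas = linear_beta_schedule(timesteps)
alphas_cumprod = np.cumprod(1.0 - betas)   # cumulative product of alphas

rng = np.random.default_rng(0)
x0 = rng.normal(size=(1024, 3))            # toy point cloud, N=1024 points
noise = rng.normal(size=x0.shape)
xt = q_sample(x0, t=500, alphas_cumprod=alphas_cumprod, noise=noise)
print(xt.shape)  # (1024, 3)
```

A 3D‑native model would then be trained to predict `noise` from `xt` and `t`; designing schedules, conditioning, and samplers that work well on sparse representations like this is the core of the role.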
Key Qualifications:

  • 5+ years of experience in generative modeling, 3D learning, or related areas within machine learning research.
  • Hands‑on experience designing or training diffusion models, with demonstrated work on 3D‑native, sparse, or structured representations.
  • Strong background in modern 3D representations (e.g., point‑based, implicit, volumetric, or hybrid) and their interaction with learning‑based models.
  • Proficiency in Python and deep learning frameworks (e.g., PyTorch), with experience building research‑grade training and evaluation code.
  • Solid understanding of probabilistic modeling, optimization, and large‑scale training dynamics.
  • Experience publishing at top‑tier venues or contributing to influential research or open‑source projects in generative modeling or 3D.
  • Ability to operate…