Research Scientist, Computer Vision; Learning From Videos
Listed on 2025-12-06
Research/Development
Robotics, Artificial Intelligence
At Toyota Research Institute (TRI), we’re on a mission to improve the quality of human life. We’re developing new tools and capabilities to amplify the human experience. To lead this transformative shift in mobility, we’ve built a world-class team in Automated Driving, Energy & Materials, Human-Centered AI, Human Interactive Driving, Large Behavior Models, and Robotics.
The Mission
Make general-purpose robots a reality.
The Challenge
We envision a future where robots assist with household chores and cooking, aid the elderly in maintaining their independence, and enable people to spend more time on the activities they enjoy most. To achieve this, robots must be able to operate reliably in complex, unstructured environments. Our mission is to answer the question “What will it take to create truly general-purpose robots that can accomplish a wide variety of tasks in settings like human homes with minimal human supervision?”
We believe that the answer lies in cultivating large-scale datasets of physical interaction from a variety of sources and building on the latest advances in machine learning to learn general-purpose robot behaviors from this data.
The Learning From Videos (LFV) team in the Robotics division focuses on the development of foundation models capable of leveraging large-scale multi-modal data from multiple domains (driving, robotics, indoors, outdoors, etc.) to improve the performance of downstream tasks. This paradigm targets training scalability, since data from multiple modalities can be jointly leveraged to learn useful data-driven priors (3D geometry, physics, dynamics, etc.) for world understanding.
Our topics of interest include, but are not limited to, Video Generation, World Models, 4D Reconstruction, Multi-Modal Models, Multi-View Geometry, Data Augmentation, and Video-Language-Action models, with a primary focus on foundation models for embodied applications. We aim to make progress on some of the hardest scientific challenges in spatio-temporal reasoning and how it can enable the deployment of autonomous agents in real-world unstructured environments.
Opportunity
Our Learning From Videos (LFV) team is looking for a Computer Vision Research Scientist with expertise in Video Generation, Spatio-temporal Representation Learning, World Models, Foundation Models, Multi-Modal Learning, Vision-as-Inverse-Graphics (including Differentiable Rendering), or related fields, to improve dynamic scene understanding for robots. We are working on some of the hardest scientific challenges around the safe and effective usage of large robotic fleets, simulation, and prior knowledge (geometry, physics, domain knowledge, behavioral science), not only for automation but also for human augmentation.
As a Research Scientist, you will work with a team proposing, conducting, and transferring innovative research. You will use large amounts of sensory data (real and synthetic) to address open problems, train models at scale, publish at top academic venues, and test your ideas in the real world (including on our robots). You will also work closely with other teams at TRI to transfer and ship our most successful algorithms and models towards world-scale long-term autonomy and advanced assistance systems.
Responsibilities
- Conduct high-impact research that solves high-value problems and validate it on well-established benchmarks and systems.
- Push the boundaries of knowledge and the state of the art in ML areas, including simulation, perception, prediction, and planning for autonomous driving and robotics.
- Partner with a multidisciplinary team including other research scientists and engineers across the CV team, TRI, Toyota, and our university partners.
- Present results in verbal and written communications, internally, at top international venues, and via open-source contributions to the community.
- Work closely with robotics and machine learning researchers and engineers to understand theoretical and practical needs.
- Lead collaborations with our external research partners and mentor research interns.
- Follow best practices producing maintainable code, both for internal use as well as…