
Student Researcher Seed Vision - Multimodal Video Generation PhD

Job in San Jose, Santa Clara County, California, 95111, USA
Listing for: ByteDance
Apprenticeship/Internship position
Listed on 2026-02-17
Job specializations:
  • Business
    Data Scientist, Artificial Intelligence
Job Description
Position: Student Researcher [Seed Vision - Multimodal Video Generation] - 2026 Start (PhD)
About the team
The Seed Vision Team focuses on foundational models for visual generation, developing multimodal generative models and carrying out leading research and application development to solve fundamental computer vision challenges in GenAI. The team's work includes researching and developing foundational models for visual generation (images and videos), ensuring high interactivity and controllability in visual generation, understanding patterns in videos, and exploring various visual-oriented tasks based on generative foundational models.

We are looking for talented individuals to join us for an internship in 2026. PhD internships at ByteDance provide students with the opportunity to actively contribute to our products and research, and to the organization's future plans and emerging technologies.

Our dynamic internship experience blends hands-on learning, enriching community-building and development events, and collaboration with industry experts. Applications will be reviewed on a rolling basis; we encourage you to apply early. Please state your availability (start date and end date) clearly in your resume.

Responsibilities:

- Conduct research on multimodal video generation, with a focus on improving semantic alignment between inputs and generated content (see the alignment-scoring sketch after this list).

- Integrate vision-language models (e.g., CLIP, pre/post-trained VLMs) into video generation architectures to enhance input understanding.

- Explore and implement joint training or fine-tuning approaches that couple VLMs with video generation backbones.

- Evaluate model performance on tasks requiring high-level reasoning or detailed semantic control over generation.

- Collaborate with researchers and engineers to iterate on prototypes within an existing infrastructure.
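
For context on the first responsibility, here is a minimal sketch of prompt-to-frame alignment scoring with an off-the-shelf CLIP model. This is an illustration only, not the team's actual pipeline; the Hugging Face transformers checkpoint and the helper name are assumptions made for this sketch.

    import torch
    from transformers import CLIPModel, CLIPProcessor

    # Illustrative checkpoint; any CLIP-style dual encoder would do.
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    @torch.no_grad()
    def prompt_frame_alignment(prompt, frames):
        """Cosine similarity between a prompt and each generated frame.

        frames: ordered list of PIL images sampled from a generated clip.
        Returns a 1-D tensor with one score per frame, in [-1, 1].
        """
        inputs = processor(text=[prompt], images=frames,
                           return_tensors="pt", padding=True)
        outputs = model(**inputs)
        text_emb = outputs.text_embeds / outputs.text_embeds.norm(dim=-1, keepdim=True)
        image_emb = outputs.image_embeds / outputs.image_embeds.norm(dim=-1, keepdim=True)
        return (image_emb @ text_emb.T).squeeze(-1)

Averaging the per-frame scores gives a simple clip-level alignment metric; scores near 1 indicate frames the CLIP model judges semantically close to the prompt.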

Minimum Qualifications:

- Currently pursuing a PhD in Computer Vision, Machine Learning, or a related field.

- Research experience in one or more of the following areas:

  Vision-language models (VLMs);
  Multimodal or joint model training;
  Video generation.

- Solid coding ability and a clean research implementation style; you will be expected to work with a production-grade codebase (e.g., PyTorch).

- Demonstrated research ability, with first-author publications in top-tier ML/CV/AI conferences such as CVPR, ICCV, ECCV, and ICLR.

Preferred Qualifications:

- Experience in training or fine-tuning autoregressive or diffusion-based video generation models.

- Background in multimodal instruction-following, alignment, or conditioning for generation tasks.

- Understanding of evaluation techniques for assessing semantic consistency in generated video (one common check is sketched below).
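
One common instance of such an evaluation, sketched under assumptions (a CLIP image encoder from Hugging Face transformers; the 0.85 threshold is purely illustrative, not a team standard): embed consecutive generated frames and flag large drops in adjacent-frame cosine similarity as possible semantic drift.

    import torch
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    @torch.no_grad()
    def temporal_consistency(frames, drift_threshold=0.85):
        """Adjacent-frame similarity for one generated clip.

        frames: ordered list of PIL images. Returns per-transition cosine
        similarities and the indices of transitions below the threshold.
        """
        inputs = processor(images=frames, return_tensors="pt")
        emb = model.get_image_features(**inputs)
        emb = emb / emb.norm(dim=-1, keepdim=True)
        sims = (emb[:-1] * emb[1:]).sum(dim=-1)  # cosine of adjacent pairs
        drifts = (sims < drift_threshold).nonzero().flatten().tolist()
        return sims, drifts

Note this only catches frame-to-frame drift; clip-level checks against the conditioning prompt (as in the earlier sketch) complement it.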