
Research Engineering Manager - Model Training

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: Pantera Capital
Full Time position
Listed on 2026-02-09
Job specializations:
  • Software Development
    AI Engineer, Machine Learning/ ML Engineer, Software Engineer, Data Scientist
Salary/Wage Range: $300,000 – $470,000 USD per year
Job Description & How to Apply Below

Location

San Francisco

Employment Type

Full time

Department

AI

Compensation
  • $300K – $470K
    • Offers Equity

U.S. Benefits

Full-time U.S. employees enjoy a comprehensive benefits program including equity, health, dental, vision, retirement, fitness, commuter and dependent care accounts, and more.

International Benefits

Full-time employees outside the U.S. enjoy a comprehensive benefits program tailored to their region of residence.

USD salary ranges apply only to U.S.-based positions. International salaries are set based on the local market. Final offer amounts are determined by multiple factors, including experience and expertise, and may vary from the amounts listed above.

Perplexity is seeking a Research Engineering Manager to lead a team of all-star AI researchers and engineers responsible for developing the models that drive our products. Our team has developed some of the most advanced models for agentic research, query understanding, and other domains that require accuracy and depth. As we expand our user base and portfolio of product surfaces, our in-house models are increasingly critical to providing a premium, high-taste experience for the world’s most sophisticated users.

You will dive into our rich datasets of conversational and agentic queries, leveraging cutting‑edge training techniques to scale AI model performance. Through hands-on technical and organizational leadership, you will empower your team to develop SotA models for the use cases that matter most to our business and our users.

Responsibilities
  • Lead a team of researchers and engineers focused on training SotA models for Perplexity-relevant use cases, leveraging the latest supervised and reinforcement learning techniques.

  • Drive research and engineering efforts to develop production models through advanced model training and alignment techniques, including RL, SFT, and other approaches.

  • Become deeply familiar with the team’s technical stack, leading from the front through hands-on technical contributions.

  • Own the data, training, and evaluation pipelines required to train and continuously improve our LLMs.

  • Design and iterate on model training and fine-tuning algorithms (e.g., preference‑based methods, reinforcement learning from human or AI feedback) through an approach that balances scientific rigor and iteration velocity.

  • Design evaluations and improve the production model training pipeline to reliably deliver models that lie on the Pareto frontier of speed and quality.

  • Work closely with engineering teams to integrate in-house models into our product and rapidly iterate based on real‑world usage.

  • Manage day‑to‑day execution, project planning, and prioritization for the model training team to hit ambitious quality and performance goals.

Qualifications
  • Proven experience with large-scale LLMs and Deep Learning systems.

  • Strong Python and PyTorch skills; versatility across languages and frameworks is a plus.

  • Experience leading or managing research or engineering teams working on large-scale AI model development, including driving complex projects from idea to production.

  • Self‑starter with a willingness to take ownership of tasks and navigate ambiguity in a fast‑moving environment.

  • Passion for tackling challenging problems in AI model quality, speed, safety, and reliability.

  • 10+ years of technical experience, with at least 2 of those years as a manager and at least 4 of those years working on large-scale AI model development.

Nice-to-have
  • PhD in Machine Learning or related areas.

  • Experience training very large Transformer-based models with techniques such as SFT, DPO, GRPO, RLHF‑style methods, or related preference‑based optimization approaches.

  • Prior experience designing evaluations and production training pipelines for large‑scale models in a high‑growth environment.

