Research Engineer
Listed on 2025-12-05
Applied Compute builds Specific Intelligence for enterprises, unlocking the knowledge inside a company to train custom models and deploy an in-house agent workforce.
Today’s state-of-the-art AI isn’t one-size-fits-all—it’s a tailored system that continuously learns from a company’s processes, data, expertise, and goals. Just as companies compete today by having the best human workforce, the companies building for the future will compete by having the best agent workforce supporting their human bosses. We call this Specific Intelligence, and we’re already building it today.
We are a small, talent-dense team of engineers, researchers, and operators who have built some of the most influential AI systems in the world, including reinforcement learning infrastructure at OpenAI and data foundations at Scale AI, with additional experience from Together, Two Sigma, and Watershed.
We’re backed by $80M from Benchmark, Sequoia, Lux, Hanabi, Neo, Elad Gil, Victor Lazarte, Omri Casspi, and others. We work in-person in San Francisco.
The Role
As a founding Research Engineer, you'll train frontier-scale models and adapt them into specialized experts for enterprises. You will design and run experiments at scale, developing novel methods for agentic training.
You’ll work closely with researchers to experiment with and invent new algorithms, and you’ll collaborate with infrastructure engineers to post-train LLMs on thousands of GPUs. We believe that research velocity is tied to having world-class tooling; you’ll build tools and observability for yourself and others, enabling deeper investigation into how models specialize during training. If you get excited by challenging systems and ML problems at scale, this role is for you.
What You’ll Do
Post-train frontier-scale large language models on enterprise tasks and environments
Explore cutting-edge RL techniques, co-designing algorithms and systems
Partner with infrastructure engineers to scale training and inference efficiently across thousands of GPUs
Build high-performance internal tools for probing, debugging, and analyzing training runs
What We’re Looking For
Experience training or serving LLMs
Experience building RL environments and evals for language models
Proficiency in PyTorch, JAX, or similar ML frameworks, and experience with distributed training
Strong experimental design skills
Background in pre- or post-training
Nice to Have
Previous experience in high-performance computing environments or working with large-scale clusters
Contributions to open-source ML research or infrastructure
Demonstrated technical creativity through published research, OSS contributions, or side projects
Location: This role is based in San Francisco, California.
Benefits
Applied Compute offers generous health benefits, unlimited PTO, paid parental leave, lunches and dinners at the office, and relocation support as needed. We work in-person at a beautiful office in San Francisco’s Design District.
Visa sponsorship: We sponsor visas. While we can’t guarantee success for every candidate or role, if you’re the right fit, we’re committed to working through the visa process with you.
We encourage you to apply even if you do not believe you meet every single qualification. As set forth in Applied Compute’s Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.