
Member of Technical Staff - Training

Job in Palo Alto, Santa Clara County, California, 94306, USA
Listing for: RadixArk
Apprenticeship/Internship position
Listed on 2026-03-04
Job specializations:
  • IT/Tech
    Systems Engineer, AI Engineer
Job Description & How to Apply Below
About the Role

RadixArk is seeking a Member of Technical Staff - Training to build and scale the systems that train frontier AI models.

You will work on large-scale distributed training infrastructure for LLMs and generative models, pushing the limits of scale, efficiency, and reliability across thousands of GPUs. This role sits at the intersection of ML, systems, and performance engineering.

Your work will directly impact how next-generation AI models are trained and scaled.

This is a deeply technical, high-impact role for engineers who enjoy solving hard systems problems at extreme scale.
Requirements
  • 5+ years of experience in ML systems, distributed systems, or large-scale training infrastructure
  • Strong experience with large-scale distributed training (data, tensor, and pipeline parallelism)
  • Deep understanding of GPU/TPU architecture and performance trade-offs
  • Strong knowledge of PyTorch or JAX distributed training stacks
  • Experience debugging performance and stability issues in large training jobs
  • Solid distributed systems fundamentals (networking, consensus, fault tolerance)
  • Proficiency in Python plus a systems language (C++, Go, or Rust)
  • Experience operating production ML systems at scale
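To make the parallelism requirement concrete: of the three strategies named above, data parallelism is the simplest — each worker computes gradients on its own shard of the batch, and the gradients are averaged the way an all-reduce would combine them. The sketch below is a framework-free toy illustration under that assumption; none of the names in it come from any real training stack.

```python
# Toy illustration of data parallelism: each "worker" computes the
# gradient of a least-squares loss on its own shard of the batch,
# then the gradients are averaged (the role an all-reduce plays in
# real distributed data-parallel training). Purely illustrative.

def local_gradient(w, shard):
    """Gradient of 0.5*(w*x - y)^2 averaged over one worker's shard."""
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, shards, lr=0.01):
    """One synchronous SGD step across all workers."""
    grads = [local_gradient(w, s) for s in shards]  # computed in parallel
    avg = sum(grads) / len(grads)                   # "all-reduce" (mean)
    return w - lr * avg

# Fit y = 2x with the batch split round-robin across 4 workers.
data = [(x, 2.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(200):
    w = data_parallel_step(w, shards)
# w converges toward 2.0, matching single-worker SGD on the full batch.
```

Tensor and pipeline parallelism differ in what they shard (weights within a layer, and layers across stages, respectively), but the same synchronization concerns apply.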
Strong Plus
  • Experience training multi-billion-parameter models
  • Familiarity with DeepSpeed, Megatron-LM, FSDP, or custom training stacks
  • Experience with RDMA, InfiniBand, or high-speed interconnects
  • Background in HPC or performance-critical computing
  • Contributions to ML systems open-source projects
  • Experience with checkpointing, fault recovery, and elastic training
  • Experience optimizing training cost efficiency at scale
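The checkpointing and fault-recovery bullet above reduces to one invariant: persist enough state (step counter, parameters, optimizer state) that a crashed job resumes exactly where it left off rather than from zero. A minimal, hypothetical sketch of that invariant, using atomic writes so a crash mid-save never leaves a torn checkpoint:

```python
import json
import os
import tempfile

def save_checkpoint(path, step, params):
    """Atomically persist training state: write a temp file, then rename."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step, "params": params}, f)
    os.replace(tmp, path)  # atomic rename: no torn checkpoints

def load_checkpoint(path):
    """Return (step, params), or a fresh state if no checkpoint exists."""
    if not os.path.exists(path):
        return 0, [0.0]
    with open(path) as f:
        state = json.load(f)
    return state["step"], state["params"]

# Simulate a job that "crashes" at step 5 and is restarted.
ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")

step, params = load_checkpoint(ckpt)       # fresh start: step 0
while step < 5:
    params = [p + 1.0 for p in params]     # stand-in for one training step
    step += 1
    save_checkpoint(ckpt, step, params)

# Restart after the crash: resume from the last checkpoint, not from zero.
step, params = load_checkpoint(ckpt)
while step < 10:
    params = [p + 1.0 for p in params]
    step += 1
    save_checkpoint(ckpt, step, params)
```

Production systems layer sharded saves, async uploads, and elastic re-scheduling on top, but the resume-from-durable-state contract is the same.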
Responsibilities
  • Design and operate large-scale distributed training systems
  • Optimize throughput, scalability, and hardware efficiency
  • Improve reliability and fault tolerance for long-running training jobs
  • Develop training frameworks and infrastructure tooling
  • Collaborate with model researchers to support frontier experiments
  • Debug and resolve cross-layer performance bottlenecks
  • Build observability systems for training performance and reliability
  • Drive capacity planning and cluster utilization strategies
  • Contribute to long-term training infrastructure architecture
About RadixArk

RadixArk is an infrastructure-first AI company built by engineers who have shipped production AI systems, created SGLang (20K+ GitHub stars, the fastest open LLM serving engine), and developed Miles, our large-scale RL framework.

We build world-class infrastructure for AI training and inference and partner with frontier AI teams and cloud providers.

Our team has coordinated training across 10,000+ GPUs and optimized kernels serving billions of tokens daily.

Join us in building the infrastructure that trains the next generation of AI.
Compensation

We offer competitive compensation with meaningful equity, comprehensive benefits, and flexible work arrangements. Compensation depends on location, experience, and level.
Equal Opportunity

RadixArk is an Equal Opportunity Employer and welcomes candidates from all backgrounds.