
Member of Technical Staff, Kernel Engineering

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: Inferact
Full Time position
Listed on 2026-01-25
Job specializations:
  • IT/Tech
    AI Engineer, Machine Learning/ML Engineer
  • Engineering
    AI Engineer
Salary/Wage Range: $200,000 – $400,000 USD per year
Job Description & How to Apply Below

Inferact's mission is to grow vLLM as the world's AI inference engine and accelerate AI progress by making inference cheaper and faster. Founded by the creators and core maintainers of vLLM, we sit at the intersection of models and hardware—a position that took years to build.

About the Role

We're looking for a performance engineer to squeeze every FLOP out of modern accelerators. You'll write the kernels and low-level optimizations that make vLLM the fastest inference engine in the world. Your code will run on hundreds of accelerator types, from NVIDIA GPUs to emerging silicon. When hardware vendors develop new chips, they integrate with vLLM. You'll work directly with these teams to ensure we're extracting maximum performance from every generation of hardware.

Skills and Qualifications

Minimum qualifications:

  • Bachelor's degree or equivalent experience in computer science, engineering, or similar.

  • Deep experience writing CUDA kernels or equivalent (CuTeDSL, Triton, Tile Lang, Pallas).

  • Strong understanding of GPU architecture: memory hierarchy, warp scheduling, tiling, tensor cores.

  • Proficiency in C++ and Python with demonstrated ability to write high-performance code.

  • Experience with profiling tools (Nsight, rocprof) and performance optimization methodologies.

  • Obsession with benchmarks and squeezing every percentage point of speedup.

Preferred qualifications:

  • Experience with ML-specific kernel optimization (Flash Attention, fused kernels).

  • Knowledge of quantization techniques (INT8, FP8, mixed-precision).

  • Familiarity with multiple accelerator platforms (NVIDIA, AMD, TPU, Intel).

  • Experience with compiler technologies (LLVM, MLIR, XLA).

Bonus points if you have:

  • Kernel-related contributions to vLLM or other inference engine projects.

  • Contributions to open-source GPU, ML systems, or compiler optimization projects.

  • Written deep technical blogs on GPU optimization.

Logistics
  • Location: This role is based in San Francisco, California. Remote work within the US will be considered for exceptional candidates.

  • Compensation: Depending on background, skills, and experience, the expected annual salary range for this position is $200,000 - $400,000 USD + equity.

  • Visa sponsorship: We sponsor visas on a case-by-case basis.

  • Benefits: Inferact offers generous health, dental, and vision benefits, as well as a 401(k) company match.
