Head of Inference Kernels
Listed on 2026-02-16
IT/Tech
Systems Engineer, Data Engineer
About Etched
Etched is building the world’s first AI inference system purpose-built for transformers, delivering over 10x higher performance and dramatically lower cost and latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep, parallel chain-of-thought reasoning agents. Backed by hundreds of millions from top-tier investors and staffed by leading engineers, Etched is redefining the infrastructure layer for the fastest-growing industry in history.
Job Summary
As a core member of the team, you will play a pivotal role in leading a high-performing team to build a suite of optimized kernels and implement highly optimized inference stacks for a variety of state-of-the-art transformer models (e.g., Llama-3, Llama-4, DeepSeek-R1, Qwen-3, Stable Diffusion 3). You will be responsible for managing and scaling a high-performance team that pioneers novel model mapping strategies while co-designing inference-time algorithms (e.g., speculative and parallel decoding, prefill-decode disaggregation).
Key Responsibilities
Architect Best-in-Class Inference Performance on Sohu: Deliver continuous batching throughput exceeding a B200 by ≥10x on priority workloads.
Develop Best-in-Class Inference Mega-Kernels: Build complex fused kernels, from basics such as reordering and fusion to more advanced work such as overlapping the computation and transmission of intermediate values across sequential matmuls, that increase chip utilization and reduce inference latency. Validate these optimizations through benchmarking and regression testing in production pipelines.
Architect Model Mapping Strategies: Develop system-level optimizations using a mix of techniques such as tensor parallelism and expert parallelism for optimal performance.
Hardware-Software Co-design of Inference-Time Algorithmic Innovation: Develop and deploy production-ready inference-time algorithmic improvements (e.g., speculative decoding, prefill-decode disaggregation, KV cache offloading).
Build a Scalable Team and Roadmap: Grow and retain a team of high-performing inference optimization engineers.
Cross-Functional Performance Alignment: Ensure the inference stack and performance goals are aligned with the software infrastructure teams (e.g., runtime and scheduling support), GTM (e.g., latency SLAs, workload targets), and hardware teams (e.g., instruction design, memory bandwidth) for future generations of our hardware.
Develop optimized kernels for multi-head latent attention on Sohu
Develop optimization strategies to optimally overlap compute and communication in mixture-of-experts layers
Organize the team to deliver production-ready forward-pass implementations of new state-of-the-art models within two weeks of their release, and build the infrastructure needed to do so.
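To illustrate the speculative decoding mentioned among the inference-time algorithms above, here is a minimal toy sketch (not Etched's implementation; `draft_model` and `target_model` are hypothetical stand-ins for a small draft model and a large target model). A draft model proposes a block of tokens and the target model verifies them, so each verification round can emit several tokens while the output stays identical to target-only greedy decoding:

```python
import random


def draft_model(prefix, k):
    """Toy stand-in for a small, fast draft model: proposes k next tokens."""
    random.seed(sum(prefix) + len(prefix))
    return [random.randint(0, 9) for _ in range(k)]


def target_model(prefix):
    """Toy stand-in for the large target model: returns its greedy next token."""
    random.seed(sum(prefix) * 7 + len(prefix))
    return random.randint(0, 9)


def speculative_step(prefix, k=4):
    """One round of greedy speculative decoding.

    The draft proposes k tokens; the target checks each proposed token
    against its own greedy choice. Accept the longest agreeing prefix,
    then append the target's own token as a correction (or as a bonus
    token if all k proposals were accepted). In a real system the k
    verifications run in one batched target pass; here they are
    simulated sequentially.
    """
    proposal = draft_model(prefix, k)
    accepted = []
    for tok in proposal:
        expected = target_model(prefix + accepted)
        if tok == expected:
            accepted.append(tok)       # draft and target agree: accept
        else:
            accepted.append(expected)  # disagree: take target's token, stop
            return accepted
    accepted.append(target_model(prefix + accepted))  # bonus token
    return accepted
```

Each round emits between 1 and k+1 tokens, and because every accepted token equals the target's greedy choice for its context, the generated sequence matches what the target model would produce on its own.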