Forward Deployed Engineer, AI Inference; vLLM and Kubernetes
Listed on 2026-02-28
IT/Tech
Systems Engineer, Data Engineer, AI Engineer
The vLLM and LLM-D Engineering team at Red Hat is looking for a customer‑obsessed developer to join our team as a Forward Deployed Engineer. In this role, you will not just build software; you will be the bridge between our cutting‑edge inference platform (LLM-D and vLLM) and our customers’ most critical production environments.
You will interface directly with the engineering teams at our customers to deploy, optimise, and scale distributed Large Language Model (LLM) inference systems. You will solve “last‑mile” infrastructure challenges that defy off‑the‑shelf solutions, ensuring that massive models run with low latency and high throughput on complex Kubernetes clusters. This is not a sales engineering role; you will be part of the core vLLM and LLM-D engineering team.
What You Will Do
- Orchestrate Distributed Inference: Deploy and configure LLM-D and vLLM on Kubernetes clusters. You will set up advanced deployments such as disaggregated serving, KV-cache-aware routing, and KV-cache offloading to maximise hardware utilisation.
- Optimise for Production: Run performance benchmarks, tune vLLM parameters, and configure intelligent inference routing policies to meet SLOs for latency and throughput. You care about Time Per Output Token (TPOT), GPU utilisation, GPU networking optimisations, and Kubernetes scheduler efficiency (see the TPOT probe sketched after this list).
- Code Side-by-Side: Work directly with customer engineers to write production-quality code (Python/Go/YAML) that integrates our inference engine into their existing Kubernetes ecosystem.
- Solve the “Unsolvable”: Debug complex interaction effects between specific model architectures (e.g., MoE, large context windows), hardware accelerators (NVIDIA GPUs, AMD GPUs, TPUs), and Kubernetes networking (Envoy/Istio).
- Feedback Loop: Act as “Customer Zero” for our core engineering teams and channel field learnings back to product development, influencing the roadmap for LLM-D and vLLM features.
- Travel only as needed to customers to present, demo, or help execute proofs of concept.
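For a concrete flavour of the benchmarking work above, here is a minimal sketch of a TPOT probe. It assumes a vLLM server exposing the OpenAI-compatible completions API on localhost:8000; the URL and model name are placeholders, not details from this posting.

```python
"""Minimal TPOT probe against an OpenAI-compatible vLLM endpoint.

Assumes `vllm serve <model>` is listening on localhost:8000; the URL and
model name below are illustrative placeholders.
"""
import time

import requests

BASE_URL = "http://localhost:8000/v1/completions"  # hypothetical local server
MODEL = "meta-llama/Llama-3.1-8B-Instruct"         # placeholder model name

payload = {
    "model": MODEL,
    "prompt": "Explain KV caching in one paragraph.",
    "max_tokens": 128,
    "stream": True,
}

start = time.perf_counter()
token_times = []

with requests.post(BASE_URL, json=payload, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        # vLLM streams server-sent events, one `data: {...}` chunk per token
        # by default; we treat each chunk as one token.
        if not line or not line.startswith(b"data: ") or line == b"data: [DONE]":
            continue
        token_times.append(time.perf_counter())

if len(token_times) > 1:
    # TTFT: wait for the first token (dominated by prefill).
    # TPOT: mean gap between subsequent tokens (decode speed).
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    print(f"TTFT: {(token_times[0] - start) * 1000:.1f} ms")
    print(f"TPOT: {sum(gaps) / len(gaps) * 1000:.1f} ms/token")
```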
What You Will Bring
- 8+ Years of Engineering Experience: You have an extensive track record in Backend Systems, SRE, or Infrastructure Engineering.
- Customer Fluency: You speak both “Systems Engineering” and “Business Value”.
- Bias for Action: You prefer rapid prototyping and iteration over theoretical perfection. You are comfortable operating in ambiguity and taking ownership of the outcome.
- Deep Kubernetes Expertise: You are fluent in K8s primitives, from defining custom resources (CRDs, Operators, Controllers) to configuring modern ingress via the Gateway API. You have deep experience with stateful workloads and high-performance networking, including the ability to tune scheduler logic (affinity/tolerations) for GPU workloads and troubleshoot complex CNI failures (a scheduling sketch follows this list).
- AI Inference Proficiency: You understand how an LLM forward pass works: what KV caching is, why prefill/decode disaggregation matters, why context length impacts performance, and how continuous batching works in vLLM (see the engine sketch after this list).
- Systems Programming: Proficiency in Python (for model interfaces) and Go (for Kubernetes controllers/scheduler logic).
- Infrastructure as Code: Experience with Helm, Terraform, or similar tools for reproducible deployments.
- Cloud & GPU Hardware Fluency: You are comfortable spinning up clusters and deploying LLMs on bare-metal and hyperscaler Kubernetes clusters.
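As a flavour of the scheduler tuning mentioned under Kubernetes expertise, here is a minimal sketch that builds a GPU pod spec with a toleration and node affinity using the official kubernetes Python client. The nvidia.com/gpu key is the conventional NVIDIA device-plugin name; the gpu.example.com/class label, pod name, and values are hypothetical.

```python
"""Sketch: pinning a GPU inference pod via a toleration and node affinity.

Uses the official `kubernetes` client only to build and print the spec;
applying it to a real cluster is out of scope here.
"""
from kubernetes import client

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="vllm-decode-0"),  # hypothetical name
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="vllm",
                image="vllm/vllm-openai:latest",  # public vLLM serving image
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # NVIDIA device-plugin resource
                ),
            )
        ],
        # Let the pod land on tainted GPU nodes.
        tolerations=[
            client.V1Toleration(
                key="nvidia.com/gpu", operator="Exists", effect="NoSchedule"
            )
        ],
        # Require nodes carrying an illustrative GPU-class label.
        affinity=client.V1Affinity(
            node_affinity=client.V1NodeAffinity(
                required_during_scheduling_ignored_during_execution=client.V1NodeSelector(
                    node_selector_terms=[
                        client.V1NodeSelectorTerm(
                            match_expressions=[
                                client.V1NodeSelectorRequirement(
                                    key="gpu.example.com/class",  # hypothetical label
                                    operator="In",
                                    values=["a100", "h100"],
                                )
                            ]
                        )
                    ]
                )
            )
        ),
    ),
)

# Print the manifest the scheduler would see.
print(client.ApiClient().sanitize_for_serialization(pod))
```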
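And for the inference-internals side, a sketch of the vLLM engine arguments that govern KV-cache capacity and continuous batching, using vLLM's offline LLM API. It needs a GPU host with the model available; the model name and values are illustrative starting points, not tuned recommendations.

```python
"""Sketch: knobs governing KV-cache capacity and continuous batching in vLLM."""
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
    gpu_memory_utilization=0.90,  # fraction of VRAM for weights + KV cache
    max_model_len=8192,           # context window: longer = more KV blocks/seq
    max_num_seqs=64,              # cap on sequences co-scheduled per step
)

# Continuous batching: prompts share forward passes, and finished sequences
# free their KV blocks immediately so waiting requests can be admitted.
outputs = llm.generate(
    ["What is KV caching?", "Why does prefill/decode disaggregation matter?"],
    SamplingParams(max_tokens=64),
)
for out in outputs:
    print(out.outputs[0].text.strip()[:80])
```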
Nice to Have
- Experience contributing to open-source AI infrastructure projects (e.g., KServe, vLLM, Kubernetes).
- Knowledge of Envoy Proxy or Inference Gateway (IGW).
- Familiarity with model optimisation techniques such as Quantisation (AWQ, GPTQ) and Speculative Decoding (a minimal sketch follows this list).
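To illustrate the quantisation item, a minimal sketch of serving a pre-quantised AWQ checkpoint with vLLM. The checkpoint name is illustrative, and speculative decoding is noted only in a comment because the shape of its engine argument has varied across vLLM releases.

```python
"""Sketch: running a pre-quantised AWQ checkpoint with vLLM."""
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/Llama-2-7B-AWQ",  # illustrative AWQ checkpoint
    quantization="awq",               # select the AWQ inference kernels
    # Speculative decoding (draft model or n-gram lookahead) is configured
    # via a separate engine argument whose name depends on the vLLM version.
)

print(llm.generate(["Hello"], SamplingParams(max_tokens=16))[0].outputs[0].text)
```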
The salary range for this position is $ – $. The actual offer will be based on your qualifications.
Pay Transparency
Red Hat determines compensation based on several factors including but not limited to job location, experience, applicable skills and training, external market value, and internal pay equity. Annual salary is one component of Red Hat’s compensation package. This position may also be eligible for bonus, commission, and/or equity. For positions with Remote‑US locations,…