Senior Principal Machine Learning Engineer, Distributed vLLM and Kubernetes
Location: Boston, Suffolk County, Massachusetts, 02298, USA
Company: Red Hat, Inc.
Employment type: Full Time
Listed on: 2026-02-08
Job specializations:
- Software Development: Cloud Engineer - Software, Software Engineer, DevOps, AI Engineer
## Job Description
Work arrangement: Hybrid (Boston)
Job requisition ID: R-050634
## Job Summary
At Red Hat, we believe the future of AI is open, and we are on a mission to bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat AI Inference Engineering team accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As leading developers and maintainers of the vLLM and llm-d projects, and inventors of state-of-the-art techniques for model quantization and sparsification, our team provides a stable platform for enterprises to build, optimize, and scale LLM deployments.
As a Machine Learning Engineer focused on distributed vLLM infrastructure in the llm-d project, you will be at the forefront of innovation, collaborating with our team to tackle the most pressing challenges in scalable inference systems and Kubernetes-native deployments. Your work with machine learning, distributed systems, high performance computing, and cloud infrastructure will directly impact the development of our cutting-edge software platform, helping to shape the future of AI deployment and utilization.
If you want to solve cutting-edge problems at the intersection of deep learning, distributed systems, and cloud-native infrastructure the open-source way, this is the role for you.
Join us in shaping the future of AI!

## What you will do
* Architect and lead the implementation of new features and solutions for Red Hat AI Inference.
* Lead and foster a healthy upstream open source community.
* Design, develop, and maintain distributed inference infrastructure leveraging Kubernetes APIs, operators, and the Gateway Inference Extension API for scalable LLM deployments.
* Design, develop, and maintain system components in Go and/or Rust to integrate with the vLLM project and manage distributed inference workloads.
* Design, develop, and maintain KV cache-aware routing and scoring algorithms to optimize memory utilization and request distribution in large-scale inference deployments.
* Enhance the resource utilization, fault tolerance, and stability of the inference stack.
* Design, develop, and test various inference optimization algorithms.
* Actively lead and facilitate technical design discussions and propose innovative solutions to complex challenges for high-impact projects.
* Contribute to a culture of continuous improvement by sharing recommendations and technical knowledge with team members.
* Collaborate with product management, engineering, and other cross-functional teams to analyze and clarify business requirements.
* Communicate effectively with stakeholders and team members to ensure proper visibility of development efforts.
* Mentor, influence, and coach a distributed team of engineers.
* Provide timely and constructive code reviews.
* Represent RHAI in external engagements, including industry events, customer meetings, and open source communities.

## What you will bring
* Strong proficiency in Python, Go, and at least one of the following: Rust, C, or C++.
* Strong experience with cloud-native Kubernetes service mesh technologies/stacks such as Istio, Cilium, Envoy (WASM filters), and CNI.
* A solid understanding of Layer 7 networking, HTTP/2, gRPC, and the fundamentals of API gateways and reverse proxies.
* Knowledge of serving runtime technologies for hosting LLMs, such as vLLM, SGLang, TensorRT-LLM, etc.
* Excellent written and verbal communication skills, capable of interacting effectively with both technical and non-technical team members.
* Experience providing technical leadership on a global team and delivering on a vision.
* Autonomous work ethic and the ability to thrive in a dynamic, fast-paced environment.

## The following is considered a plus
* Knowledge of high-performance networking protocols and technologies, including UCX, RoCE, InfiniBand, and RDMA.
* Deep experience with the Kubernetes ecosystem, including core concepts, custom APIs, operators, and the Gateway API inference extension for GenAI…
Position Requirements:
10+ years of work experience