Machine Learning Engineer, vLLM Inference - Tool Calling and Structured Output
Listed on 2026-01-07
Software Development
AI Engineer, Machine Learning / ML Engineer
At Red Hat, we believe the future of AI is open, and we are on a mission to bring the power of open‑source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As leading contributors and maintainers of the vLLM and LLM‑D projects and inventors of state‑of‑the‑art techniques for model quantization and sparsification, our team provides a stable platform for enterprises to build, optimize, and scale LLM deployments.
In this role, you will build and maintain subsystems that allow vLLM to speak the language of tools. You will bridge the gap between probabilistic token generation and deterministic schema compliance, working directly on tool parsers to interpret raw model outputs and structured output engines to guide generation at the logit level.
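For context, "guiding generation at the logit level" means constraining the sampler at every decoding step so that only tokens permitted by the target schema or grammar can be emitted. The sketch below is a minimal illustration of that idea, not vLLM's actual implementation; the apply_allowed_token_mask helper and the toy allowed-token set are hypothetical.

```python
# Minimal sketch of logit-level guidance: tokens a constraint disallows are
# masked to -inf before sampling, so only compliant continuations survive.
import torch

def apply_allowed_token_mask(logits: torch.Tensor, allowed_token_ids: list[int]) -> torch.Tensor:
    """Return a copy of `logits` with every token outside `allowed_token_ids` set to -inf."""
    masked = torch.full_like(logits, float("-inf"))
    masked[allowed_token_ids] = logits[allowed_token_ids]
    return masked

# Toy vocabulary of 10 tokens; suppose a grammar allows only tokens 2, 5, and 7
# to appear next (e.g. the opening brace or whitespace of a JSON object).
logits = torch.randn(10)
constrained = apply_allowed_token_mask(logits, [2, 5, 7])
next_token = int(torch.argmax(constrained))  # greedy decoding under the constraint
assert next_token in {2, 5, 7}
```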
What You Will Do
- Write robust Python and Pydantic code, working on vLLM systems, high‑performance machine learning primitives, performance analysis and modeling, and numerical methods
- Contribute to the design, development, and testing of the function-calling, tool-call parsing, and structured output subsystems in vLLM
- Participate in technical design discussions and provide innovative solutions to complex problems
- Give thoughtful and prompt code reviews
- Mentor and guide other engineers and foster a culture of continuous learning and innovation
What You Will Bring
- Strong experience in Python and Pydantic
- Strong understanding of core LLM inference concepts such as logit processing (the logit generation → sampling → decoding loop)
- Deep familiarity with the OpenAI Chat Completions API specification (see the sketch after this list)
- Deep familiarity with libraries like Outlines, XGrammar, Guidance, or Llama.cpp grammars
- Proficiency with efficient parsing techniques (e.g., incremental parsing) is a strong plus
- Proficiency with Jinja2 chat templates
- Familiarity with beam search and greedy decoding in the context of constraints
- Familiarity with LLM inference metrics and trade‑offs
- Experience with tensor math libraries such as PyTorch is a strong plus
- Strong communication skills with both technical and non‑technical team members
- BS or MS in computer science, computer engineering, mathematics, or a related field; a PhD in an ML-related domain is considered a plus
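As a hedged illustration of how these pieces fit together (an assumed usage pattern, not code from vLLM, and assuming Pydantic v2): a Pydantic model can supply both the JSON schema for an OpenAI-style tool definition and the validator a tool-call parser runs on the raw arguments a model emits. The GetWeather tool and the sample arguments below are made up for the example.

```python
import json
from pydantic import BaseModel

class GetWeather(BaseModel):
    """Arguments for a hypothetical get_weather tool."""
    city: str
    unit: str = "celsius"

# OpenAI Chat Completions style tool definition, with the Pydantic-derived
# JSON schema used as the `parameters` block.
tool_definition = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": GetWeather.model_json_schema(),
    },
}

# A raw string a model might emit; a tool-call parser would extract it from
# the generated text and validate it against the schema.
raw_arguments = '{"city": "Raleigh", "unit": "fahrenheit"}'
call = GetWeather.model_validate_json(raw_arguments)

print(json.dumps(tool_definition, indent=2))
print(call)
```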
Salary range: $ – $ (Actual offer will be based on qualifications)
Location: Raleigh, NC
Benefits
- Comprehensive medical, dental, and vision coverage
- Flexible Spending Account – healthcare and dependent care
- Health Savings Account – high deductible medical plan
- Retirement 401(k) with employer match
- Paid time off and holidays
- Paid parental leave plans for all new parents
- Leave benefits including disability, paid family medical leave, and paid military leave
- Additional benefits including employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more
Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.
Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email appl