
Principal Software Engineer - Dynamo

Job in Santa Clara, Santa Clara County, California, 95053, USA
Listing for: NVIDIA Corporation
Full Time position
Listed on 2026-01-02
Job specializations:
  • Software Development
    Software Engineer, Cloud Engineer - Software
Salary/Wage Range or Industry Benchmark: USD 125,000 - 150,000 per year
Job Description & How to Apply Below
Locations: US, CA, Santa Clara; US, CA, Remote
Time type: Full time
Posted: Today
Job requisition: JR2010290

NVIDIA Dynamo is an innovative, open-source platform focused on efficient, scalable inference for large language and reasoning models in distributed GPU environments. By bringing to bear sophisticated techniques in serving architecture, GPU resource management, and intelligent request handling, Dynamo achieves high-performance AI inference for demanding applications. Our team is addressing the most challenging issues in distributed AI infrastructure, and we’re searching for engineers enthusiastic about building the next generation of scalable AI systems.

As a Principal Software Engineer on the Dynamo project, you will address some of the most sophisticated and high-impact challenges in distributed inference, including:
* Dynamo k8s Serving Platform:
Build the Kubernetes deployment and workload management stack for Dynamo to facilitate inference deployments, identify bottlenecks, and apply optimization techniques to fully use hardware capacity.
* Scalability & Reliability:
Develop robust, production-grade inference workload management systems that scale from a handful to thousands of GPUs, supporting a variety of LLM frameworks (e.g., TensorRT-LLM, vLLM, SGLang).
* Disaggregated Serving:
Architect and optimize the separation of prefill (context ingestion) and decode (token generation) phases across distinct GPU clusters to improve throughput and resource utilization. Contribute to embedding disaggregation for multi-modal models (Vision-Language models, Audio Language Models, Video Language Models).
* Dynamic GPU Scheduling:
Develop and refine Planner algorithms for real-time allocation and rebalancing of GPU resources based on fluctuating workloads and system bottlenecks, ensuring peak performance at scale.
* Intelligent Routing:
Enhance the smart routing system to efficiently direct inference requests to GPU worker replicas with relevant KV cache data, minimizing re-computation and latency for sophisticated, multi-step reasoning tasks.
* Distributed KV Cache Management:
Innovate in the management and transfer of large KV caches across heterogeneous memory and storage hierarchies, using the NVIDIA Optimized Transfer Library (NIXL) for low-latency, cost-effective data movement.
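The intelligent-routing idea above can be illustrated with a minimal sketch. This is not the Dynamo router or its API; all names (`Worker`, `route`, `shared_prefix_len`) are hypothetical, and the "KV cache" is reduced to stored token prefixes. It only shows the scoring principle: send a request to the worker that can reuse the longest cached prefix, breaking ties by load.

```python
# Hypothetical sketch of KV-cache-aware routing (NOT the Dynamo API).
# Each worker advertises the token prefixes it has cached; a request is
# routed to the worker with the longest matching prefix, so the fewest
# prompt tokens have to be recomputed. Ties go to the least-loaded worker.

from dataclasses import dataclass, field

@dataclass
class Worker:
    name: str
    cached_prefixes: list = field(default_factory=list)  # tuples of token ids
    active_requests: int = 0

def shared_prefix_len(a, b):
    """Length of the common leading run of token ids."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def route(workers, prompt_tokens):
    """Pick the worker that can reuse the most KV cache; ties go to load."""
    def score(w):
        best = max((shared_prefix_len(p, prompt_tokens)
                    for p in w.cached_prefixes), default=0)
        return (best, -w.active_requests)
    return max(workers, key=score)

workers = [
    Worker("gpu-0", cached_prefixes=[(1, 2, 3, 4)]),
    Worker("gpu-1", cached_prefixes=[(1, 2, 9)], active_requests=2),
]
print(route(workers, (1, 2, 3, 5)).name)  # gpu-0: reuses a 3-token prefix
```

A production router additionally has to track cache evictions and keep its view of worker state fresh, which is where most of the real engineering lives.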
** What you'll be doing:
* Collaborate on the design and development of the Dynamo Kubernetes stack.
* Introduce new features to the Dynamo Python SDK and Dynamo Rust Runtime Core Library.
* Design, implement, and optimize distributed inference components in Rust and Python.
* Contribute to the development of disaggregated serving for Dynamo-supported inference engines (vLLM, SGLang, TRT-LLM, llama.cpp, mistral.rs).
* Improve intelligent routing and KV-cache management subsystems.
* Contribute to open-source repositories, participate in code reviews, and assist with issue triage on GitHub.
* Work closely with the community to address issues, capture feedback, and evolve the framework’s APIs and architecture.
* Write clear documentation and contribute to user and developer guides.
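The disaggregated serving responsibility above can likewise be sketched. This is a toy illustration, not the Dynamo implementation: the "model" is a dummy increment, and the KV-cache hand-off between pools (done with NIXL in the real system) is reduced to a plain function argument. It only shows the split that lets prefill and decode scale independently.

```python
# Hypothetical sketch of disaggregated serving (NOT the Dynamo implementation).
# Prefill ingests the prompt and produces a KV cache; decode consumes the
# transferred cache and generates tokens. Separating the two phases lets each
# GPU pool be sized and scheduled for its own bottleneck.

def prefill(prompt_tokens):
    """Stand-in for a prefill worker: build a KV cache for the prompt."""
    # Real workers compute per-layer attention keys/values; here we just
    # record the processed prefix.
    return {"prefix": tuple(prompt_tokens)}

def decode(kv_cache, max_new_tokens):
    """Stand-in for a decode worker: generate tokens from a received cache."""
    out = []
    last = kv_cache["prefix"][-1]
    for _ in range(max_new_tokens):
        last = last + 1          # dummy "model" step: next token = last + 1
        out.append(last)
    return out

def serve(prompt_tokens, max_new_tokens=3):
    kv = prefill(prompt_tokens)          # runs on the prefill GPU pool
    # In a real deployment the KV cache is moved between pools over a
    # low-latency transport (e.g., NIXL); here it is a function argument.
    return decode(kv, max_new_tokens)    # runs on the decode GPU pool

print(serve([10, 11, 12]))  # [13, 14, 15]
```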
** What we need to see:
* BS/MS or higher in computer engineering, computer science, or a related engineering field (or equivalent experience).
* 15+ years of proven experience in a related field.
* Strong proficiency in systems programming (Rust and/or C++), with experience in Python for workflow and API development.
* Experience with Go for Kubernetes controller and operator development.
* Deep understanding of distributed systems, parallel computing, and GPU architectures.
* Experience with cloud-native deployment and container orchestration (Kubernetes, Docker).
* Experience with large-scale inference serving, LLMs, or similar high-performance AI workloads.
* Background with memory management, data transfer optimization, and multi-node orchestration.
* Familiarity with open-source development workflows (GitHub, continuous integration and continuous deployment).
* Excellent problem-solving and communication skills.
** Ways to stand out from the crowd:
* Prior contributions to…