
Principal Software Engineer – Large-Scale LLM Memory and Storage Systems

Job in Vancouver, Clark County, Washington, 98662, USA
Listing for: NVIDIA
Full Time position
Listed on 2026-02-12
Job specializations:
  • Software Development
    Software Engineer, AI Engineer
Salary/Wage Range: $272,000 – $425,500 USD per year
Job Description & How to Apply Below
Position: Principal Software Engineer – Large-Scale LLM Memory and Storage Systems

We’re looking for a Principal Systems Engineer to define the vision and roadmap for memory management in large‑scale LLM serving and storage systems. Join NVIDIA and help build Dynamo, a high‑throughput, low‑latency inference framework designed for serving generative AI and reasoning models in multi‑node distributed environments.

About the Platform

NVIDIA Dynamo is built in Rust for performance and Python for extensibility. It orchestrates GPU shards, routes requests, and manages shared KV cache across heterogeneous clusters so that many accelerators feel like a single system at datacenter scale.

Responsibilities
  • Design and evolve a unified memory layer that spans GPU memory, pinned host memory, RDMA‑accessible memory, SSD tiers, and remote file/object/cloud storage to support large‑scale LLM inference.
  • Architect and implement deep integrations with leading LLM serving engines (e.g., vLLM, SGLang, TensorRT‑LLM), focusing on KV‑cache offload, reuse, and remote sharing across disaggregated clusters.
  • Co‑design interfaces and protocols that enable disaggregated prefill, peer‑to‑peer KV‑cache sharing, and multi‑tier KV‑cache storage (GPU, CPU, local disk, and remote memory) for high‑throughput, low‑latency inference.
  • Partner closely with GPU architecture, networking, and platform teams to exploit GPUDirect, RDMA, NVLink, and similar technologies for low‑latency KV‑cache access and sharing across heterogeneous accelerators and memory pools.
  • Mentor senior and junior engineers, set technical direction for memory and storage subsystems, and represent the team in internal reviews and external forums (open source, conferences, and customer‑facing technical deep dives).
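To illustrate the multi‑tier KV‑cache idea referred to above (a fast tier such as GPU HBM backed by a larger, slower tier such as host DRAM or SSD, with promotion on hit and LRU demotion on eviction), here is a minimal sketch in Python. All class and tier names are illustrative assumptions for this posting, not Dynamo's actual API.

```python
from collections import OrderedDict

class TieredKVCache:
    """Illustrative two-tier KV cache: a small, fast tier backed by a
    larger, slower tier. Hits in the slow tier promote the entry to the
    fast tier; the fast tier demotes least-recently-used entries when full.
    (Hypothetical sketch; tier names stand in for GPU HBM and host/SSD.)"""

    def __init__(self, fast_capacity: int):
        self.fast = OrderedDict()   # stands in for GPU HBM
        self.slow = {}              # stands in for host DRAM / local SSD
        self.fast_capacity = fast_capacity

    def put(self, key, value):
        # New entries land in the fast tier; overflow demotes the LRU entry.
        self.fast[key] = value
        self.fast.move_to_end(key)
        self._evict()

    def get(self, key):
        if key in self.fast:        # fast-tier hit
            self.fast.move_to_end(key)
            return self.fast[key]
        if key in self.slow:        # slow-tier hit: promote to fast tier
            value = self.slow.pop(key)
            self.put(key, value)
            return value
        return None                 # miss: caller recomputes the prefill

    def _evict(self):
        while len(self.fast) > self.fast_capacity:
            k, v = self.fast.popitem(last=False)  # demote LRU to slow tier
            self.slow[k] = v
```

A real system would track block granularity, reference counts, and remote (RDMA/object-store) tiers, but the promote/demote loop above is the core of any multi‑tier cache hierarchy.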
Qualifications
  • Master’s or PhD degree or equivalent experience.
  • 15+ years of experience building large‑scale distributed systems, high‑performance storage, or ML systems infrastructure in C/C++ and Python, with a track record of delivering production services.
  • Deep understanding of memory hierarchies (GPU HBM, host DRAM, SSD, and remote/object storage) and experience designing systems that span multiple tiers for performance and cost efficiency.
  • Experience with distributed caching or key‑value systems, especially designs optimized for low latency and high concurrency.
  • Hands‑on experience with networked I/O and RDMA/NVMe‑oF/NVLink‑style technologies, and familiarity with concepts like disaggregated and aggregated deployments for AI clusters.
  • Strong skills in profiling and optimizing systems across CPU, GPU, memory, and network, using metrics to drive architectural decisions and validate improvements in TTFT and throughput.
  • Excellent communication skills and prior experience leading cross‑functional efforts with research, product, and customer teams.
Ways to Stand Out
  • Prior contributions to open‑source LLM serving or systems projects focused on KV‑cache optimization, compression, streaming, or reuse.
  • Experience designing unified memory or storage layers that expose a single logical KV or object model across GPU, host, SSD, and cloud tiers, especially in enterprise or hyperscale environments.
  • Publications or patents in areas such as LLM systems, memory‑disaggregated architectures, RDMA/NVLink‑based data planes, or KV‑cache/CDN‑like systems for ML.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is $272,000 – $425,500. You will also be eligible for equity and benefits. Applications will be accepted until December 26, 2025.

We are committed to fostering a diverse work environment and are proud to be an equal opportunity employer. We do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
