Senior GenAI Algorithms Engineer — Post-Training Optimizations
Listed on 2026-01-04
Software Development
AI Engineer, Machine Learning / ML Engineer
NVIDIA is at the forefront of the generative AI revolution! The Algorithmic Model Optimization Team focuses on optimizing generative AI models such as large language models (LLMs) and diffusion models for maximal inference efficiency. Techniques include quantization, speculative decoding, sparsity, knowledge distillation, pruning, neural architecture search, and streamlined deployment strategies with open‑source inference frameworks. We are seeking a Senior Deep Learning Algorithms Engineer to optimize innovative LLMs, VLMs, and multimodal models.
In this role you will design, implement, and productionize model optimization algorithms for inference and deployment on NVIDIA’s latest hardware platforms, emphasizing ease of use, compute and memory efficiency, and optimal accuracy–performance trade‑offs through software‑hardware co‑design.
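For context on the kind of work involved, the sketch below applies one of the techniques named above, post‑training quantization, using stock PyTorch. It is a deliberately simplified, hypothetical illustration (the toy model and layer choices are placeholders), not the team's production tooling, which centers on TensorRT Model Optimizer and TensorRT‑LLM.

```python
import torch
import torch.nn as nn

# Toy MLP block standing in for an LLM submodule (hypothetical example).
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.GELU(),
    nn.Linear(4096, 1024),
)
model.eval()

# Post-training dynamic quantization: Linear weights are converted to int8,
# while activations remain fp32 and are quantized on the fly at runtime.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Run a quick sanity-check inference on the quantized model.
with torch.no_grad():
    out = quantized(torch.randn(1, 1024))
print(out.shape)  # torch.Size([1, 1024])
```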
- Design and build modular, scalable model optimization software platforms that deliver exceptional user experiences while supporting diverse AI models and optimization techniques to drive widespread adoption.
- Explore, develop, and integrate innovative deep learning optimization algorithms (e.g., quantization, speculative decoding, sparsity) into NVIDIA's AI software stack, e.g., TensorRT Model Optimizer, NeMo/Megatron, and TensorRT‑LLM.
- Construct and curate large, problem‑specific datasets for post‑training, fine‑tuning, and reinforcement learning.
- Deploy optimized models into leading OSS inference frameworks and contribute specialized APIs, model‑level optimizations, and new features tailored to the latest NVIDIA hardware capabilities.
- Partner with NVIDIA teams to deliver model optimization solutions for customer use cases, ensuring optimal end‑to‑end workflows and balanced accuracy‑performance trade‑offs.
- Drive continuous innovation in deep learning inference performance to strengthen NVIDIA platform integration and expand market adoption across the AI inference ecosystem.
- Master’s, Ph.D., or equivalent experience in Computer Science, Artificial Intelligence, Applied Mathematics, or a related field.
- 5+ years of relevant work or research experience in deep learning.
- Strong software design skills, including debugging, performance analysis, and test development.
- Proficiency in Python, PyTorch, and modern ML frameworks/tools.
- Proven foundation in algorithms and programming fundamentals.
- Strong written and verbal communication skills, with the ability to work both independently and collaboratively in a fast‑paced environment.
- Contributions to PyTorch, Megatron‑LM, NeMo, TensorRT‑LLM, vLLM, SGLang, or other machine learning training and inference frameworks.
- Hands‑on training, fine‑tuning, or reinforcement learning experience on LLM or VLM models with large‑scale GPU clusters.
- Proficient in GPU architectures and compilation stacks, adept at analyzing and debugging end‑to‑end performance.
- Familiarity with NVIDIA’s deep learning SDKs (e.g., NeMo, TensorRT, TensorRT‑LLM).
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is $184,000 USD – $287,500 USD. You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until September 20, 2025. NVIDIA is committed to fostering a diverse work environment and is an equal opportunity employer. NVIDIA does not discriminate on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.