Principal GPU Performance Engineer - Artificial Intelligence
Listed on 2026-02-16
IT/Tech
AI Engineer, Systems Engineer
WHAT YOU DO AT AMD CHANGES EVERYTHING
At AMD, our mission is to build great products that accelerate next‑generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture.
We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond.
Together, we advance your career.
THE ROLE:
We are seeking a GPU Performance Engineer to optimize AI training workloads and guide the evolution of next‑generation AMD Instinct GPU architectures. In this role, you will work across the software and hardware stack to profile, analyze, and improve the efficiency of training foundation models on large‑scale GPU clusters. You will collaborate with framework developers, distributed systems engineers, and hardware architects to ensure our GPUs deliver industry‑leading training performance today while shaping the designs of tomorrow.
THE PERSON:
We value curiosity and innovation, and we’re committed to providing a challenging and supportive environment where you can learn and grow. As you collaborate with your peers, you’ll have the opportunity to make a real impact and contribute to our organization’s success.
KEY RESPONSIBILITIES:
- Profile and optimize large‑scale AI training workloads (transformers, multimodal, diffusion, recommender systems) across multi‑node, multi‑GPU clusters.
- Identify bottlenecks in compute, memory, interconnects, and communication libraries (NCCL/RCCL, MPI), and deliver optimizations to maximize scaling efficiency.
- Collaborate with compiler/runtime teams to improve kernel performance, scheduling, and memory utilization.
- Develop and maintain benchmarks and traces representative of foundation model training workloads.
- Provide performance insights to AMD Instinct GPU architecture teams, informing hardware/software co‑design decisions for future architectures.
- Partner with framework teams (PyTorch, JAX, TensorFlow) to upstream performance improvements and enable better scaling APIs.
- Present findings to cross‑functional teams and leadership, shaping both software and hardware roadmaps.
PREFERRED EXPERIENCE:
- Strong expertise in GPU tuning and optimization (CUDA, ROCm, or equivalent).
- Understanding of GPU microarchitecture (execution units, memory hierarchy, interconnects, warp scheduling).
- Hands‑on experience with distributed training frameworks and communication libraries (e.g., PyTorch DDP, DeepSpeed, Megatron‑LM, NCCL/RCCL, MPI).
- Advanced Linux, container (e.g., Docker), and GitHub skills.
- Proficiency in Python or C++ for performance‑critical development.
- Familiarity with large‑scale AI training infrastructure (NVLink, InfiniBand, PCIe, cloud/HPC clusters).
- Experience with benchmarking methodologies, performance analysis/profiling (e.g., Nsight), and performance monitoring tools.
- Experience scaling training to thousands of GPUs for foundation models a plus.
- Strong track record of optimizing large‑scale AI systems in cloud or HPC environments is desired.