Software Development Engineer, AI/ML, AWS Neuron, Model Inference
Listed on 2026-02-21
Software Development
AI Engineer, Machine Learning / ML Engineer
Job Description
The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainium. The SDK includes an ML compiler, runtime, and application framework that integrates seamlessly with popular ML frameworks like PyTorch and JAX, enabling unparalleled inference and training performance.
The Inference Enablement and Acceleration team works across the stack from PyTorch to the hardware-software boundary, building systematic infrastructure, innovating new methods, and creating high‑performance kernels for ML functions. This role offers a unique opportunity at the intersection of machine learning, high‑performance computing, and distributed architectures, where you will shape the future of AI acceleration technology.
Key Job Responsibilities
- Design, develop, and optimize machine learning models and frameworks for deployment on custom ML hardware accelerators.
- Participate in all stages of the ML system development lifecycle including distributed computing‑based architecture design, implementation, performance profiling, hardware‑specific optimizations, testing and production deployment.
- Build infrastructure to systematically analyze and onboard multiple models with diverse architectures.
- Design and implement high‑performance kernels and features for ML operations, leveraging the Neuron architecture and programming models.
- Analyze and optimize system‑level performance across multiple generations of Neuron hardware.
- Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks.
- Implement optimizations such as fusion, sharding, tiling, and scheduling.
- Conduct comprehensive testing, including unit and end‑to‑end model testing with continuous deployment and releases through pipelines.
- Work directly with customers to enable and optimize their ML models on AWS accelerators.
- Collaborate across teams to develop innovative optimization techniques.
You will collaborate with a cross‑functional team of applied scientists, system engineers, and product managers to deliver state‑of‑the‑art inference capabilities for generative AI applications. Your work will involve debugging performance issues, optimizing memory usage, creating metrics, implementing automation, resolving software defects, and shaping the future of Neuron's inference stack across Amazon and the open‑source community. You will also build high‑impact solutions for a large customer base, participate in design discussions and code reviews, and communicate with internal and external stakeholders in a startup‑like development environment.
About the Team
The Inference Enablement and Acceleration team fosters a builder’s culture where experimentation is encouraged and impact is measurable. We emphasize collaboration, technical ownership, and continuous learning, supporting new members with mentorship and thorough code reviews. Our senior members provide one‑on‑one mentoring and strive to assign projects that help team members develop their engineering expertise for future complex tasks.
Basic Qualifications
- Bachelor's degree in computer science or equivalent.
- 5+ years of professional software development experience.
- 5+ years of design or architecture experience for new and existing systems.
- Strong fundamentals in machine learning and LLMs, including their architectures and training and inference life cycles, plus experience optimizing model execution.
- Software development experience in C++ or Python (at least one language is required).
- Strong understanding of system performance, memory management, and parallel computing principles.
- Proficiency in debugging, profiling, and implementing best software engineering practices in large‑scale systems.
- Familiarity with PyTorch, JIT compilation, and AOT tracing.
- Familiarity with CUDA kernels or equivalent ML or low‑level kernels.
- Experience in performant kernel development such as…