Software Engineer - AI/ML, AWS Neuron Apps
Listed on 2025-12-01
IT/Tech
AI Engineer, Machine Learning/ML Engineer, Data Scientist, Data Engineer
Overview
Shape the Future of AI Accelerators at AWS Neuron. Join the elite team behind AWS Neuron, the software stack powering AWS's next-generation AI accelerators, Inferentia and Trainium. As a Senior Software Engineer in our Machine Learning Applications team, you'll be at the forefront of deploying and optimizing some of the world's most sophisticated AI models at unprecedented scale.
What You'll Impact
- Pioneer distributed inference solutions for industry-leading LLMs such as GPT, Llama, and Qwen
- Optimize breakthrough language and vision generative AI models
- Collaborate directly with silicon architects and compiler teams to push the boundaries of AI acceleration
- Drive performance benchmarking and tuning that directly impacts millions of inference calls globally
You will drive the evolution of distributed AI at AWS Neuron, developing the bridge between ML frameworks such as PyTorch and JAX and the underlying AI hardware. This isn't just about optimization; it's about revolutionizing how AI models run at scale.
- Spearhead distributed inference architecture for PyTorch and JAX using XLA
- Engineer breakthrough performance optimizations for AWS Trainium and Inferentia
- Develop ML tools to enhance LLM accuracy and efficiency
- Transform complex tensor operations into highly optimized hardware implementations
- Pioneer benchmarking methodologies that shape next-gen AI accelerator design
- Direct influence on AWS's AI infrastructure used by thousands of ML applications
- Full-stack optimization from high-level frameworks to hardware-specific primitives
- Creation of tools and frameworks that define industry standards for ML deployment
- Collaboration with open-source ML communities and hardware architecture teams
- Deep expertise in Python and ML framework internals
- Strong understanding of distributed systems and ML optimization
- Passion for performance tuning and system architecture
AWS Neuron focuses on distributed inference for AI workloads, with emphasis on large language model optimization, architecture-aware performance tuning, and scalable deployment.
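For orientation only, here is a minimal sketch, not drawn from the Neuron codebase, of the kind of distributed-inference work this describes: using JAX's sharding API so that XLA partitions a toy projection layer across whatever devices are visible (Neuron cores, GPUs, or CPUs, depending on the installed plugin). All shapes, names, and the model itself are illustrative assumptions.

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Build a 1-D device mesh over whatever XLA devices are visible.
mesh = Mesh(np.array(jax.devices()), axis_names=("model",))

# Toy "model": a single projection. Shard the weight's output dimension
# across the mesh (tensor parallelism); replicate the activations.
d_in, d_out, batch = 512, 2048, 8
w = jnp.zeros((d_in, d_out))          # placeholder weights
x = jnp.ones((batch, d_in))

w_sharded = jax.device_put(w, NamedSharding(mesh, P(None, "model")))
x_repl = jax.device_put(x, NamedSharding(mesh, P()))

@jax.jit
def forward(x, w):
    # XLA partitions this matmul according to the input shardings,
    # inserting collectives automatically where needed.
    return x @ w

y = forward(x_repl, w_sharded)
print(y.shape, y.sharding)
```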
Basic Qualifications
- 3+ years of experience with computer science fundamentals (object-oriented design, data structures, algorithm design, problem solving, and complexity analysis)
- 3+ years of programming experience in Python or C++, along with hands-on PyTorch experience
- Experience with AI acceleration techniques such as quantization, parallelism, model compression, batching, KV caching, and vLLM serving (a brief KV-cache sketch follows this list for illustration)
- Experience with accuracy debugging and tooling, and with performance benchmarking of AI accelerators
- Fundamentals of machine learning and deep learning models, including their architectures and their training and inference life cycles, with hands-on experience optimizing model execution
- Bachelor's degree in computer science or equivalent
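For reference on the KV caching mentioned in the qualifications above, the sketch below shows the technique in its simplest framework-free form: a toy single-head attention with random weights, where each decode step computes keys and values only for the newest token and reuses the cached ones. It is illustrative only and not part of the Neuron stack; production servers such as vLLM manage these caches in paged blocks across many concurrent requests.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64                                      # head dimension (illustrative)
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))

k_cache, v_cache = [], []                   # grow by one entry per decoded token

def decode_step(x_t):
    """Attend the newest token against all cached keys/values."""
    q = x_t @ wq
    k_cache.append(x_t @ wk)                # only the new token's K/V are computed
    v_cache.append(x_t @ wv)
    K = np.stack(k_cache)                   # (t, d)
    V = np.stack(v_cache)
    scores = K @ q / np.sqrt(d)             # (t,)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return probs @ V                        # context vector for this step

# Per-step cost is O(t * d) instead of recomputing K and V for the whole
# prefix at every step.
for t in range(4):
    out = decode_step(rng.standard_normal(d))
print(out.shape)
```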
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. If you require a workplace accommodation during the application and hiring process, including support for the interview or onboarding, please visit the accommodations page.
Our compensation reflects the cost of labor across US geographic markets. The base pay ranges and other compensation details are provided for context. This position will remain posted until filled. Applicants should apply via our internal or external career site.