
Software Development Engineer - AI/ML, AWS Neuron, Multimodal Inference

Job in Los Angeles, Los Angeles County, California, 90079, USA
Listing for: Amazon
Full Time position
Listed on 2025-12-11
Job specializations:
  • IT/Tech
    AI Engineer, Machine Learning/ML Engineer
Job Description

The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon’s custom machine learning accelerators, Inferentia and Trainium.

This comprehensive toolkit includes an ML compiler, runtime, and application framework that integrates seamlessly with popular ML frameworks like PyTorch and JAX, enabling high-performance ML inference and training. The Inference Enablement and Acceleration team is at the forefront of running a wide range of models, supporting novel architectures, and maximizing their performance on AWS’s custom ML accelerators. Working across the stack from PyTorch down to the hardware‑software boundary, our engineers build systematic infrastructure, develop new methods, and create high‑performance kernels for ML functions, ensuring every compute unit is tuned for our customers’ demanding workloads.

We combine deep hardware knowledge with ML expertise to push the boundaries of what’s possible in AI acceleration. As part of the broader Neuron organization, our team works across multiple technology layers—from frameworks and kernels to compiler, runtime, and collectives. We not only optimize current performance but also contribute to future architecture designs, working closely with customers to enable their models and ensure optimal performance.

This role offers a unique opportunity to work at the intersection of machine learning, high‑performance computing, and distributed architectures, where you’ll help shape the future of AI acceleration technology. You will architect and implement business‑critical features, mentor a brilliant team, and work closely with customers on model enablement, providing direct support and optimization expertise to ensure their machine learning workloads achieve optimal performance on AWS ML accelerators.

Key job responsibilities

In this role, you will:

  • Design, develop, and optimize machine learning models and frameworks for deployment on custom ML hardware accelerators.
  • Participate in all stages of the ML system development lifecycle, including distributed computing‑based architecture design, implementation, performance profiling, hardware‑specific optimizations, testing, and production deployment.
  • Build infrastructure to systematically analyze and onboard multiple models with diverse architectures.
  • Design and implement high‑performance kernels and features for ML operations, leveraging the Neuron architecture and programming models.
  • Analyze and optimize system‑level performance across multiple generations of Neuron hardware.
  • Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks.
  • Implement optimizations such as fusion, sharding, tiling, and scheduling.
  • Conduct comprehensive testing, including unit and end‑to‑end model testing with continuous deployment and releases through pipelines.
  • Work directly with customers to enable and optimize their ML models on AWS accelerators.
  • Collaborate across teams to develop innovative optimization techniques.

A day in the life

You will collaborate with a cross‑functional team of applied scientists, system engineers, and product managers to deliver state‑of‑the‑art inference capabilities for Generative AI applications. Your work will involve debugging performance issues, optimizing memory usage, and shaping the future of Neuron’s inference stack across Amazon and the open‑source community. You’ll design and code solutions that improve software architecture, create metrics, implement automation, and resolve the root causes of software defects.

You will also build high‑impact solutions to deliver to our large customer base, participate in design discussions, conduct code reviews, and communicate with internal and external stakeholders. Work in a startup‑like development environment where…
