
Member of Technical Staff, GPU Optimization

Job in New York, New York County, New York, 10261, USA
Listing for: Mirage
Full Time position
Listed on 2026-01-05
Job specializations:
  • Software Development
    AI Engineer, Machine Learning / ML Engineer
Salary/Wage Range or Industry Benchmark: USD 60,000 - 80,000 per year
Job Description & How to Apply Below
Location: New York

Mirage is the leading AI short-form video company. We’re building full-stack foundation models and products that redefine video creation, production and editing. Over 20 million creators and businesses use Mirage’s products to reach their full creative and commercial potential.

We are a rapidly growing team of ambitious, experienced, and devoted engineers, researchers, designers, marketers, and operators based in NYC. As an early member of our team, you’ll have the opportunity to make an outsized impact on our products and our company's culture.

Our Products
  • Captions
  • Mirage Studio
Our Technology
  • AI Research @ Mirage
  • Mirage Model Announcement
  • Seeing Voices (white paper)
Press Coverage
  • Tech Crunch
  • Lenny’s Podcast
  • Forbes AI 50
  • Fast Company
Our Investors
  • Index Ventures
  • Kleiner Perkins
  • Sequoia Capital
  • Andreessen Horowitz
  • Uncommon Projects
  • Kevin Systrom
  • Mike Krieger
  • Lenny Rachitsky
  • Antoine Martin
  • Julie Zhuo
  • Ben Rubin
  • Jaren Glover
  • SVAngel
  • 20VC
  • Ludlow Ventures
  • Chapter One

Please note that all of our roles will require you to be in person at our NYC HQ (located in Union Square).

We do not work with third-party recruiting agencies; please do not contact us.

About the Role

As an expert in making AI models run fast—really fast—you live at the intersection of CUDA, PyTorch, and generative models, and get excited by the idea of squeezing every last bit of performance out of modern GPUs. You will have the opportunity to turn our cutting-edge video generation research into scalable, production‑grade systems. From designing custom CUDA or Triton kernels to profiling distributed inference pipelines, you'll work across the full stack to make sure our models train and serve at peak performance.

Key Responsibilities
  • Optimize model training and inference pipelines, including data loading, preprocessing, checkpointing, and deployment, for throughput, latency, and memory efficiency on NVIDIA GPUs
  • Design, implement, and benchmark custom CUDA and Triton kernels for performance‑critical operations
  • Integrate low‑level optimizations into PyTorch‑based codebases, including custom ops, low‑precision formats, and TorchInductor passes (see the brief kernel sketch after this list)
  • Profile and debug the entire stack—from kernel launches to multi‑GPU I/O paths—using Nsight, nvprof, PyTorch Profiler, and custom tools
  • Work closely with colleagues to co‑design model architectures and data pipelines that are hardware‑friendly and maintain state‑of‑the‑art quality
  • Stay on the cutting edge of GPU and compiler tech (e.g., Hopper features, CUDA Graphs, Triton, Flash Attention, and more) and evaluate their impact
  • Collaborate with infrastructure and backend experts to improve cluster orchestration, scaling strategies, and observability for large experiments
  • Provide clear, data‑driven insights and trade‑offs between performance, quality, and cost
  • Contribute to a culture of fast iteration, thoughtful profiling, and performance‑centric design
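
As a hedged illustration of the kernel work described above (not Mirage code; the fused add + ReLU operation, function names, and shapes are placeholder assumptions), a minimal Triton kernel wrapped as a plain PyTorch function might look like this:

import torch
import triton
import triton.language as tl

@triton.jit
def fused_add_relu_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    # Fuse the add and the ReLU into a single pass over memory.
    tl.store(out_ptr + offsets, tl.maximum(x + y, 0.0), mask=mask)

def fused_add_relu(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)
    fused_add_relu_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

if __name__ == "__main__":
    a = torch.randn(1 << 20, device="cuda")
    b = torch.randn(1 << 20, device="cuda")
    assert torch.allclose(fused_add_relu(a, b), torch.relu(a + b))

In practice, a kernel like this would be benchmarked against the unfused PyTorch baseline and exposed as a custom op before being used in a production pipeline.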
Required Qualifications
  • Bachelor's degree in Computer Science, Electrical / Computer Engineering, or equivalent practical experience
  • 3+ years of hands‑on experience writing and optimizing CUDA kernels for production ML workloads
  • Deep understanding of GPU architecture: memory hierarchies, warp scheduling, tensor cores, register pressure, and occupancy tuning
  • Strong Python skills and familiarity with PyTorch internals, TorchScript, and distributed data‑parallel training
  • Proven track record of profiling and accelerating large‑scale training and inference jobs (e.g., mixed precision, kernel fusion, custom collectives); a short profiling sketch follows this list
  • Comfort working in Linux environments with modern CI / CD, containerization, and cluster managers such as Kubernetes
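
As a small, hedged example of the profiling work referenced above (a sketch with assumed placeholder model and input shapes, not Mirage tooling), a first pass often uses torch.profiler to rank kernels by GPU time before deciding what to fuse or rewrite:

import torch
from torch.profiler import profile, ProfilerActivity

# Placeholder model and batch; any CUDA-resident workload would do.
model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 4096)
).cuda()
x = torch.randn(64, 4096, device="cuda")

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
             record_shapes=True) as prof:
    out = model(x)
    out.sum().backward()

# Rank ops by accumulated GPU time to find the hot spots worth optimizing.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))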
Preferred Qualifications
  • Advanced degree (MS / PhD) in Computer Science, Electrical / Computer Engineering, or related field
  • Experience with multi‑modal AI systems, particularly video generation or computer vision models
  • Familiarity with distributed training frameworks (DeepSpeed, FairScale, Megatron) and model parallelism techniques
  • Knowledge of compiler optimization techniques and experience with MLIR, XLA, or similar frameworks
  • Experience with cloud infrastructure (AWS, GCP, Azure) and GPU cluster management
  • Ability to translate research goals into performant code,…