Training Infrastructure Engineer
Germany, Pike County, Ohio, USA
Listed on 2026-02-16
Software Development
AI Engineer, Machine Learning/ML Engineer
Mirelo AI is building the next generation of creative tools by generating realistic sound, speech and music from video.
We develop cutting-edge foundational generative AI models that "unmute" silent video content and create custom, hyper-realistic audio for gaming, video platforms, and creators. Our technology empowers global storytellers to transform their content.
We recently closed a $41 million Seed round co-led by Andreessen Horowitz and Index Ventures with participation from Atlantic, and are rapidly expanding across Product, Engineering, Go-to-Market, and Growth.
About the Role
In this role, you'll focus on the full training stack: profiling GPU behavior, debugging training pipelines, improving throughput, choosing the right parallelism strategies, and designing the infrastructure that lets us train models efficiently. You'll work across cluster management, model training, efficient data pipelines for video and audio, inference, and optimizing PyTorch code. Your work will shape the foundation on which all of our generative models are built and iterated.
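As a flavor of the data-pipeline side of this work, here is a minimal, generic sketch (purely illustrative, not Mirelo's actual stack) of a background-thread prefetcher that overlaps data loading with compute, the same idea a video/audio dataloader uses at larger scale:

```python
import threading
import queue

def prefetch(iterable, depth=4):
    """Yield items from `iterable`, loading up to `depth` items ahead
    on a background thread so I/O overlaps with downstream compute."""
    q = queue.Queue(maxsize=depth)
    _END = object()  # sentinel marking the end of the stream

    def worker():
        for item in iterable:
            q.put(item)  # blocks when the buffer is full (backpressure)
        q.put(_END)

    threading.Thread(target=worker, daemon=True).start()
    while True:
        item = q.get()
        if item is _END:
            return
        yield item
```

The bounded queue provides backpressure: the loader can run at most `depth` items ahead, which caps memory use while still hiding load latency.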
Key Responsibilities
- Find ideal training strategies (parallelism approaches, precision trade-offs) for a variety of model sizes and compute loads
- Profile, debug, and optimize single and multi-GPU operations using tools like Nsight and stack trace viewers to understand what's actually happening at the hardware level
- Analyze and improve the whole training pipeline end to end (efficient data storage, data loading, distributed training, checkpoint/artifact saving, logging, ...)
- Set up scalable systems for experiment tracking, data/model versioning, and experiment insights
- Design, deploy and maintain large-scale ML training clusters running SLURM for distributed workload orchestration
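The precision and parallelism trade-offs above can be made concrete with back-of-the-envelope arithmetic. This is a rough illustrative sketch (the function name and the simplification of sharding only optimizer state, as in ZeRO stage 1, are our assumptions, not anything from the posting):

```python
def train_memory_gb(params_b, dtype_bytes=2, optim_bytes=12, shards=1):
    """Rough per-GPU memory (GB) for weights + grads + Adam state.

    params_b:    parameter count in billions.
    dtype_bytes: bytes per weight/grad element (2 for bf16, 4 for fp32).
    optim_bytes: optimizer bytes per param (fp32 Adam: 4-byte master
                 copy + two 4-byte moment buffers = 12).
    shards:      degree of optimizer-state sharding (ZeRO-1 style);
                 1 means fully replicated. Weights/grads stay replicated
                 in this simplified model.
    """
    n = params_b * 1e9
    per_param = dtype_bytes            # weights
    per_param += dtype_bytes           # gradients
    per_param += optim_bytes / shards  # sharded optimizer state
    return n * per_param / 1e9
```

For example, a 7B-parameter model in bf16 with replicated Adam state needs roughly 112 GB per GPU before activations, which is why sharding the optimizer state across 8 GPUs (dropping the figure to about 38.5 GB) is often the first lever pulled.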
Requirements
- Familiarity with the latest and most effective techniques for optimizing training and inference workloads, gained not from reading papers but from implementing them
- Deep understanding of GPU memory hierarchy and compute capabilities: knowing what the hardware can do theoretically and what prevents us from achieving it
- Experience optimizing for both memory-bound and compute-bound operations and understanding when each constraint matters
- Expertise with efficient attention algorithms and their performance characteristics at different scales
- Experience implementing custom GPU kernels and integrating them into PyTorch
- Experience with diffusion and autoregressive models and understanding of their specific optimization challenges
- Familiarity with high-performance storage solutions (VAST, blob storage) and understanding of their performance characteristics for ML workloads
- Experience with managing SLURM clusters at scale
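The memory-bound vs. compute-bound distinction above is usually reasoned about with a roofline model. A minimal sketch (function names and the hardware numbers in the comments are illustrative assumptions, not figures from the posting):

```python
def arithmetic_intensity(flops, bytes_moved):
    """FLOPs performed per byte moved to/from memory by a kernel."""
    return flops / bytes_moved

def bound_by(flops, bytes_moved, peak_flops, peak_bw):
    """Classify a kernel as 'memory'- or 'compute'-bound on a roofline.

    peak_flops: peak throughput (FLOP/s); peak_bw: memory bandwidth (B/s).
    The machine balance peak_flops / peak_bw is the intensity at the
    roofline ridge point; below it, bandwidth is the limiter.
    """
    balance = peak_flops / peak_bw
    return "memory" if arithmetic_intensity(flops, bytes_moved) < balance else "compute"
```

On an A100-class part (roughly 312 TFLOP/s bf16, ~2 TB/s HBM, so a balance near 156 FLOP/byte), an elementwise add (about 1 FLOP per 12 bytes in fp32) sits far below the ridge and is memory-bound, while a large GEMM sits far above it and is compute-bound. Knowing which side a kernel lands on tells you whether to fuse/reduce traffic or to chase occupancy and tensor-core utilization.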
Why Join Us
- Join at a pivotal moment. We've secured fresh funding and are gaining traction - now is when your contributions can make a real difference to our success.
- True ownership from day one. You'll have genuine autonomy and responsibility. Your ideas and work will directly shape our product and company direction.
- Competitive compensation and equity. We offer strong packages that ensure you share in the success you help create.
- Build for the next generation of creators. Be part of the innovation that will transform how creators work and thrive.
We welcome applications from all individuals, regardless of ethnic origin, gender, disability, religion or belief, age, or sexual orientation and identity.