Technical Program Manager – Cluster Orchestration & Model Benchmarking
Listed on 2026-01-02
IT/Tech
Systems Engineer, IT Project Manager
CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025.
About the Role
CoreWeave’s AI/ML Platform Services organization is responsible for the orchestration layer that schedules and manages AI workloads across CoreWeave’s GPU-accelerated infrastructure. As a Technical Program Manager focused on orchestration and model benchmarking, you will drive programs that define how large‑scale AI workloads are scheduled, executed, and evaluated for performance and cost efficiency.
You’ll partner with engineering, infrastructure, and product teams to evolve CoreWeave’s orchestration systems—including Slurm‑on‑Kubernetes (SUNK) and future orchestrators—while building robust benchmarking and observability frameworks that help customers and internal teams compare model performance, runtime efficiency, and GPU utilization across environments.
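Neither SUNK’s internals nor the benchmarking frameworks are specified in this posting, so the following is only a minimal sketch of what "compare performance across environments" can mean in practice: a stdlib-only Python harness that times an arbitrary workload callable. All names are hypothetical.

```python
import statistics
import time
from typing import Callable, Dict

def benchmark(run_once: Callable[[], int], iterations: int = 10) -> Dict[str, float]:
    """Time a workload callable that returns the number of items it
    processed (e.g., tokens generated) and summarize the runs."""
    latencies = []
    throughputs = []
    for _ in range(iterations):
        start = time.perf_counter()
        n_items = run_once()  # one training step or inference call
        elapsed = time.perf_counter() - start
        latencies.append(elapsed)
        throughputs.append(n_items / elapsed)
    latencies.sort()
    return {
        "p50_latency_s": statistics.median(latencies),
        "p95_latency_s": latencies[int(0.95 * (len(latencies) - 1))],
        "mean_items_per_s": statistics.mean(throughputs),
    }
```

Running the same `run_once` workload under different schedulers or GPU types yields directly comparable numbers, which is the essence of the cross-environment comparisons described above.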
This role is ideal for someone who thrives at the intersection of distributed systems and AI infrastructure, has deep technical fluency in workload orchestration or scheduling, and excels at building the operational structure and visibility required to scale complex, high‑throughput systems.
In this role, the TPM will:
- Drive end‑to‑end program management for cluster orchestration initiatives—spanning SUNK, Kubernetes, and emerging workload schedulers.
- Lead cross‑functional efforts to deliver next‑generation cluster orchestration capabilities for distributed AI training and inference workloads.
- Partner with engineering and product to define roadmaps for cluster utilization, scheduling efficiency, preemption logic, multi‑tenant fairness, and workload resilience.
- Own the execution of model benchmarking programs—establishing frameworks, datasets, and metrics to measure model performance, throughput, latency, and cost across hardware types and orchestration environments (a minimal sketch of one such metric follows this list).
- Develop and scale processes for cross‑team dependency management, performance testing, and release management—owning external release management for SUNK and other cluster orchestrators, including planning, coordination, and rollout of customer‑facing updates.
- Collaborate with infrastructure and DevOps teams to ensure orchestration systems meet CoreWeave’s reliability and scalability goals.
- Build program dashboards, success metrics, and feedback loops to improve workload scheduling efficiency, GPU and cluster utilization, and time‑to‑deployment.
- Create strong communication channels between AI Platform Engineering, Infrastructure, and Product to align roadmap priorities and deliver predictable, high‑impact outcomes.
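The metrics in the bullets above are not prescribed in detail; as one plausible, hedged example, here is how measured throughput and GPU pricing could be rolled into a dashboard-ready cost-efficiency figure. Hardware labels, prices, and throughputs below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class RunResult:
    hardware: str               # illustrative label, not a real SKU
    tokens_per_second: float    # measured benchmark throughput
    gpu_hourly_cost_usd: float  # assumed price per GPU-hour
    gpu_count: int

def cost_per_million_tokens(r: RunResult) -> float:
    """Convert throughput plus pricing into dollars per million tokens."""
    tokens_per_hour = r.tokens_per_second * 3600
    hourly_cost = r.gpu_hourly_cost_usd * r.gpu_count
    return hourly_cost / tokens_per_hour * 1_000_000

runs = [
    RunResult("gpu-type-a", 12_000.0, 4.25, 8),  # hypothetical numbers
    RunResult("gpu-type-b", 7_500.0, 2.10, 8),
]
for r in sorted(runs, key=cost_per_million_tokens):
    print(f"{r.hardware}: ${cost_per_million_tokens(r):.2f} per 1M tokens")
```

A single scalar like dollars per million tokens makes it easy to rank hardware and orchestration configurations on one dashboard axis, though a real program would track it alongside latency and reliability metrics.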
Qualifications
- Bachelor’s degree in a technical field or equivalent experience.
- 8+ years of technical program management experience in distributed systems, cloud infrastructure, or ML/AI platforms.
- Proven success leading programs involving large‑scale orchestration or scheduling systems (e.g., Kubernetes, Ray, Slurm, Kueue, or proprietary systems).
- Strong technical fluency in distributed computing, job scheduling, Kubernetes orchestration, and benchmarking methodologies.
- Demonstrated ability to define success metrics and drive measurable improvements in performance, reliability, or efficiency.
- Exceptional communication and collaboration skills, with a track record of aligning multiple teams and stakeholders on complex technical initiatives.
- Experience with model benchmarking frameworks, profiling tools, or distributed test harnesses (e.g., MLPerf, vLLM benchmarks, custom evaluation pipelines); an illustrative harness is sketched after this list.
- Understanding of GPU types, model parallelism, and distributed…
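MLPerf and vLLM benchmarks are named above only as examples, and this posting does not define a harness. As a non-authoritative sketch of the "custom evaluation pipelines" category referenced in that bullet, a minimal concurrent load test might look like the following; `send_request` is a hypothetical stand-in for whatever client (HTTP, gRPC, or an offline API) a real harness would use.

```python
import concurrent.futures
import statistics
import time
from typing import Callable, Dict, List

def load_test(send_request: Callable[[str], str],
              prompts: List[str],
              concurrency: int = 8) -> Dict[str, float]:
    """Send prompts with bounded concurrency and summarize latency."""
    def timed(prompt: str) -> float:
        start = time.perf_counter()
        send_request(prompt)  # response ignored here; a real harness would validate it
        return time.perf_counter() - start

    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed, prompts))
    return {
        "requests": float(len(latencies)),
        "p50_s": statistics.median(latencies),
        "p99_s": latencies[int(0.99 * (len(latencies) - 1))],
    }
```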