Software Engineer, Networking - Inference
Listed on 2026-01-07
Our Inference team brings OpenAI’s most capable research and technology to the world through our products. We empower consumers, enterprises and developers alike to use and access our state‑of‑the‑art AI models, allowing them to do things that they’ve never been able to before. We focus on performant and efficient model inference, as well as accelerating research progression via model inference.
About the Role
We’re looking for a senior engineer to design and build the load balancer that will sit at the very front of our research inference stack, routing the world’s largest AI models with millisecond precision and bulletproof reliability. This system will serve research jobs where requests must stay “sticky” to the same model instance for hours or days and where even subtle errors can directly degrade model performance.
In this role, you will:
- Architect and build the gateway / network load balancer that fronts all research jobs, ensuring long‑lived connections remain consistent and performant.
- Design traffic stickiness and routing strategies that optimize for both reliability and throughput (see the routing sketch after this list).
- Instrument and debug complex distributed systems — with a focus on building world‑class observability and debuggability tools (distributed tracing, logging, metrics).
- Collaborate closely with researchers and ML engineers to understand how infrastructure decisions impact model performance and training dynamics.
- Own the end‑to‑end system lifecycle: from design and code to deploy, operate, and scale.
- Work in an outcome‑oriented environment where everyone contributes across layers of the stack, from infra plumbing to performance tuning.
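
As a rough illustration of the sticky-routing idea referenced above, and not a description of OpenAI’s actual gateway, the sketch below shows a minimal consistent-hash ring in Rust using only the standard library. The backend names, virtual-node count, and the `Ring` type are invented for the example.

```rust
// Illustrative only: a minimal consistent-hash ring for sticky routing.
// A production gateway would add health checks, weights, and connection draining.
use std::collections::BTreeMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct Ring {
    // Maps a point on the hash ring to a backend identifier.
    points: BTreeMap<u64, String>,
}

impl Ring {
    fn new(backends: &[&str], vnodes: u32) -> Self {
        let mut points = BTreeMap::new();
        for b in backends {
            // Several virtual nodes per backend smooth out the key distribution.
            for v in 0..vnodes {
                points.insert(hash(&format!("{b}#{v}")), b.to_string());
            }
        }
        Ring { points }
    }

    /// Route a job ID to a backend. The same ID always lands on the same
    /// backend while the ring is unchanged, which is the "stickiness"
    /// property long-lived research jobs rely on.
    fn route(&self, job_id: &str) -> &str {
        let h = hash(job_id);
        self.points
            .range(h..)
            .next()
            .or_else(|| self.points.iter().next()) // wrap around the ring
            .map(|(_, backend)| backend.as_str())
            .expect("ring has at least one backend")
    }
}

fn hash<T: Hash + ?Sized>(t: &T) -> u64 {
    let mut s = DefaultHasher::new();
    t.hash(&mut s);
    s.finish()
}

fn main() {
    let ring = Ring::new(&["inference-a", "inference-b", "inference-c"], 64);
    // Repeated calls with the same job ID return the same backend.
    assert_eq!(ring.route("job-42"), ring.route("job-42"));
    println!("job-42 -> {}", ring.route("job-42"));
}
```

In practice, a router like the one described in this posting would layer health checking, rebalancing, and connection draining on top of a primitive like this rather than use it directly.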
You might thrive in this role if you:
- Have deep experience designing and operating large‑scale distributed systems, particularly load balancers, service gateways, or traffic routing layers.
- Have 5+ years of experience designing for and debugging the algorithmic and systems challenges of consistent hashing, sticky routing, and low‑latency connection management.
- Have 5+ years of experience as a software engineer and systems architect working on high‑scale, high‑reliability infrastructure.
- Have a strong debugging mindset and enjoy spending time in tracing, logs, and metrics to untangle distributed failures (a minimal metrics sketch follows these lists).
- Are comfortable writing and reviewing production code in Rust or similar systems languages (C/C++, Java, Go, Zig, etc.).
- Have operated in big tech or high‑growth environments and are excited to apply that experience in a faster‑moving setting.
- Take ownership of problems end‑to‑end and are excited to build something foundational to how our models interact with the world.
Nice to have:
- Experience with gateway or load balancing systems (e.g., Envoy, gRPC, custom LB implementations).
- Familiarity with inference workloads (e.g., reinforcement learning, streaming inference, KV cache management).
- Exposure to debugging and operational excellence practices in large production environments.
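
For illustration only, and not part of the role’s requirements, here is a minimal fixed-bucket latency histogram in Rust of the kind a metrics pipeline might aggregate; the bucket bounds and the `LatencyHistogram` type are invented for the example.

```rust
// Illustrative only: a tiny fixed-bucket latency histogram. Real systems
// typically use an existing metrics library rather than rolling their own.
use std::time::Duration;

struct LatencyHistogram {
    bounds_ms: Vec<u64>, // upper bounds of each bucket, in milliseconds
    counts: Vec<u64>,    // one counter per bucket, plus an overflow bucket
}

impl LatencyHistogram {
    fn new(bounds_ms: Vec<u64>) -> Self {
        let counts = vec![0; bounds_ms.len() + 1];
        LatencyHistogram { bounds_ms, counts }
    }

    fn observe(&mut self, latency: Duration) {
        let ms = latency.as_millis() as u64;
        // Find the first bucket whose upper bound covers this observation.
        let idx = self
            .bounds_ms
            .iter()
            .position(|&b| ms <= b)
            .unwrap_or(self.bounds_ms.len()); // overflow bucket
        self.counts[idx] += 1;
    }
}

fn main() {
    let mut h = LatencyHistogram::new(vec![1, 5, 25, 100, 500]);
    h.observe(Duration::from_millis(3));
    h.observe(Duration::from_millis(250));
    h.observe(Duration::from_millis(2_000)); // lands in the overflow bucket
    println!("bucket counts: {:?}", h.counts);
}
```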
OpenAI is an AI research and deployment company dedicated to ensuring that general‑purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US‑based candidates. For…