
Backend LLM Engineer

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: LiteLLM
Full Time position
Listed on 2026-02-06
Job specializations:
  • Software Development
    AI Engineer, Backend Developer, Python, Software Engineer
Job Description

TLDR

LiteLLM is an open-source LLM Gateway with 34K+ stars on GitHub, trusted by companies like NASA, Rocket Money, Samsara, Lemonade, and Adobe. We’re rapidly expanding and seeking our 6th engineer, focused on owning ‘excellence’ for unified APIs across core LLMs (OpenAI/Gemini/Anthropic models).

What is LiteLLM

LiteLLM provides an open-source Python SDK and Python FastAPI server that allows calling 100+ LLM APIs (Bedrock, Azure, OpenAI, Vertex AI, Cohere, Anthropic) in the OpenAI format.
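For illustration, here is a minimal sketch of what that looks like with the Python SDK (the model names are placeholders; see the documentation for the full provider list and exact model identifiers):

```python
# Minimal sketch: calling two different providers through LiteLLM's
# OpenAI-style interface. Model names below are illustrative.
import litellm

messages = [{"role": "user", "content": "Say hello in one sentence."}]

# Same request shape for both providers; litellm handles the translation.
openai_resp = litellm.completion(model="gpt-4o-mini", messages=messages)
claude_resp = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620", messages=messages
)

# Responses come back in the OpenAI format regardless of provider.
print(openai_resp.choices[0].message.content)
print(claude_resp.choices[0].message.content)
```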

We just hit $6M ARR and have raised a $1.6M seed round from Y Combinator, Gravity Fund, and Pioneer Fund. You can find more information on our website, GitHub, and technical documentation.

Why do companies use LiteLLM Enterprise

Companies adopt LiteLLM Enterprise once they put LiteLLM into production and need enterprise features such as Prometheus metrics for production monitoring, plus the ability to give LLM access to a large number of people via SSO (single sign-on) or JWT (JSON Web Token) authentication.

What you will be working on

Skills: Python, LLM APIs, FastAPI, high-throughput/low-latency systems

As the Backend LLM Engineer, you’ll be responsible for ensuring LiteLLM unifies the format for calling LLM APIs under the broader OpenAI + Anthropic spec. This involves writing transformations that convert API requests from the OpenAI/Anthropic spec to the various LLM provider formats (a simplified sketch of such a transformation follows the project list below), and building provider-agnostic unification functionality (e.g. session management across non-OpenAI models for the /v1/responses API). You’ll work directly with the CEO and CTO on critical projects, including:

  • Adding support for the Anthropic and Bedrock Anthropic ‘thinking’ parameter
  • Handling provider-specific quirks like OpenAI o1 streaming limitations
  • Maintaining ‘excellent’ unified APIs across /v1/messages, /v1/responses, and /chat/completions for OpenAI/Gemini/Anthropic models on Azure, the OpenAI API, Bedrock Invoke, Bedrock Converse, Vertex AI, and Google AI Studio
  • Implementing cost tracking and logging for the Anthropic API
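To give a flavor of the request-transformation work, here is a simplified, hypothetical sketch of mapping an OpenAI-style /chat/completions request onto an Anthropic-style /v1/messages body; the Anthropic field names follow their public API, but the helper itself is illustrative and not LiteLLM’s actual implementation:

```python
# Hypothetical sketch, not LiteLLM's real code: convert an OpenAI-style
# chat request dict into an Anthropic Messages API request body.
from typing import Any


def openai_to_anthropic(request: dict[str, Any]) -> dict[str, Any]:
    # Anthropic takes the system prompt as a top-level field,
    # not as a message inside the list.
    system = None
    messages = []
    for m in request["messages"]:
        if m["role"] == "system":
            system = m["content"]
        else:
            messages.append({"role": m["role"], "content": m["content"]})

    body: dict[str, Any] = {
        "model": request["model"],
        "messages": messages,
        # max_tokens is required by Anthropic but optional in the OpenAI spec.
        "max_tokens": request.get("max_tokens", 1024),
    }
    if system is not None:
        body["system"] = system

    # Example of a provider-specific feature: Anthropic's extended thinking
    # is enabled via a "thinking" block with a token budget.
    if request.get("reasoning_effort"):
        body["thinking"] = {"type": "enabled", "budget_tokens": 2048}

    return body
```
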
What is our tech stack

Our tech stack includes Python, FastAPI, Redis, and Postgres.

Who we are looking for
  • 1-2 years of backend/full-stack experience with production systems
  • Passion for open source and user engagement
  • Experience working with the OpenAI API (you understand the difference between /chat/completions and /responses, and can speak to API-specific nuances; see the sketch after this list)
  • Strong work ethic and ability to thrive in small teams
  • Eagerness to talk to users and help solve real problems
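For context on the OpenAI API point above, here is a minimal sketch of the two endpoints side by side, using the official openai SDK (the model name is a placeholder):

```python
# Minimal sketch: Chat Completions vs. Responses with the openai SDK.
from openai import OpenAI

client = OpenAI()

# Chat Completions: conversation state is passed in full as a message list.
chat = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Give me one fun fact."}],
)
print(chat.choices[0].message.content)

# Responses: a newer endpoint that takes a single "input" and supports
# chaining turns via previous_response_id instead of resending history.
resp = client.responses.create(
    model="gpt-4o-mini",
    input="Give me one fun fact.",
)
print(resp.output_text)
```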