
Introduction

LLMG is a high-performance LLM gateway written in Rust. It provides a single, unified OpenAI-compatible API that routes requests to 70+ LLM providers including OpenAI, Anthropic, Azure, Google Vertex AI, AWS Bedrock, Groq, Mistral, and many more.

  • Single API — Use one OpenAI-compatible endpoint for every provider.
  • Written in Rust — Fast, safe, and low resource usage.
  • 70+ Providers — OpenAI, Anthropic, Azure, Groq, Mistral, Cohere, DeepSeek, Ollama, OpenRouter, and dozens more.
  • Library + Gateway — Use it as a Rust library (llmg-providers) or deploy the HTTP gateway (llmg-gateway).
  • Feature-Gated — Compile only the providers you need.
  • SSE Streaming — Real-time streaming responses across all providers.
  • Rig Framework — Drop-in integration with the Rig agent framework.
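Feature gating means each provider backend is compiled in only when its Cargo feature is enabled. As an illustrative sketch only: the feature names below ("openai", "anthropic") are assumptions inferred from the provider list above, not confirmed names from LLMG's manifest.

```toml
# Hypothetical Cargo.toml snippet: depend on llmg-providers with only the
# providers you need compiled in. Feature names here are assumed, not
# verified against the actual crate.
[dependencies]
llmg-providers = { version = "0.1", default-features = false, features = ["openai", "anthropic"] }
```

Disabling default features and listing providers explicitly keeps compile times and binary size down when you only target one or two backends.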

LLMG is split into three crates:

Crate             Purpose
llmg-core         Shared types, traits, and error handling
llmg-providers    Provider implementations (feature-gated)
llmg-gateway      HTTP gateway server (Axum-based)

Requests use the provider/model routing format (e.g. openai/gpt-4, anthropic/claude-3-opus-20240229). The gateway parses the prefix, selects the right provider, and forwards the request.
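The prefix parsing described above can be sketched in a few lines of Rust. This is an illustrative standalone example, not LLMG's actual routing code; the function name `split_route` is hypothetical.

```rust
// Illustrative sketch of provider/model routing: split a model string such
// as "openai/gpt-4" into a (provider, model) pair. Names are hypothetical
// and do not reflect LLMG's internal API.
fn split_route(model: &str) -> Option<(&str, &str)> {
    // split_once returns None when there is no '/' separator,
    // i.e. no provider prefix was given.
    model.split_once('/')
}

fn main() {
    assert_eq!(split_route("openai/gpt-4"), Some(("openai", "gpt-4")));
    assert_eq!(
        split_route("anthropic/claude-3-opus-20240229"),
        Some(("anthropic", "claude-3-opus-20240229"))
    );
    // A bare model name has no provider prefix.
    assert_eq!(split_route("gpt-4"), None);
    println!("routing parse ok");
}
```

Using `split_once` (rather than `split`) keeps any further `/` characters inside the model name intact, which matters for model identifiers that themselves contain slashes.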