Provider Overview
LLMG supports 70+ LLM providers organized into five categories:
OpenAI-Compatible
These providers implement the OpenAI API format natively. LLMG forwards requests with minimal transformation.
OpenAI, Groq, DeepInfra, Together AI, Fireworks AI, DeepSeek, Perplexity, SambaNova, Cerebras, NScale, xAI, Hyperbolic, Featherless AI, FriendliAI, OctoAI, OpenRouter, AIML, Chutes, NanoGPT, Z.AI, Anyscale.
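Because these providers all accept the OpenAI request shape, forwarding amounts to swapping the upstream base URL and credentials while leaving the body untouched. A minimal sketch (the model name and routing function are illustrative, not LLMG's actual internals; Groq's OpenAI-compatible base URL is real):

```python
import json

# A standard OpenAI-format chat completion request body. OpenAI-compatible
# providers accept this shape directly, so a gateway can forward it verbatim.
request_body = {
    "model": "llama-3.1-70b",  # provider-specific model name (example)
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
}

def forward(body: dict, upstream_base_url: str) -> tuple[str, str]:
    # Return the upstream URL and the unmodified JSON body; only the base
    # URL (and, in practice, the Authorization header) changes per provider.
    return f"{upstream_base_url}/chat/completions", json.dumps(body)

url, payload = forward(request_body, "https://api.groq.com/openai/v1")
```

The same `request_body` could be routed to any provider in this category just by changing the base URL passed to `forward`.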
Custom API
These providers have their own API format. LLMG translates between OpenAI format and the native format automatically.
Anthropic, Cohere, Mistral, AI21, Aleph Alpha, Meta Llama, MiniMax, and more.
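To see what this translation involves, consider Anthropic's Messages API, which takes the system prompt as a top-level `system` field (not a `system`-role message) and requires an explicit `max_tokens`. The sketch below shows that subset of the mapping; it is illustrative, not LLMG's actual translation layer, and the model name is a placeholder:

```python
def openai_to_anthropic(body: dict) -> dict:
    # Hoist system-role messages into Anthropic's top-level `system` field
    # and supply the required `max_tokens` (default here is an assumption).
    system_parts = [m["content"] for m in body["messages"] if m["role"] == "system"]
    translated = {
        "model": body["model"],
        "max_tokens": body.get("max_tokens", 1024),
        "messages": [m for m in body["messages"] if m["role"] != "system"],
    }
    if system_parts:
        translated["system"] = "\n".join(system_parts)
    return translated

translated = openai_to_anthropic({
    "model": "claude-example",  # placeholder model name
    "messages": [
        {"role": "system", "content": "Be brief."},
        {"role": "user", "content": "Hi"},
    ],
})
```

The reverse direction (native response back to OpenAI format) follows the same idea, so callers can switch providers without changing client code.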
Self-Hosted / Local
Run models locally on your own hardware.
Ollama, vLLM, LM Studio, Llamafile, Triton, Petals, oobabooga, Docker Runner, XInference, Custom LLM Server.
Cloud / Enterprise
Managed cloud ML platforms.
Azure OpenAI, Azure AI, AWS Bedrock, AWS SageMaker, Google Vertex AI, IBM WatsonX, Heroku, HuggingFace.
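Cloud platforms often need more than a base-URL swap. Azure OpenAI, for example, addresses models by deployment name in the URL path and requires an `api-version` query parameter, so a gateway must rewrite the route rather than just forward it. A sketch of that documented URL pattern (the resource and deployment names are hypothetical):

```python
def azure_chat_url(resource: str, deployment: str, api_version: str) -> str:
    # Azure OpenAI routes requests to a named deployment and versions the
    # API via a query parameter, unlike the flat OpenAI /chat/completions
    # route; a gateway must build this path from its routing config.
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

url = azure_chat_url("my-resource", "gpt-4o-prod", "2024-06-01")
```

Authentication also differs per platform (Azure uses an `api-key` header rather than a bearer token), which is why these providers get their own category.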
Specialized
Providers focused on specific tasks (audio, images, search, etc.).
GitHub Copilot, Stability AI, ElevenLabs, Runway, Firecrawl, and more.
See the full provider list for environment variables and configuration details.