
Quick Start

Install and run the gateway:

```shell
cargo install llmg-gateway
OPENAI_API_KEY=sk-... llmg-gateway
```
Then send a chat completion request:

```shell
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Authorization: Bearer any-token" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

The `model` field uses the `provider/model` format. Some common examples:

| Model ID | Provider |
| --- | --- |
| `openai/gpt-4` | OpenAI |
| `anthropic/claude-3-opus-20240229` | Anthropic |
| `groq/llama3-70b-8192` | Groq |
| `ollama/llama3` | Ollama (local) |
| `deepseek/deepseek-chat` | DeepSeek |
You can also call a provider directly from Rust using the `llmg` crates:

```rust
use llmg_core::provider::Provider;
use llmg_core::types::{ChatCompletionRequest, Message};
use llmg_providers::openai::OpenAiClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Builds a client from environment configuration (e.g. OPENAI_API_KEY).
    let client = OpenAiClient::from_env()?;

    let request = ChatCompletionRequest {
        // No provider prefix here: this client already targets OpenAI.
        model: "gpt-4".to_string(),
        messages: vec![Message::User {
            content: "Hello!".to_string(),
            name: None,
        }],
        ..Default::default()
    };

    let response = client.chat_completion(request).await?;
    println!("{:?}", response);
    Ok(())
}
```