# Quick Start

## Start the Gateway

Install and run the gateway:

```shell
cargo install llmg-gateway
OPENAI_API_KEY=sk-... llmg-gateway
```

## Send a Request

```shell
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Authorization: Bearer any-token" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

The `model` field uses the `provider/model` format. Some common examples:
| Model ID | Provider |
|---|---|
| `openai/gpt-4` | OpenAI |
| `anthropic/claude-3-opus-20240229` | Anthropic |
| `groq/llama3-70b-8192` | Groq |
| `ollama/llama3` | Ollama (local) |
| `deepseek/deepseek-chat` | DeepSeek |
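The routing convention above amounts to splitting the model ID at the first `/`: everything before it names the provider, everything after it is passed through as the model name. A minimal sketch of that split (`parse_model_id` is a hypothetical helper for illustration, not part of the llmg API):

```rust
/// Split a `provider/model` ID into its two parts.
/// Hypothetical helper illustrating the routing convention;
/// not part of the llmg API.
fn parse_model_id(id: &str) -> Option<(&str, &str)> {
    // split_once splits at the *first* '/', so model names
    // containing slashes would stay intact.
    id.split_once('/')
}

fn main() {
    assert_eq!(parse_model_id("openai/gpt-4"), Some(("openai", "gpt-4")));
    assert_eq!(
        parse_model_id("anthropic/claude-3-opus-20240229"),
        Some(("anthropic", "claude-3-opus-20240229"))
    );
    // An ID without a provider prefix does not match.
    assert_eq!(parse_model_id("gpt-4"), None);
    println!("all model IDs parsed");
}
```

Note that the library example below sends the bare model name (`gpt-4`) because it talks to a specific provider client directly; the `provider/` prefix is only meaningful at the gateway.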
## Use as a Library

```rust
use llmg_providers::openai::OpenAiClient;
use llmg_core::provider::Provider;
use llmg_core::types::{ChatCompletionRequest, Message};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = OpenAiClient::from_env()?;

    let request = ChatCompletionRequest {
        model: "gpt-4".to_string(),
        messages: vec![Message::User {
            content: "Hello!".to_string(),
            name: None,
        }],
        ..Default::default()
    };

    let response = client.chat_completion(request).await?;
    println!("{:?}", response);
    Ok(())
}
```