# Ollama
## Configuration

| Variable | Default | Required |
|---|---|---|
| `OLLAMA_BASE_URL` | `http://localhost:11434` | No |
No API key is required — Ollama runs locally.
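For example, to target a remote Ollama instance without code changes, set the variable above before starting the gateway or your application (the host shown is a placeholder):

```shell
# Point the client at a remote Ollama server instead of localhost.
export OLLAMA_BASE_URL="http://my-server:11434"
```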
## Gateway

```shell
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ollama/llama3", "messages": [{"role": "user", "content": "Hello!"}]}'
```

## Library

```rust
use llmg_providers::ollama::OllamaClient;
use llmg_core::provider::Provider;

let client = OllamaClient::from_env();
// or
let client = OllamaClient::new().with_base_url("http://my-server:11434");
```
## Features

- Chat completions (auto-converts to Ollama format)
- Embeddings
- Custom base URL for remote Ollama instances
Note: `OllamaClient::from_env()` returns `Self` directly (not `Result`) since no API key is required. This differs from cloud providers like `OpenAiClient::from_env()`, which return `Result<Self, LlmError>`.
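The distinction can be sketched as follows. This is a hypothetical re-implementation for illustration only, not the crate's actual code: the `from_env_var` name, the struct fields, and the `LlmError` constructor shown here are assumptions, and a parameter stands in for the real environment read. The point is the signature shape: with a sensible default there is nothing to fail on, while a missing API key forces a `Result`.

```rust
// Hypothetical sketch of the two constructor styles described above.
#[derive(Debug)]
pub struct LlmError(pub String);

pub struct OllamaClient { pub base_url: String }
pub struct OpenAiClient { pub api_key: String }

impl OllamaClient {
    /// Infallible: a missing OLLAMA_BASE_URL just means "use the
    /// default", so `Self` can be returned directly.
    pub fn from_env_var(value: Option<String>) -> Self {
        OllamaClient {
            base_url: value
                .unwrap_or_else(|| "http://localhost:11434".to_string()),
        }
    }
}

impl OpenAiClient {
    /// Fallible: a missing API key is unrecoverable, so the caller
    /// receives a `Result` and must handle the error.
    pub fn from_env_var(value: Option<String>) -> Result<Self, LlmError> {
        value
            .map(|api_key| OpenAiClient { api_key })
            .ok_or_else(|| LlmError("OPENAI_API_KEY is not set".to_string()))
    }
}
```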