# Providers
Kyber supports multiple LLM providers through direct API integration. You can pin a specific provider so it won’t fall back to another key when multiple are configured.
## Supported providers
| Provider | Config key | Env var for API key | Get a key |
|---|---|---|---|
| OpenRouter | `openrouter` | `KYBER_PROVIDERS__OPENROUTER__API_KEY` | openrouter.ai/keys |
| Anthropic | `anthropic` | `KYBER_PROVIDERS__ANTHROPIC__API_KEY` | console.anthropic.com |
| OpenAI | `openai` | `KYBER_PROVIDERS__OPENAI__API_KEY` | platform.openai.com |
| Google Gemini | `gemini` | `KYBER_PROVIDERS__GEMINI__API_KEY` | aistudio.google.com |
| DeepSeek | `deepseek` | `KYBER_PROVIDERS__DEEPSEEK__API_KEY` | platform.deepseek.com |
| Groq | `groq` | `KYBER_PROVIDERS__GROQ__API_KEY` | console.groq.com |
Any OpenAI-compatible endpoint (Ollama, vLLM, Together, etc.) can be added as a custom provider.
## Configuration
API keys go in `~/.kyber/.env`. Provider settings (model, base URL) go in `~/.kyber/config.json`:

```
# ~/.kyber/.env
KYBER_PROVIDERS__OPENROUTER__API_KEY=sk-or-v1-your-key-here
```

```json
{
  "agents": {
    "defaults": {
      "provider": "openrouter"
    }
  },
  "providers": {
    "openrouter": {
      "chatModel": "google/gemini-3-flash-preview",
      "taskModel": "google/gemini-3-flash-preview"
    }
  }
}
```

## Chat vs task models
Each provider supports separate models for different roles:

| Field | Purpose |
|---|---|
| `chatModel` | Used for conversational replies |
| `taskModel` | Used for background workers/tasks |
| `model` | Legacy fallback if `chatModel`/`taskModel` aren’t set |
You can also use entirely different providers for chat and tasks:

```json
{
  "agents": {
    "defaults": {
      "chatProvider": "openrouter",
      "taskProvider": "deepseek"
    }
  }
}
```

If `chatProvider` or `taskProvider` is empty, both fall back to the main `provider` setting.
## Using OpenRouter (recommended)
OpenRouter gives you access to models from every major provider through a single API key. This is the default and easiest option.
```
# ~/.kyber/.env
KYBER_PROVIDERS__OPENROUTER__API_KEY=sk-or-v1-xxx
```

```json
{
  "agents": {
    "defaults": {
      "provider": "openrouter"
    }
  },
  "providers": {
    "openrouter": {
      "chatModel": "anthropic/claude-sonnet-4-20250514"
    }
  }
}
```

You can use any model available on OpenRouter by setting the model field to the OpenRouter model ID.
## Using a provider directly
To use a provider’s API directly (lower latency, no middleman):
```
# ~/.kyber/.env
KYBER_PROVIDERS__DEEPSEEK__API_KEY=sk-xxx
```

```json
{
  "agents": {
    "defaults": {
      "provider": "deepseek"
    }
  },
  "providers": {
    "deepseek": {
      "chatModel": "deepseek-chat"
    }
  }
}
```

## Custom providers (Ollama, vLLM, etc.)
Any OpenAI-compatible endpoint can be added as a custom provider. This works for Ollama, vLLM, Together, or any self-hosted model server:
```json
{
  "agents": {
    "defaults": {
      "provider": "my-local"
    }
  },
  "providers": {
    "custom": [
      {
        "name": "my-local",
        "apiBase": "http://localhost:11434/v1",
        "chatModel": "llama3"
      }
    ]
  }
}
```

For custom providers that need an API key, add it to `.env`:

```
KYBER_CUSTOM_PROVIDER_MY_LOCAL_API_KEY=your-key
```

Custom providers can also be added directly from the dashboard UI.
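Judging from the `my-local` example, the env var name appears to be derived by uppercasing the provider name and replacing dashes with underscores. The helper below encodes that inference; treat it as an assumption about the naming scheme, not documented behavior.

```python
def custom_provider_env_var(name: str) -> str:
    """Derive the API-key env var for a custom provider.

    Assumption (inferred from the "my-local" example above): the
    provider name is uppercased and dashes become underscores.
    """
    slug = name.upper().replace("-", "_")
    return f"KYBER_CUSTOM_PROVIDER_{slug}_API_KEY"


print(custom_provider_env_var("my-local"))
# KYBER_CUSTOM_PROVIDER_MY_LOCAL_API_KEY
```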
## Provider fallback
When `provider` is set explicitly, Kyber only uses that provider’s API key. If you leave `provider` empty, Kyber checks keys in this order: OpenRouter → DeepSeek → Anthropic → OpenAI → Gemini → Groq, and uses the first one it finds.
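The fallback scan can be sketched as a simple loop over the env var names from the table above. This is a sketch of the described behavior, not Kyber's actual code.

```python
import os

# Fallback order when no provider is pinned, as documented above.
# Env var names follow the KYBER_PROVIDERS__<NAME>__API_KEY pattern.
FALLBACK_ORDER = ["openrouter", "deepseek", "anthropic", "openai", "gemini", "groq"]


def pick_fallback_provider(env=os.environ):
    """Return the first provider whose API key is set, else None (sketch)."""
    for name in FALLBACK_ORDER:
        if env.get(f"KYBER_PROVIDERS__{name.upper()}__API_KEY"):
            return name
    return None


env = {
    "KYBER_PROVIDERS__ANTHROPIC__API_KEY": "sk-ant-xxx",
    "KYBER_PROVIDERS__GROQ__API_KEY": "gsk-xxx",
}
print(pick_fallback_provider(env))  # anthropic -- earlier in the order than groq
```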
## Retries
The provider handles automatic retries with exponential backoff for transient errors like rate limits, timeouts, and malformed upstream responses.
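The general shape of retry-with-exponential-backoff looks like the sketch below. The attempt count, base delay, and the blanket `except Exception` are illustrative choices, not Kyber's actual parameters; a real implementation would retry only transient error types.

```python
import random
import time


def with_retries(call, max_attempts: int = 5, base_delay: float = 0.5):
    """Retry `call` with exponential backoff and jitter.

    Generic sketch of the retry behavior described above --
    delays and limits here are illustrative, not Kyber's.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # Double the delay each attempt; jitter avoids thundering herds.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

For example, a call that hits a rate limit twice and then succeeds would sleep roughly `base_delay` and then `2 * base_delay` before the third, successful attempt.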