# Models & Providers

Dynamo is provider-agnostic: use any LLM from any provider.
## Switching Models

Use the interactive picker or switch directly:

```
/model        # Opens the interactive picker
/model opus   # Switch to Claude Opus
/model gpt    # Switch to GPT-5.4
/model llama  # Switch to Ollama Llama
```

Your choice is saved to `~/.config/dynamo/preferences.json` and restored on the next launch.
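The schema of the preference file isn't documented here; as a minimal sketch, `~/.config/dynamo/preferences.json` might hold something like the following (the `model` key is an assumption, not a documented field):

```json
{
  "model": "anthropic/claude-sonnet-4-6"
}
```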
## Available Aliases

### Anthropic

| Alias | Model |
|---|---|
| opus | claude-opus-4-6 |
| sonnet | claude-sonnet-4-6 |
| haiku | claude-haiku-4-5 |
### OpenAI

| Alias | Model |
|---|---|
| gpt | gpt-5.4 |
| mini | gpt-5.4-mini |
| nano | gpt-5.4-nano |
### Ollama

| Alias | Model |
|---|---|
| llama | llama4 |
| qwen | qwen3.5 |
| qwen-coder | qwen3.5-coder |
## Per-Phase Models

Mix providers per workflow phase in `dynamo.yaml`:

```yaml
ai:
  models:
    interactive: "anthropic/claude-sonnet-4-6"
    planning: "anthropic/claude-opus-4-6"
    implementation: "openai/gpt-5.3-codex"
    audit: "openai/gpt-5.4-mini"
    docs: "ollama/llama4"
```

## Custom Providers
Add any OpenAI-compatible endpoint:

```yaml
ai:
  providers:
    deepseek:
      type: "openai-compatible"
      base_url: "https://api.deepseek.com/v1"
      api_key_env: "DEEPSEEK_API_KEY"
```
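Once registered, the provider name should be usable wherever a `provider/model` pair is expected, such as the per-phase model map. The model identifier below is an illustrative placeholder, not a documented value:

```yaml
ai:
  models:
    # "deepseek/<model>" pairs the custom provider name with a model it serves.
    # "deepseek-chat" is a hypothetical model name -- check your provider's model list.
    implementation: "deepseek/deepseek-chat"
```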
## Token Display

After each response, Dynamo shows token usage:

```
tokens: 1,234 in · 567 out · 89 reasoning
```