
# Models & Providers

Dynamo is provider-agnostic. Use any LLM from any provider.

## Switching Models

Use the interactive picker or switch directly:

```bash
/model          # Opens the interactive picker
/model opus     # Switch to Claude Opus
/model gpt      # Switch to GPT-5.4
/model llama    # Switch to Ollama Llama
```

Your choice is saved to `~/.config/dynamo/preferences.json` and restored on the next launch.
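The exact schema of the preferences file is not documented here; as a rough sketch, it likely records the selected model in `provider/model` form (the key name below is an assumption):

```json
{
  "model": "anthropic/claude-sonnet-4-6"
}
```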

## Available Aliases

### Anthropic

| Alias | Model |
|-------|-------|
| `opus` | `claude-opus-4-6` |
| `sonnet` | `claude-sonnet-4-6` |
| `haiku` | `claude-haiku-4-5` |

### OpenAI

| Alias | Model |
|-------|-------|
| `gpt` | `gpt-5.4` |
| `mini` | `gpt-5.4-mini` |
| `nano` | `gpt-5.4-nano` |

### Ollama

| Alias | Model |
|-------|-------|
| `llama` | `llama4` |
| `qwen` | `qwen3.5` |
| `qwen-coder` | `qwen3.5-coder` |

## Per-Phase Models

Mix providers per workflow phase in `dynamo.yaml`:

```yaml
ai:
  models:
    interactive: "anthropic/claude-sonnet-4-6"
    planning: "anthropic/claude-opus-4-6"
    implementation: "openai/gpt-5.3-codex"
    audit: "openai/gpt-5.4-mini"
    docs: "ollama/llama4"
```

## Custom Providers

Add any OpenAI-compatible endpoint:

```yaml
ai:
  providers:
    deepseek:
      type: "openai-compatible"
      base_url: "https://api.deepseek.com/v1"
      api_key_env: "DEEPSEEK_API_KEY"
```
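Once defined, a custom provider can presumably be referenced with the same `provider/model` syntax used for built-in providers; a sketch (the model name `deepseek-chat` is an assumption, not a documented default):

```yaml
ai:
  models:
    implementation: "deepseek/deepseek-chat"
```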

## Token Display

After each response, Dynamo shows token usage:

```
tokens: 1,234 in · 567 out · 89 reasoning
```