Context Engine

Dynamo indexes your project locally. The assistant reads that index at chat time so answers point at real types, real files, and real call chains.

> Heads up on data flow: the index is built on your machine without network calls, but every chat turn still sends your prompts and the indexed snippets the model retrieves to whichever LLM provider you've configured. Local-first applies to the index, not to inference.

How it works

  • Local, deterministic index — a plain parser catalogs every class, method, scene, prefab, singleton, and input binding. The indexer itself doesn't call an LLM and doesn't make network requests.
  • Ready-made subsystem summaries — rendered from the index as markdown. The assistant reads them; so can you.
  • Structural answers in one call — "how does X work", "what depends on Y", "trace the Z pipeline" come back complete: call chains, file paths, summary context inline.
  • Handles vague questions — no need to name the exact class. The engine picks the one you probably meant.
  • Full type details up front — asking about a class returns its fields, methods, and call graph together, with no follow-up drilling required.
  • Works across providers — the engine is designed to give a consistent experience on Claude, GPT, Gemini, OpenRouter, and Ollama. Actual results vary by model capability and tool-calling fidelity; smaller and local models may converge less reliably on complex prompts.

Benchmark

17 Unity comprehension questions, run on the same project with context on and off.

| Model            | Config     | Tool calls | Cost  |
|------------------|------------|------------|-------|
| gpt-5.4-mini     | no context | 286        | $0.66 |
| gpt-5.4-mini     | context    | 123        | $0.33 |
| Claude Haiku 4.5 | no context | 226        | $1.13 |
| Claude Haiku 4.5 | context    | 38         | $0.26 |

> Caveat first: these are n=1 numbers from a single Unity project, not a statistically averaged benchmark. Re-runs vary by ~$0.05 and 1–2 tool-cap passes. Treat the ratios as directional; absolute dollars depend on provider pricing and the indexed surface of your project. Validate against your own codebase before drawing conclusions.

In this run, context cut tool calls by 57–83%, with comparable cost savings: gpt-5.4-mini's cost dropped 50%, Haiku 4.5's dropped 77%. Same questions, same project, fewer round trips: answers came from indexed lookups instead of multi-step file drilling.
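The reductions quoted above follow directly from the benchmark table. A quick check of the arithmetic (numbers taken verbatim from the table, rounded to whole percentages):

```python
# Tool calls and cost (no-context, context) per model, from the benchmark table.
runs = {
    "gpt-5.4-mini": {"calls": (286, 123), "cost": (0.66, 0.33)},
    "Claude Haiku 4.5": {"calls": (226, 38), "cost": (1.13, 0.26)},
}

for model, r in runs.items():
    call_cut = 1 - r["calls"][1] / r["calls"][0]
    cost_cut = 1 - r["cost"][1] / r["cost"][0]
    print(f"{model}: tool calls -{call_cut:.0%}, cost -{cost_cut:.0%}")
# gpt-5.4-mini: tool calls -57%, cost -50%
# Claude Haiku 4.5: tool calls -83%, cost -77%
```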

Context-on runs were also more accurate on structural questions. "List the singletons" and "which MonoBehaviours are in assembly X" returned correct counts and classifications when the engine was on; without it, the assistant sometimes mislabelled services, miscounted types, or cited file paths that didn't exist. Context narrows hallucination by serving deterministic indexed data for type/scene/singleton lookups, but doesn't fully eliminate it.

Configure

The context engine is on by default. Tune it in dynamo.yaml:

```yaml
ai:
  context:
    enabled: true
    auto_index: true
    auto_kb: true
    kb_model: ''         # empty = use chat model
    cache:
      type: none         # or 'vcs' to share via version control
```

Use /context in the REPL to change settings live.
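To share one index across a team, the `cache` section above can point at version control instead of staying local. A sketch, assuming the same `dynamo.yaml` keys shown in the default config (only the `cache.type` value changes):

```yaml
ai:
  context:
    enabled: true
    cache:
      type: vcs          # commit the index so teammates reuse it instead of rebuilding
```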