Configuration

Everything starts with ossature.toml at the project root.

Minimal Config

[project]
name = "myproject"
version = "0.1.0"
spec_dir = "specs"

[output]
dir = "output"
language = "python"

[llm]
model = "anthropic:claude-sonnet-4-6"

The [llm] section with a model field is required. Everything else has defaults.
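Taken literally, that means a bare [llm] table should be accepted on its own (an assumption based on the stated defaults; the default values for name, spec_dir, and so on are whatever Ossature ships with):

```toml
# Assuming the stated defaults, this should be the smallest valid config:
[llm]
model = "anthropic:claude-sonnet-4-6"
```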

Project Section

[project]
name = "myproject"
version = "0.1.0"
spec_dir = "specs"       # where .smd and .amd files live
context_dir = "context"  # where user-provided context files live

Ossature discovers spec files automatically by scanning spec_dir recursively. You don't list them in the config.
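The recursive scan can be pictured with a short sketch (illustrative only; discover_specs is a hypothetical helper, not Ossature's actual code, and it assumes discovery matches files purely by extension):

```python
from pathlib import Path

def discover_specs(spec_dir: str) -> list[Path]:
    """Collect every .smd and .amd file under spec_dir, however deeply nested."""
    return sorted(
        p for p in Path(spec_dir).rglob("*")
        if p.suffix in {".smd", ".amd"}
    )
```

In practice this means adding a new spec is just dropping a file anywhere under specs/; no config change is needed.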

Output Section

[output]
dir = "output"           # where generated code goes
language = "python"      # target language

The language field tells the LLM what language to generate. It isn't limited to a fixed list, but you'll get the best results with common languages like python, typescript, rust, go, and lua.

Build Section

[build]
max_fix_attempts = 3     # how many times to retry fixing a failed task
setup = "cargo init"     # optional: run before the first task
verify = "cargo check"   # optional: override default verification command
test = "cargo test"      # optional: override default test command

The setup command runs once before the first build task. Useful for project initialization that the LLM shouldn't handle.

The verify and test commands override what Ossature uses to check generated code. If not set, the LLM determines verification commands per task based on the language and project structure.
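As an example, a Python project might pin explicit commands rather than leave them to the LLM (the values here are hypothetical; use whatever your toolchain provides):

```toml
[build]
max_fix_attempts = 5
verify = "python -m compileall output"
test = "python -m pytest output/tests"
```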

LLM Section

[llm]
model = "anthropic:claude-sonnet-4-6"

The model format is provider:model-name. Supported providers are anthropic and ollama.

Set your API key as an environment variable:

export ANTHROPIC_API_KEY="sk-ant-..."

Per-Role Overrides

You can use different models for different stages. This lets you use a stronger model for auditing and a faster one for fixing compilation errors.

[llm]
model = "anthropic:claude-sonnet-4-6"     # default for all roles
audit = "anthropic:claude-opus-4-6"       # spec review
planner = "anthropic:claude-sonnet-4-6"   # plan generation
build = "anthropic:claude-sonnet-4-6"     # code generation
fixer = "anthropic:claude-sonnet-4-6"     # fixing failed tasks
brief = "anthropic:claude-sonnet-4-6"     # brief generation
interface = "anthropic:claude-sonnet-4-6" # interface extraction

Any role that isn't explicitly set falls back to the default model.

Ollama (Local Models)

[llm]
model = "ollama:devstral-latest"
ollama_base_url = "http://localhost:11434/v1"   # optional, this is the default

You can mix providers. Use Ollama for code generation and Anthropic for auditing:

[llm]
model = "ollama:devstral-latest"
audit = "anthropic:claude-opus-4-6"
planner = "anthropic:claude-sonnet-4-6"
fixer = "anthropic:claude-sonnet-4-6"

Config Discovery

Ossature searches for ossature.toml by walking up from the current directory. Override with --config:

ossature build --config /path/to/ossature.toml

Full Example

Here's a config for a Lua game project using context files:

[project]
name = "math_quest"
version = "0.1.0"
spec_dir = "specs"
context_dir = "context"

[output]
dir = "output"
language = "lua"

[llm]
model = "anthropic:claude-opus-4-6"

And one for a Python CLI tool with mixed models:

[project]
name = "Spenny"
version = "0.1.0"
spec_dir = "specs"

[output]
dir = "output"
language = "python"

[llm]
model = "ollama:devstral-latest"
audit = "anthropic:claude-opus-4-6"
planner = "anthropic:claude-sonnet-4-6"
fixer = "anthropic:claude-sonnet-4-6"