# Configuration
Configure Hanzo Dev with `config.toml`, environment variables, and CLI flags.
Hanzo Dev supports several mechanisms for setting config values (highest precedence first):
- CLI flags such as `--model o3`.
- The generic `-c` flag taking `key=value` pairs: `dev -c model=o3`.
- A config file at `$HANZO_HOME/config.toml` (default `~/.hanzo/config.toml`). The legacy paths `~/.code/config.toml` and `~/.codex/config.toml` are also read for backwards compatibility.
The `-c` flag uses TOML value syntax. Keys can contain dots for nested values:

```shell
dev -c model_providers.openai.wire_api=chat
dev -c shell_environment_policy.include_only='["PATH", "HOME", "USER"]'
```

If the value cannot be parsed as valid TOML, it is treated as a string, so `-c model=o3` and `-c model='"o3"'` are equivalent.
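For reference, dotted keys map onto nested TOML tables; the two overrides above written directly in `config.toml` would be:

```toml
[model_providers.openai]
wire_api = "chat"

[shell_environment_policy]
include_only = ["PATH", "HOME", "USER"]
```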
See Example Configuration for a fully annotated sample file.
## Core Model Selection
```toml
model = "gpt-5.1-codex"         # Primary model
review_model = "gpt-5.1-codex"  # Model for /review
model_provider = "openai"       # Provider id from [model_providers]
```

Optional manual overrides (auto-detected when unset):

```toml
# model_context_window = 128000
# model_auto_compact_token_limit = 0
# tool_output_token_limit = 10000
```

## Reasoning and Verbosity
```toml
model_reasoning_effort = "medium"  # minimal | low | medium | high | xhigh
model_reasoning_summary = "auto"   # auto | concise | detailed | none
model_verbosity = "medium"         # low | medium | high (Responses API models)
model_supports_reasoning_summaries = false
```

## Model Providers
Override or add providers under [model_providers]. Providers must expose an OpenAI-compatible HTTP API (Chat Completions or Responses).
```toml
[model_providers.openai-chat]
name = "OpenAI Chat Completions"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "chat"  # "chat" or "responses"
```

### Ollama (Local)
```toml
model = "llama3"
model_provider = "ollama"

[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
```

### Azure OpenAI
```toml
[model_providers.azure]
name = "Azure"
base_url = "https://YOUR_PROJECT.openai.azure.com/openai"
wire_api = "responses"
query_params = { api-version = "2025-04-01-preview" }
env_key = "AZURE_OPENAI_API_KEY"
```

### Proxy (Anthropic, Gemini, etc.)
```toml
model = "claude-opus-4.6"
model_provider = "claude-proxy"

[model_providers.claude-proxy]
name = "Claude (proxy)"
base_url = "http://127.0.0.1:8000/v1"
env_key = "ANTHROPIC_API_KEY"
wire_api = "responses"
requires_openai_auth = false
```

### Custom Headers
```toml
[model_providers.example]
http_headers = { "X-Custom" = "value" }
env_http_headers = { "X-Auth" = "AUTH_TOKEN_ENV" }
```

### Provider Options
| Field | Description |
|---|---|
| `name` | Display name in the UI |
| `base_url` | API base URL |
| `env_key` | Environment variable holding the bearer token |
| `wire_api` | `"chat"` or `"responses"` |
| `requires_openai_auth` | Require the OpenAI auth flow (default: `false`) |
| `query_params` | Extra URL query parameters |
| `http_headers` | Static HTTP headers |
| `env_http_headers` | Headers populated from environment variables |
| `request_max_retries` | Max request retries (default: 4) |
| `stream_max_retries` | Max stream retries (default: 5) |
| `stream_idle_timeout_ms` | Stream idle timeout (default: 300000) |
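Putting several of these options together, a provider entry might look like the following sketch (all names, URLs, and values are illustrative):

```toml
[model_providers.example-gateway]
name = "Example Gateway"
base_url = "https://llm.example.com/v1"
env_key = "EXAMPLE_GATEWAY_KEY"
wire_api = "responses"
query_params = { api-version = "2024-10-01" }
http_headers = { "X-Team" = "platform" }
request_max_retries = 2
stream_idle_timeout_ms = 120000
```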
## Approval and Sandbox
```toml
approval_policy = "on-request"  # untrusted | on-failure | on-request | never
sandbox_mode = "read-only"      # read-only | workspace-write | danger-full-access
```

See Sandbox for detailed sandbox configuration.
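For example, an illustrative pairing of the options above that lets the agent edit files inside the workspace while still prompting before anything riskier:

```toml
approval_policy = "on-request"
sandbox_mode = "workspace-write"
```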
### Workspace Write Settings
```toml
[sandbox_workspace_write]
writable_roots = []             # Additional writable paths beyond CWD
network_access = false          # Allow outbound network
exclude_tmpdir_env_var = false  # Exclude $TMPDIR
exclude_slash_tmp = false       # Exclude /tmp
```

## Shell Environment Policy
Controls which environment variables are visible to spawned processes.
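For instance, a policy that inherits only the core variables, strips AWS credentials, and pins one extra value might look like this (illustrative values):

```toml
[shell_environment_policy]
inherit = "core"
exclude = ["AWS_*"]
set = { CI = "1" }
```

The full set of fields: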
```toml
[shell_environment_policy]
inherit = "all"                   # all | core | none
ignore_default_excludes = true    # Skip filtering KEY/SECRET/TOKEN vars
exclude = []                      # Glob patterns to remove (e.g. "AWS_*")
set = {}                          # Explicit key/value overrides
include_only = []                 # Whitelist (if non-empty, keep only these)
experimental_use_profile = false  # Run via user shell profile
```

## MCP Servers
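Each server entry uses either a `command` (STDIO transport) or a `url` (streamable HTTP transport). A minimal STDIO entry might look like this sketch (server name and command are hypothetical):

```toml
[mcp_servers.fetch]
command = "uvx"
args = ["mcp-server-fetch"]
```

The full set of options for both transports: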
```toml
# STDIO transport
[mcp_servers.docs]
command = "docs-server"
args = ["--port", "4000"]
env = { "API_KEY" = "value" }
env_vars = ["ANOTHER_SECRET"]
cwd = "/path/to/server"
startup_timeout_sec = 10.0
tool_timeout_sec = 60.0
enabled_tools = ["search", "summarize"]
disabled_tools = ["slow-tool"]

# Streamable HTTP transport
[mcp_servers.github]
url = "https://github-mcp.example.com/mcp"
bearer_token_env_var = "GITHUB_TOKEN"
http_headers = { "X-Example" = "value" }
env_http_headers = { "X-Auth" = "AUTH_ENV" }
```

## History and File Opener
```toml
[history]
persistence = "save-all"  # save-all | none
# max_bytes = 5242880     # Oldest entries trimmed when exceeded
```

```toml
file_opener = "vscode"  # vscode | vscode-insiders | windsurf | cursor | none
```

## TUI and Notifications
```toml
[tui]
notifications = false  # true | false | ["agent-turn-complete", "approval-requested"]
animations = true
hide_agent_reasoning = false
show_raw_agent_reasoning = false
disable_paste_burst = false

# External notifier (argv array)
# notify = ["notify-send", "Hanzo Dev"]
```

## Notices
```toml
[notice]
# hide_full_access_warning = true
# hide_rate_limit_model_nudge = true
```

## Instruction Overrides
```toml
# developer_instructions = ""                    # Injected before AGENTS.md
# instructions = ""                              # Legacy override (prefer AGENTS.md)
# compact_prompt = ""                            # History compaction prompt override
# experimental_instructions_file = "path.txt"    # Override base instructions from file
# experimental_compact_prompt_file = "path.txt"  # Compact prompt from file
```

## Authentication
```toml
cli_auth_credentials_store = "file"   # file | keyring | auto
chatgpt_base_url = "https://chatgpt.com/backend-api/"
# forced_chatgpt_workspace_id = ""
# forced_login_method = "chatgpt"     # chatgpt | api
mcp_oauth_credentials_store = "auto"  # auto | file | keyring
```

## Project Documentation
```toml
project_doc_max_bytes = 32768        # Max bytes read from AGENTS.md
project_doc_fallback_filenames = []  # Fallbacks when AGENTS.md is missing
```

## Tools and Features
```toml
[tools]
web_search = false
view_image = true

[features]
unified_exec = false
apply_patch_freeform = false
view_image_tool = true
web_search_request = false
enable_experimental_windows_sandbox = false
skills = true
# js_repl = false
# js_repl_tools_only = false
```

## Profiles
Named configuration presets:
```toml
profile = "default"

[profiles.default]
model = "gpt-5.1-codex-max"
model_provider = "openai"
approval_policy = "on-request"
sandbox_mode = "read-only"
model_reasoning_effort = "medium"
```

## Projects
Mark specific worktrees as trusted:
```toml
[projects."/absolute/path/to/project"]
trust_level = "trusted"
```

## OpenTelemetry
```toml
[otel]
log_user_prompt = false
environment = "dev"
exporter = "none"  # none | otlp-http | otlp-grpc

# [otel.exporter."otlp-http"]
# endpoint = "https://otel.example.com/v1/logs"
# protocol = "binary"  # binary | json

# [otel.exporter."otlp-http".headers]
# "x-otlp-api-key" = "${OTLP_TOKEN}"
```

## JSON Schema
The generated JSON Schema for `config.toml` is at `codex-rs/core/config.schema.json` in the source repository.