OpenCode: Open-Source AI Coding Agent
Portfolio Summary
| Field | Value |
|---|---|
| PRJ ID | PRJ-opencode |
| Owner | Evan Rosado |
| Priority | P0 — Force Multiplier (Primary AI Coding Platform) |
| Status | Initial Setup |
| Goal | Provider-agnostic AI coding agent with full local + cloud LLM flexibility |
| Platform | OpenCode TUI (sst/opencode, MIT license) |
| GitHub | |
| Website | |
| Version | v1.3.15+ (multiple releases per day) |
| License | MIT |
| Config Home | |
| Project Config | |
| Providers | 6 configured (Ollama, DeepSeek, GitHub Copilot, OpenAI, Gemini, Claude) |
| Agents | 4 built-in (Build, Plan, General, Explore) + custom |
| Skills | Cross-compatible with |
| Themes | Catppuccin Mocha (primary), 11+ built-in |
| Plugins | JS/TS event-driven plugin system |
| MCP | Local (stdio) + Remote (SSE/streamable-http) + OAuth |
Why Open Source
Claude Code is a known quantity — battle-tested, deeply configured, and productive. Moving to OpenCode is not about abandoning what works. It is about:
- **Provider freedom** — Not locked to one vendor. Ollama local, DeepSeek for cost, Copilot from existing subscription, Claude/GPT for heavy lifting
- **Cost control** — Local models on RTX 5090 (24GB VRAM) for routine tasks, cloud APIs only when local falls short
- **Full transparency** — MIT-licensed codebase. Every tool call, every prompt, every decision is auditable
- **Cross-compatibility** — OpenCode reads `.claude/skills/` directories. Migration preserves existing investments
- **Plugin ecosystem** — JS/TS plugins with a full event hook system vs bash-only hooks
- **Desktop + TUI + Remote** — Client/server architecture enables phone-based control, native desktop app, and terminal workflow
Strategic Position
| Use Case | Platform | Rationale |
|---|---|---|
| Documentation work (domus-*) | OpenCode + Ollama local | Free, offline-capable, fast for AsciiDoc editing |
| Complex refactoring | OpenCode + Claude API | Opus-level reasoning when needed, pay per token |
| Code generation | OpenCode + DeepSeek | Strong coder at a fraction of the cost |
| Quick edits | OpenCode + GitHub Copilot | Already paying for Pro, fast completions |
| Multi-file analysis | OpenCode + Gemini 2.5 | Large context window, good at code review |
| Fallback / comparison | OpenCode + OpenAI GPT-4.1 | Second opinion, different training data |
Architecture Overview
Client/Server Model
OpenCode runs as a client/server architecture. The server manages LLM sessions, tool execution, and state. The client renders the TUI. This separation enables:
- **Remote operation** — Control from a phone or another machine
- **Session persistence** — Sessions survive client disconnects
- **Desktop + TUI** — Same server, different frontends
Deployment Model
OpenCode configuration lives in two scopes with layered merging:
```
~/.config/opencode/       # Global configuration
├── opencode.json         # Global provider/agent/tool config
├── tui.json              # Theme, keybindings, TUI behavior
├── agents/               # Global custom agents (.md files)
├── commands/             # Global custom slash commands (.md)
├── modes/                # Global custom modes (.md)
├── plugins/              # Global plugins (JS/TS)
├── skills/               # Global skills (SKILL.md)
├── themes/               # Global custom themes (.json)
└── tools/                # Global tool overrides

<repo>/.opencode/         # Project-scoped overrides
├── agents/               # Project-specific agents
├── commands/             # Project-specific commands
├── modes/                # Project-specific modes
├── plugins/              # Project-specific plugins
├── skills/               # Project-specific skills
├── themes/               # Project-specific themes
└── tools/                # Project-specific tool overrides

<repo>/AGENTS.md          # Project instructions (like CLAUDE.md)
<repo>/opencode.json      # Project config (merges with global)
```
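As a sketch of what the project-level instruction file might contain, here is a hypothetical minimal `AGENTS.md` for a documentation repo — the section names and rules are illustrative, not a format OpenCode prescribes:

```markdown
# AGENTS.md — hypothetical sketch for a documentation repo

## Conventions
- AsciiDoc sources are the source of truth; edit .adoc partials, never generated HTML.

## Security
- Never stage or commit key material or .env files.

## Build
- Verify changes with the Antora build before committing.
```

Because this file is injected into the system prompt as guidance rather than enforced, hard rules (tool allow/deny, provider routing) still belong in `opencode.json`.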
Configuration Precedence
Later sources override earlier. Configs merge, they do not replace:
| Priority | Source | Scope |
|---|---|---|
| 1 (lowest) | Remote config ( | Organization |
| 2 | | Global |
| 3 | | Session |
| 4 | | Project |
| 5 | | Project |
| 6 (highest) | | Inline override |
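As an illustration of merge-over-replace, assume a hypothetical global config and a project override; the `model` and `small_model` keys mirror the defaults discussed in the field notes below, but treat the exact key names as assumptions:

```jsonc
// Global (~/.config/opencode/opencode.json) — hypothetical values
{ "model": "anthropic/claude-sonnet-4-6", "small_model": "anthropic/claude-haiku-4-5" }

// Project (<repo>/opencode.json) — overrides only the default model
{ "model": "ollama-local/qwen3:14b" }

// Effective config: merged, not replaced — the global small_model survives
{ "model": "ollama-local/qwen3:14b", "small_model": "anthropic/claude-haiku-4-5" }
```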
Variable Substitution
Config files support dynamic values:
```json
{
  "provider": {
    "custom": {
      "api": {
        "apiKey": "{env:CUSTOM_API_KEY}",
        "systemPrompt": "{file:~/.config/opencode/system-prompt.md}"
      }
    }
  }
}
```

- `{env:VARIABLE_NAME}` — Inject environment variables
- `{file:path/to/file}` — Inject file contents (supports `~/` and absolute paths)
Configuration Hierarchy
| Layer | Purpose | Enforcement | Location |
|---|---|---|---|
| `opencode.json` | Providers, agents, tools, MCP, permissions | Hard enforcement (provider routing, tool allow/deny) | |
| `AGENTS.md` | Project instructions, conventions, context | Guidance (injected into system prompt) | Repo root |
| `tui.json` | Themes, keybindings, scroll, diff style | TUI behavior only | |
| Skills | Reusable instruction sets loaded on demand | On-demand via skill tool | |
| Agents | Specialized agent definitions | Agent dispatch by name | |
| Commands | Custom slash commands | User-invoked via | |
| Modes | Tool/permission presets (Build, Plan, custom) | Mode-based tool filtering | |
| Plugins | JS/TS event hooks and custom tools | Auto-loaded at startup | |
| Themes | Color schemes and styling | Visual only | |
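As a sketch of what an entry in `agents/` might look like, here is a hypothetical agent definition — the frontmatter keys (`description`, `mode`, `tools`) are assumptions about OpenCode's agent format, not confirmed schema:

```markdown
---
description: Read-only documentation reviewer (hypothetical)
mode: subagent
tools:
  write: false
  edit: false
---
Review AsciiDoc files for broken cross-references and missing attributes.
Report findings in a summary; never modify files.
```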
JSON Schema Support
Both config files have published schemas for validation and IDE autocomplete:
| File | Schema URL |
|---|---|
| | |
| | |

Add to `opencode.json`:

```json
{
  "$schema": "https://opencode.ai/config.json"
}
```
Feature Comparison: OpenCode vs Claude Code
| Feature | Claude Code | OpenCode |
|---|---|---|
| License | Proprietary (Anthropic) | MIT (open source) |
| Provider lock-in | Anthropic only | 75+ providers, local models |
| Local models | Not supported | Ollama, LM Studio, llama.cpp |
| Config file | | |
| Config directory | | |
| Skills | | |
| Agents | Single agent + subagent dispatch | Multiple primary agents + subagents + custom |
| Modes | None | Build/Plan/custom modes with Tab toggle |
| MCP support | Local (stdio) | Local (stdio) + Remote (SSE) + OAuth |
| Themes | None | 11+ built-in + fully custom JSON themes |
| Keybindings | Partial customization | Full customization with leader key system |
| Hooks/Plugins | Bash scripts (5 events) | JS/TS plugins (9+ events, can define tools) |
| Undo/Redo | Not available | Built-in with file snapshots |
| Desktop app | None | Beta (macOS, Windows, Linux) |
| Client/server | No | Yes (enables remote control) |
| Session sharing | No | Yes ( |
| Auto-compaction | Manual compact | Automatic with configurable behavior |
| Output styles | Dedicated system | Via AGENTS.md instructions |
| Path-specific rules | Auto-load by file type | Via modes or AGENTS.md (manual) |
| Web search | WebSearch tool | websearch (Exa AI) |
| Cost | Anthropic pricing only | Any provider pricing; local = free |
| Stars | N/A (proprietary) | 137k+ (as of 2026-04) |
| Release cadence | Weekly | Multiple per day |
Verdict
Neither tool is strictly superior. The choice depends on priorities:
Choose Claude Code when:
- Working exclusively with Claude models
- Simplicity matters more than flexibility
- Path-specific rule auto-loading is critical
- Anthropic support and model optimization are valued
Choose OpenCode when:
- Provider diversity is essential (local + cloud, multiple vendors)
- Cost control matters (free local models for routine work)
- Full customization is desired (themes, keybindings, plugins)
- Open-source philosophy is important
- Undo/redo and session sharing are needed
Our approach: Run both. Claude Code for Anthropic-heavy work. OpenCode as the primary daily driver with multi-provider flexibility.
Field Notes
FN-001: Custom Theme Crash (2026-04-04)
Severity: TUI crash on startup — fatal TypeError
Incident:
Custom theme `catppuccin-domus.json` caused OpenCode to crash immediately:

```
TypeError: Object.entries requires that input parameter not be null or undefined
    at resolveTheme (/$bunfs/root/src/index.js:484986:53)
```
Root Cause:
The custom theme JSON used flat key names (`primary`, `secondary`, `syntaxKeyword`, etc.) that don't match OpenCode's internal theme schema. The `resolveTheme` function expects a specific nested structure that isn't documented in the public docs. Our flat format resulted in `null` being passed to `Object.entries`.
Fix Applied:
Reverted `tui.json` to the built-in `catppuccin` theme. The custom theme file is kept in `themes/` for future investigation.

```shell
sed -i 's/"catppuccin-domus"/"catppuccin"/' tui.json
```
Prevention:
- Test custom themes against the actual schema before committing
- The built-in `catppuccin` (Mocha) is already correct for the ecosystem
- Custom themes need reverse-engineering from OpenCode source — the docs are incomplete on theme structure
- Always have a fallback: `"theme": "catppuccin"` is the safe default
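Reflecting that fallback, a minimal known-good `tui.json` sketch (assuming no other TUI settings are needed) is simply:

```json
{
  "theme": "catppuccin"
}
```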
Status: Built-in catppuccin active. Custom theme deferred until schema is documented or reverse-engineered.
FN-002: Ollama Install and First Local Model (2026-04-04)
Decision: Install Ollama and use local inference as the default model before cloud API keys are configured.
Why: No API keys were set up yet in dsec. The RTX 5090 Laptop GPU (24GB VRAM) can run 14B parameter models comfortably. This lets OpenCode work immediately at zero cost.
Implementation:
```shell
# Install Ollama (adds systemd service, NVIDIA GPU detected automatically)
curl -fsSL https://ollama.ai/install.sh | sh

# Pull qwen3:14b (9.3 GB)
ollama pull qwen3:14b
```
Ollama installed itself with:
- User `ollama` added to `video` group (GPU access)
- Current user added to `ollama` group
- systemd service created and enabled at `/etc/systemd/system/ollama.service`
Config changes:
- Uncommented the `ollama-local` provider block in `opencode.jsonc`
- Model changed from `qwen2.5-coder:14b` (placeholder) to `qwen3:14b` (actually pulled)
- Default model set to `ollama-local/qwen3:14b` temporarily
Verification:
```shell
systemctl is-active ollama   # active
ollama list                  # qwen3:14b, 9.3 GB
```
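Beyond systemd status, the HTTP endpoint can be probed directly before routing OpenCode traffic to it. This sketch assumes Ollama's default port 11434 and its `/api/tags` model-listing endpoint:

```shell
#!/bin/sh
# Probe the local Ollama API; fall back to a cloud provider if unreachable.
OLLAMA_URL="${OLLAMA_HOST:-http://localhost:11434}"

if curl -fsS --max-time 2 "$OLLAMA_URL/api/tags" >/dev/null 2>&1; then
  OLLAMA_STATUS="reachable"
else
  OLLAMA_STATUS="unreachable"   # route requests to a cloud provider instead
fi
echo "ollama at $OLLAMA_URL: $OLLAMA_STATUS"
```

The probe degrades gracefully: a missing `curl`, a stopped service, or a firewalled port all land in the fallback branch rather than aborting.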
Next steps:
- Configure dsec API keys for cloud providers
- Switch the default model back to `anthropic/claude-sonnet-4-6` after keys are loaded
- Test model quality: can `qwen3:14b` handle AsciiDoc attribute insertion and worklog creation?
FN-003: Provider Strategy Without API Keys (2026-04-04)
Decision: Default to Ollama local until cloud API keys are configured.
Why: The user has accounts at all 4 cloud providers but hadn’t generated API keys yet. Rather than blocking on key acquisition, default to local inference for immediate usability.
Model routing (current):
| Model | Status |
|---|---|
| | Active (default) |
| | Configured, awaiting |
| | Configured, awaiting |
| | Configured, awaiting |
| | Configured, awaiting |
| GitHub Copilot | Awaiting |
How to apply: After dsec keys are loaded, update `opencode.jsonc`:

```shell
# Preview
sed -n '7p' ~/.config/opencode/opencode.jsonc

# Switch default back to Claude Sonnet
sed -i '7s|"ollama-local/qwen3:14b"|"anthropic/claude-sonnet-4-6"|' \
  ~/atelier/_projects/personal/dots-quantum/opencode/.config/opencode/opencode.jsonc

# Switch small model back to Haiku
sed -i '8s|"ollama-local/qwen3:14b"|"anthropic/claude-haiku-4-5"|' \
  ~/atelier/_projects/personal/dots-quantum/opencode/.config/opencode/opencode.jsonc

# Restow
stow -R -t ~ opencode
```
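The line-addressed sed above is brittle if the config file is ever reordered. A pattern-addressed variant survives reformatting; this sketch runs against a hypothetical stand-in config, and the `model`/`small_model` key names are assumptions mirroring this note:

```shell
#!/bin/sh
# Demonstrate a pattern-based model switch on a temporary stand-in config.
CFG="$(mktemp)"
cat > "$CFG" <<'EOF'
{
  "model": "ollama-local/qwen3:14b",
  "small_model": "ollama-local/qwen3:14b"
}
EOF

# Address lines by key + value, not by line number
sed -i 's|"model": "ollama-local/qwen3:14b"|"model": "anthropic/claude-sonnet-4-6"|' "$CFG"
sed -i 's|"small_model": "ollama-local/qwen3:14b"|"small_model": "anthropic/claude-haiku-4-5"|' "$CFG"

grep -c 'anthropic/' "$CFG"   # prints 2: both keys switched
```

Because each substitution names its key, inserting or removing lines in `opencode.jsonc` no longer silently breaks the switch.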
Roadmap
Phase 1: Foundation (Week 1)
| Task | Description | Status |
|---|---|---|
| Install OpenCode | | [ ] |
| Configure Ollama provider | Local models with 32k context, GPU-accelerated | [ ] |
| Configure DeepSeek provider | API key, deepseek-chat and deepseek-reasoner models | [ ] |
| Configure Copilot provider | Device login, GPT-4.1 access | [ ] |
| Configure OpenAI provider | API key, GPT-4.1 and O4-mini | [ ] |
| Configure Gemini provider | API key, Gemini 2.5 Pro and Flash | [ ] |
| Configure Claude provider | API key, Opus/Sonnet/Haiku | [ ] |
| Set Catppuccin theme | | [ ] |
| Test all providers | Verify each provider responds correctly | [ ] |
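For the final "test all providers" task, a smoke-test loop can cover every configured model. The model IDs and the `opencode run -m` invocation below are assumptions about the CLI and provider naming, so this is written as a dry run that prints the commands rather than executing them:

```shell
#!/bin/sh
# Print one smoke-test command per provider; model IDs are illustrative
# guesses — substitute the names actually used in opencode.json.
MODELS="ollama-local/qwen3:14b deepseek/deepseek-chat openai/gpt-4.1 google/gemini-2.5-pro anthropic/claude-sonnet-4-6"

for m in $MODELS; do
  printf 'opencode run -m %s "reply with exactly: ok"\n' "$m"
done
```

Piping the output through `sh` would execute the tests once the invocation syntax is confirmed against the installed version.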
Phase 2: AGENTS.md & Permissions (Week 2)
| Task | Description | Status |
|---|---|---|
| Create AGENTS.md for domus-captures | Port key CLAUDE.md sections (AsciiDoc rules, security, conventions) | [ ] |
| Configure global permissions | Translate the 107 allow + 10 deny rules from Claude Code | [ ] |
| Test skills cross-compatibility | Verify `/deploy`, `/worklog`, `/session` work from `.claude/skills/` | [ ] |
| Set up instruction files | Global rules in `~/.config/opencode/` | [ ] |
| Configure keybindings | Match muscle memory from Claude Code in `tui.json` | [ ] |
Phase 3: Agents & Commands (Week 3)
| Task | Description | Status |
|---|---|---|
| Port `adoc-linter` agent | Translate from Claude Code agent format | [ ] |
| Port `build-fixer` agent | Translate from Claude Code agent format | [ ] |
| Port `worklog-creator` agent | Translate from Claude Code agent format | [ ] |
| Create `/audit-adoc` command | Custom command for AsciiDoc linting | [ ] |
| Create `/build-check` command | Custom command for Antora build verification | [ ] |
| Create Review mode | Read-only mode for documentation review | [ ] |
Phase 4: Plugins (Week 4)
| Task | Description | Status |
|---|---|---|
| Build `asciidoc-validator` plugin | Port `validate-asciidoc-attrs.sh` to TypeScript | [ ] |
| Build `security-guard` plugin | Port sensitive file staging check | [ ] |
| Build `shellcheck-runner` plugin | Port PostToolUse ShellCheck validation | [ ] |
| Build `session-logger` plugin | Log tool calls and model switches for cost tracking | [ ] |
| Build `cost-tracker` plugin | Estimate token usage and cost per provider | [ ] |
Phase 5: Optimization (Ongoing)
| Task | Description | Status |
|---|---|---|
| Fine-tune model routing | Document which tasks go to which provider based on real-world results | [ ] |
| Create custom Catppuccin Domus theme | Match exact colors from Neovim/Hyprland/Antora UI | [ ] |
| Build MCP servers | GitHub MCP, Antora MCP as needed | [ ] |
| Document workflows | As real workflows emerge, create `wf-*.adoc` partials | [ ] |
| Publish comparison data | Real cost/quality/speed comparisons between providers | [ ] |
Improvement Proposals
| Priority | Proposal | Rationale | Effort |
|---|---|---|---|
| P1 | Port CLAUDE.md to AGENTS.md with provider-aware sections | 873 lines of battle-tested instructions. Most transfer directly. Provider-specific guidance is new. | L |
| P1 | Build cost-tracker plugin | Multi-provider means cost can spiral. Need visibility into spend per provider per session. | M |
| P1 | Create domus-* AGENTS.md template | Each domus-* repo needs AGENTS.md. Template ensures consistency across 15 repos. | S |
| P2 | Auto-model-routing plugin | Automatically select cheapest adequate model per task complexity. Start with Ollama, escalate to cloud. | L |
| P2 | Provider health dashboard command | | S |
| P3 | OpenCode contribution | Contribute path-specific rule auto-loading feature upstream (biggest missing feature). | XL |
Metadata
| Field | Value |
|---|---|
| PRJ ID | PRJ-opencode |
| Author | Evan Rosado |
| Created | 2026-04-04 |
| Last Updated | 2026-04-04 |
| Status | Initial Setup |
| Next Review | 2026-04-18 |
> "Open source is not about cost — it's about control. Every provider, every model, every config is a choice I make, not a choice made for me."