# Local LLM

## Project Summary
| Field | Value |
|---|---|
| PRJ ID | PRJ-SPOKE-014 |
| Owner | Evan Rosado |
| Priority | P2 (Medium) |
| Status | Maintenance |
| Repository | |
| Antora Component | N/A (non-Antora) |
| Antora Title | N/A |
| Category | Development |
| 2026 Commits | 3 |
| Site URL | N/A (local service) |
## Purpose
The ollama-local project provides Docker Compose configurations for running Ollama (local LLM inference) on the Domus workstation. It includes GPU-accelerated and CPU-only compose files, start/stop/status scripts, and API usage examples.
This supports the local model fine-tuning initiative and provides offline AI capability for code assistance and document generation when cloud APIs are unavailable.
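The GPU variant relies on Docker Compose's NVIDIA device-reservation support. As a minimal sketch of what such a compose file might look like (the file name, service name, and volume name are illustrative assumptions; the `ollama/ollama` image, default API port 11434, and the Compose GPU reservation syntax are standard):

```yaml
# docker-compose.gpu.yml (illustrative sketch, not necessarily the project's actual file)
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"               # Ollama's default REST API port
    volumes:
      - ollama-models:/root/.ollama # persist pulled models across restarts
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all            # expose the workstation GPU to the container
              capabilities: [gpu]

volumes:
  ollama-models:
```

A CPU-only variant would be identical minus the `deploy.resources` block.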
## Scope

### In Scope

- Docker Compose for Ollama (GPU and CPU variants)
- Service lifecycle scripts (start, stop, status)
- Ollama REST API usage examples (see the command sketch after this list)
- Model management (pull, run, list)
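The scripts and examples themselves are not reproduced here; as a hedged sketch, the commands below show what lifecycle scripts of this kind would typically wrap. The compose file name is an assumption carried over from the sketch above, while the `ollama` CLI subcommands and the `/api/generate` endpoint are standard Ollama:

```bash
# Lifecycle: what start/stop/status scripts typically wrap
docker compose -f docker-compose.gpu.yml up -d   # start the service in the background
docker compose -f docker-compose.gpu.yml ps      # status of the running container
docker compose -f docker-compose.gpu.yml down    # stop and remove the container

# Model management via the ollama CLI inside the container
docker compose -f docker-compose.gpu.yml exec ollama ollama pull llama3
docker compose -f docker-compose.gpu.yml exec ollama ollama list
docker compose -f docker-compose.gpu.yml exec ollama ollama run llama3 "Hello"

# REST API usage: a single non-streaming completion
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Say hello in one sentence.", "stream": false}'
```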
### Out of Scope

- Fine-tuning workflows (covered by the local-model-fine-tuning project)
- AI-powered CLI tools (covered by the Kora project)
- Production deployment (this is a local development service)
## Status

| Indicator | Detail |
|---|---|
| Activity Level | Maintenance — 3 commits, functional and stable |
| Maturity | Stable — Docker Compose and scripts working |
| Last Activity | 2026 |
| Key Milestone | GPU-accelerated Docker Compose with lifecycle scripts |
| Deployment Status | Running locally on workstation, RTX 5090 GPU |
## Metadata

| Field | Value |
|---|---|
| PRJ ID | PRJ-SPOKE-014 |
| Author | Evan Rosado |
| Date Created | 2026-03-30 |
| Last Updated | 2026-03-30 |
| Status | Maintenance |
| Next Review | 2026-06-30 |