Coming May 1, 2026

Code with AI for Free.
Pay Less When You Need Cloud.

One tool. Every model. Local-first orchestration.
Persistent memory. Structural governance. Offline by default.

$ brew install convergent-systems-co/tap/olympus

How It's Different

Not another AI wrapper. A complete orchestration layer.

                 Claude Code      GitHub Copilot        Olympus
Cost to start    Pay per token    $19/mo subscription   Free with local models
Models           Claude only      GPT + Claude          Ollama + Claude + Copilot + any OpenAI-compatible
Offline          No               No                    Full orchestration via Ollama
Memory           Session only     None                  Persistent, cross-machine, team-shared
Governance       None             None                  Structural review panels, signed emissions

Architecture

Three layers. Complete autonomy.

Kernel
  • Zeus: Orchestrator
  • Athena: Reasoning
  • Mnemosyne: Memory

Iris
  • Router: Model selection
  • Failover: Auto-recovery
  • Cost Gate: Drachma metering

Pantheon
  • Ollama: Local models
  • Claude: Anthropic API
  • Copilot: GitHub
  • OpenAI: GPT models
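The Router and Failover components above amount to trying providers in preference order and falling back on error. The sketch below is illustrative only: the provider names follow the Pantheon list, but the `complete` interface and the selection logic are our assumptions, not Olympus internals.

```python
# Illustrative Router + Failover sketch: try providers in preference
# order, fall over to the next on failure. Not the real Olympus API.

class Provider:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def complete(self, prompt):
        # Hypothetical completion call; a real provider would hit an API.
        if not self.healthy:
            raise ConnectionError(f"{self.name} unavailable")
        return f"[{self.name}] response to: {prompt}"

def route(prompt, providers):
    """Return the first successful completion, failing over in order."""
    errors = []
    for p in providers:
        try:
            return p.complete(prompt)
        except ConnectionError as e:
            errors.append(str(e))  # record the failure, try the next one
    raise RuntimeError("all providers failed: " + "; ".join(errors))

# Pantheon order from the diagram above: local first, then cloud.
pantheon = [Provider("Ollama", healthy=False),  # simulate a local outage
            Provider("Claude"),
            Provider("Copilot"),
            Provider("OpenAI")]
print(route("refactor this function", pantheon))
```

With the local Ollama provider down, the request falls over to Claude without the caller doing anything, which is the behavior the Failover component promises.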

Pricing

Start free with local models. Scale when you need cloud.

Billed monthly or annually (save 10% with annual billing).
Free
$0
  • All Pantheon modules
  • Local LLMs unlimited (Ollama)
  • Cloud Ollama when local hardware can't keep up
  • Cloud access via Drachma (pay-as-you-go)
  • Local persistent memory
  • Basic governance panels
Get Started
Team
$29/seat/mo
  • Everything in Pro
  • Shared team memory
  • Team governance enforcement
  • Pooled Drachma budget
  • Premium models (GPT-5.4 Pro, o3)
  • Priority support
Enterprise
Custom
  • Everything in Team
  • Org-wide cascading memory
  • Org-wide governance cascade
  • Custom Drachma volume pricing
  • All models including GPT-5.4 Pro
  • Dedicated support + SLA
Contact Sales

What is Drachma?

Drachma is Olympus's cloud token currency. Buy once, use across all providers — Claude, GPT-5.4, Groq, and more. Olympus routes your requests to the optimal model automatically. You never manage API keys or worry about provider pricing.

1 Δ ~7,700 tokens with GPT-4.1-nano (cheapest)
1 Δ ~256 tokens with GPT-5.4 (flagship)
1 Δ ~14,300 tokens with Groq llama-3.1-8b (fastest)
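Converting a token count into an approximate Drachma cost is simple division against the rate card above. A minimal sketch (the rates are the approximate figures published above; the function name is ours):

```python
# Approximate tokens per 1 Δ, taken from the published rate card.
TOKENS_PER_DRACHMA = {
    "gpt-4.1-nano": 7_700,        # cheapest
    "gpt-5.4": 256,               # flagship
    "groq-llama-3.1-8b": 14_300,  # fastest
}

def drachma_cost(tokens, model):
    """Approximate Δ cost for a request of `tokens` tokens on `model`."""
    return tokens / TOKENS_PER_DRACHMA[model]

# A 10,000-token request costs ~1.3 Δ on the cheapest model but ~39 Δ
# on the flagship, which is why automatic routing matters.
print(round(drachma_cost(10_000, "gpt-4.1-nano"), 1))  # ≈ 1.3
print(round(drachma_cost(10_000, "gpt-5.4"), 1))       # ≈ 39.1
```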

Can't run local models?

If your machine doesn't have the GPU or RAM to run Ollama models locally, Olympus offers Cloud-Based Ollama — the same open-weight models (Llama, Qwen, Mistral) hosted for you, billed at standard Drachma rates. Same models, same quality, no hardware requirements.
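The local-vs-cloud decision described above reduces to checking whether the model fits on the machine. A hypothetical sketch (the memory figures and the check itself are our assumptions for illustration, not Olympus internals):

```python
# Hypothetical fallback rule: run Ollama locally if the model fits in
# available memory, otherwise use Cloud-Based Ollama at Drachma rates.

# Assumed approximate memory footprints; real figures vary by quantization.
MODEL_RAM_GB = {"llama3.1:8b": 6, "qwen2.5:32b": 20, "mistral-large": 70}

def choose_backend(model, available_ram_gb):
    """Return 'local' if the model fits in memory, else 'cloud-ollama'."""
    needed = MODEL_RAM_GB[model]
    return "local" if available_ram_gb >= needed else "cloud-ollama"

print(choose_backend("llama3.1:8b", available_ram_gb=16))    # local
print(choose_backend("mistral-large", available_ram_gb=16))  # cloud-ollama
```

Either way the caller gets the same open-weight model; only the placement and billing change.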


Constitutional Governance

Every AI-generated code change passes through independent review panels before you see it. Not prompts — structural governance built into the runtime.

  • Security review
  • Architecture validation
  • Threat modeling
  • Cost analysis
  • License compliance
  • Performance audit

All automated. All auditable. Signed emission logs for every action. Customize or extend with your own policies.
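Structurally, a review pipeline like the one described can be modeled as a chain of independent panels that must all approve a change before it is emitted, with a signed log entry per verdict. An illustrative sketch, where the panel rules are toy placeholders and the "signature" is a plain digest rather than real cryptographic signing:

```python
import hashlib

# Each panel independently reviews a proposed change and returns a verdict.
def security_panel(change):
    return "eval(" not in change  # toy rule: reject obvious eval injection

def license_panel(change):
    return "GPL" not in change    # toy rule: flag copyleft snippets

PANELS = [("security", security_panel), ("license", license_panel)]

def review(change):
    """Run every panel; emit a logged verdict with a content digest."""
    verdicts = {name: panel(change) for name, panel in PANELS}
    approved = all(verdicts.values())  # one rejection blocks the change
    # Stand-in for a signed emission log entry, not real signing.
    signature = hashlib.sha256(change.encode()).hexdigest()[:12]
    return {"approved": approved, "verdicts": verdicts, "signature": signature}

result = review("def add(a, b):\n    return a + b\n")
print(result["approved"])  # True: both toy panels pass
```

Adding a custom policy is just appending another `(name, panel)` pair, which mirrors the "customize or extend" claim above.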

Seventeen review panels in total, including Security, Architecture, Threat Model, Cost, Performance, and Compliance.

Constitutional AI Metrics

Real-time critique verdicts and outcome scores from the CritiquePipeline ↦ ReviseDecider ↦ ScoreCollector loop.
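The loop named above can be sketched as: generate critiques of a draft, decide whether another revision pass is warranted, and collect an outcome score once no revision is needed. Everything below (the toy critique rule, the revision step, the scoring formula) is an illustrative assumption; only the three stage names come from the page.

```python
# Illustrative CritiquePipeline -> ReviseDecider -> ScoreCollector loop.
# Stage names mirror the ones mentioned above; internals are assumptions.

def critique_pipeline(draft):
    """Return a list of critiques; here, a toy length check."""
    return ["too short"] if len(draft) < 20 else []

def revise_decider(critiques):
    """Decide whether another revision pass is warranted."""
    return len(critiques) > 0

def score_collector(draft, rounds):
    """Toy outcome score: fewer revision rounds scores higher."""
    return max(0.0, 1.0 - 0.1 * rounds)

def run_loop(draft, max_rounds=5):
    rounds = 0
    while rounds < max_rounds:
        critiques = critique_pipeline(draft)
        if not revise_decider(critiques):
            break  # no outstanding critiques: accept the draft
        draft += " (revised)"  # stand-in for a model revision step
        rounds += 1
    return draft, score_collector(draft, rounds)

draft, score = run_loop("short draft")
print(score)
```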


Constitutional metrics are available to authenticated Olympus accounts.