The open-source dashboard for managing LLM providers.
Auto-detect, configure, and optimize your AI stack in one place.
npm i -g ondeckllm
Works with your stack
From zero to optimized in three steps.
npm i -g ondeckllm && ondeckllm
One command. Opens a local dashboard on port 3900.
Finds your existing API keys, Ollama models, and OpenClaw config automatically.
Set batting order, profiles, and fallbacks for every task type.
Everything you need to manage your AI lineup.
Manage all your LLM API keys in one place. One-click validation, balance checks, and status indicators.
Drag-and-drop model priority per task type. Set your starting lineup, pinch hitters, and bullpen.
Budget, Quality First, Local Only, Privacy Mode, Speed Demon. One-click switching.
Reads and writes your OpenClaw config. Atomic writes with automatic rollback on failure.
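Atomic writes of this kind are typically done by writing to a temporary file and renaming it over the target, keeping a backup for rollback. A minimal sketch in TypeScript; the function name and file paths are illustrative, not OnDeckLLM's actual API:

```typescript
import {
  writeFileSync,
  renameSync,
  copyFileSync,
  existsSync,
  unlinkSync,
  readFileSync,
} from "node:fs";

// Hypothetical helper: write a config file atomically, restoring the
// previous contents if anything fails mid-write.
function atomicWriteConfig(path: string, contents: string): void {
  const tmp = path + ".tmp";
  const backup = path + ".bak";
  if (existsSync(path)) copyFileSync(path, backup); // keep a rollback copy
  try {
    writeFileSync(tmp, contents); // write to a temp file first
    renameSync(tmp, path); // rename is atomic on POSIX filesystems
  } catch (err) {
    // Roll back: restore the backup and clean up the partial temp file.
    if (existsSync(backup)) copyFileSync(backup, path);
    if (existsSync(tmp)) unlinkSync(tmp);
    throw err;
  }
}
```

Because the rename either fully succeeds or leaves the old file in place, a running agent never observes a half-written config.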
One-click local model setup. Browse, pull, and configure models with guided starter packs.
Privacy proxy integration. Enable Privacy Mode and all cloud calls route through CloakClaw automatically.
Stop editing JSON configs.
Start managing your AI lineup.
Part of the Canonflip ecosystem.
Direct config sync. Changes in OnDeckLLM reflect instantly in your running OpenClaw agent.
openclaw.com →
Privacy proxy for your LLM calls. Strip PII before it hits the cloud. Automatic integration.
cloakclaw.com →