🦞 mnfst/manifest — Smart LLM Routing for OpenClaw. Cut Costs up to 70%

3,536 Stars 🍴 177 Forks 💻 TypeScript 📄 MIT License

With 3,536 stars on GitHub, the project is a significant part of the OpenClaw ecosystem.

Manifest

🦞 Take control of your OpenClaw costs

[Badges: GitHub stars · npm version · npm downloads · CI status · Codecov · license · Discord]

What do you get?

  • 🔀 Routes every request to the right model — and cuts costs up to 70%
  • 📊 Track your expenses — real-time dashboard that shows tokens and costs per model
  • 🔔 Set limits — configure soft or hard alerts that fire when your consumption exceeds a threshold you set
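As an illustration only, a limits block in `~/.openclaw/manifest/config.json` might look like the sketch below. The key names (`limits`, `monthlyUsd`, `softAlertAtPercent`, `hardStop`) are hypothetical — check the project documentation for the actual schema.

```json
{
  "limits": {
    "monthlyUsd": 50,
    "softAlertAtPercent": 80,
    "hardStop": true
  }
}
```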

Why Manifest

OpenClaw sends every request to the same model, which is not cost-effective: you end up summoning big models for tiny tasks. Manifest solves this by redirecting each query to the most cost-effective model.

Manifest is an OpenClaw plugin that intercepts your query, passes it through a 23-dimension scoring algorithm in <2ms and sends it to the most suitable model.
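As a rough illustration of how dimension-based routing can work — this is NOT Manifest's actual 23-dimension algorithm; the dimensions, thresholds, and model profiles below are all invented:

```typescript
// Hypothetical sketch of dimension-based routing. Each dimension maps
// the query to a score in [0, 1]; the cheapest model whose capability
// covers the aggregate complexity wins.
type Dimension = (query: string) => number;

interface ModelProfile {
  name: string;
  costPerMTok: number; // USD per million tokens (illustrative)
  capability: number;  // 0..1, how demanding a query it can handle
}

const dimensions: Dimension[] = [
  (q) => Math.min(q.length / 2000, 1),                       // query length
  (q) => (/\bfunction\b|\bclass\b|\bdef\b/.test(q) ? 1 : 0), // code-likeness
  (q) => (/\bwhy\b|\bexplain\b|\bprove\b/i.test(q) ? 1 : 0), // reasoning cues
  // ...a real router would score many more dimensions
];

// Average the dimension scores into one complexity estimate.
function complexity(query: string): number {
  const total = dimensions.reduce((sum, d) => sum + d(query), 0);
  return total / dimensions.length;
}

// Pick the cheapest model whose capability covers the query;
// fall back to the most capable model if none qualifies.
function route(query: string, models: ModelProfile[]): ModelProfile {
  const need = complexity(query);
  const fit = models
    .filter((m) => m.capability >= need)
    .sort((a, b) => a.costPerMTok - b.costPerMTok);
  return fit[0] ?? [...models].sort((a, b) => b.capability - a.capability)[0];
}

const models: ModelProfile[] = [
  { name: "mini", costPerMTok: 0.5, capability: 0.4 },
  { name: "frontier", costPerMTok: 15, capability: 1.0 },
];
```

With these made-up profiles, `route("Rename this variable", models)` stays on the cheap model, while a query full of code and "explain why" cues clears the capability bar and routes to the bigger one.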

Unlike almost all alternatives, everything stays on your machine. No suspicious installer, no black box, no third party, no crypto.

Quick Start

Cloud vs Local

Manifest is available in cloud and local versions. Both install the same OpenClaw plugin; the local version stores telemetry data on your computer, while the cloud version uses our secure platform.

Use cloud if

  • You want a quick install
  • You want to access the dashboard from different devices
  • You want to connect multiple agents

Use local if

  • You don't want the telemetry data to move from your computer
  • You don’t need multi-device access
  • You don't want to subscribe to a cloud service

If you don't know which version to choose, start with the cloud version.

Cloud (default)


openclaw plugins install manifest
openclaw config set plugins.entries.manifest.config.apiKey "mnfst_YOUR_KEY"
openclaw gateway restart

Sign up at app.manifest.build to get your API key.

Local


openclaw plugins install manifest
openclaw config set plugins.entries.manifest.config.mode local
openclaw gateway restart

Dashboard opens at http://127.0.0.1:2099. Telemetry from your agents flows in automatically.

To proxy the dashboard to your tailnet with Tailscale (Tailscale must be installed on both devices):


tailscale serve --bg 2099

Features

  • LLM Router — scores each query and calls the most suitable model
  • Real-time dashboard — tokens, costs, messages, and model usage at a glance
  • No coding required — installs as a simple OpenClaw plugin
  • OTLP-native — standard OpenTelemetry ingestion (traces, metrics, logs)
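Because ingestion is OTLP-native, anything that emits standard OpenTelemetry can feed the dashboard. The sketch below shows the rough shape of an OTLP/JSON trace payload; the span values are fabricated, and the exact ingest URL for Manifest is an assumption, not a documented endpoint.

```typescript
// Minimal OTLP/JSON trace payload (OpenTelemetry protocol, JSON encoding).
const now = Date.now() * 1_000_000; // OTLP timestamps are unix nanoseconds

const payload = {
  resourceSpans: [
    {
      resource: {
        attributes: [
          { key: "service.name", value: { stringValue: "my-agent" } },
        ],
      },
      scopeSpans: [
        {
          scope: { name: "example-instrumentation" },
          spans: [
            {
              traceId: "5b8efff798038103d269b633813fc60c",
              spanId: "eee19b7ec3c1b174",
              name: "llm.request",
              kind: 1, // SPAN_KIND_INTERNAL
              startTimeUnixNano: String(now),
              endTimeUnixNano: String(now + 1_000_000),
            },
          ],
        },
      ],
    },
  ],
};

// An OTLP/HTTP collector receives this as POST <endpoint>/v1/traces
// with Content-Type: application/json.
```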

Privacy by architecture

In local mode, your data stays on your machine. All agent messages, token counts, costs, and telemetry are stored locally. In cloud mode, only OpenTelemetry metadata (model, tokens, latency) is sent — message content is never collected.

In cloud mode, the blind proxy physically cannot read your prompts. This is fundamentally different from services saying "trust us."

The only thing Manifest collects is anonymous product analytics (hashed machine ID, OS platform, package version, event names) to help improve the project. No personally identifiable information or agent data is included.
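Based on that description, an anonymous event could be shaped roughly like the sketch below. The field names and the hash input are assumptions for illustration, not Manifest's actual implementation.

```typescript
import { createHash } from "node:crypto";
import { hostname, platform } from "node:os";

// Illustrative anonymous analytics event: hashed machine ID,
// OS platform, package version, event name -- nothing else.
interface AnalyticsEvent {
  machineId: string;           // SHA-256 digest: irreversible, no raw hostname
  platform: NodeJS.Platform;
  version: string;
  event: string;
}

function buildEvent(event: string, version: string): AnalyticsEvent {
  return {
    machineId: createHash("sha256").update(hostname()).digest("hex"),
    platform: platform(),
    version,
    event,
  };
}
```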

Opting out:


MANIFEST_TELEMETRY_OPTOUT=1

Or add "telemetryOptOut": true to ~/.openclaw/manifest/config.json.

Manifest vs OpenRouter

| | Manifest | OpenRouter |
| ------------ | ---------------------------------------------------------- | ------------------------------------------------------------- |
| Architecture | Runs locally — data stays on your machine | Cloud proxy — all traffic routes through their servers |
| Cost | Free | 5% fee on every API call |
| Source code | MIT licensed, fully open | Proprietary |
| Data privacy | 100% local routing and logging | Your prompts and responses pass through a third party |
| Transparency | Open scoring algorithm — see exactly why a model is chosen | Black box routing, no visibility into how models are selected |

Supported Providers

Manifest supports 300+ models across all major LLM providers. Every provider supports smart routing, real-time cost tracking, and OTLP telemetry.

| Provider | Models |
|----------|--------|
| OpenAI | gpt-5.3, gpt-4.1, o3, o4-mini + 54 more |
| Anthropic | claude-opus-4-6, claude-sonnet-4.5, claude-haiku-4.5 + 14 more |
| Google Gemini | gemini-2.5-pro, gemini-2.5-flash, gemini-3-pro + 19 more |
| DeepSeek | deepseek-v3, deepseek-r1 + 11 more |
| xAI | grok-4, grok-3, grok-3-mini + 8 more |
| Mistral AI | mistral-large, codestral, devstral + 26 more |
| Qwen (Alibaba) | qwen3-235b, qwen3-coder, qwq-32b + 42 more |
| MiniMax | minimax-m2.5, minimax-m1, minimax-m2 + 5 more |
| Kimi (Moonshot) | kimi-k2, kimi-k2.5 + 3 more |
| Amazon Nova | nova-pro, nova-lite, nova-micro + 5 more |
| Z.ai (Zhipu) | glm-5, glm-4.7, glm-4.5 + 5 more |
| OpenRouter | 300+ models from all providers |
| Ollama | Run any model locally (Llama, Gemma, Mistral, …) |

Contributing

Manifest is open source under the MIT license. See CONTRIBUTING.md for the development setup, architecture notes, and workflow. Join the conversation on Discord.

> Want a hosted version instead? Check out app.manifest.build


License

MIT