Dashboard
Usage data is recorded each time a request is routed through manifest/auto.
The main dashboard shows:
- Total messages — number of LLM calls.
- Total tokens — input, output, and cache read tokens.
- Total cost — calculated from model pricing and token counts.
- Messages over time — LLM calls charted by period.
- Cost over time — spend charted by period.
- Model distribution — which models handled what percentage of requests.
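As a rough sketch of how these headline numbers can be derived from raw usage records (the record shape and field names below are hypothetical, not Manifest's actual schema):

```python
from collections import Counter

# Hypothetical usage records; field names are illustrative only.
records = [
    {"model": "model-a", "input_tokens": 1200, "output_tokens": 300, "cache_read_tokens": 0,   "cost_usd": 0.0081},
    {"model": "model-b", "input_tokens": 400,  "output_tokens": 120, "cache_read_tokens": 200, "cost_usd": 0.0006},
    {"model": "model-a", "input_tokens": 900,  "output_tokens": 250, "cache_read_tokens": 500, "cost_usd": 0.0060},
]

# Total messages: one record per LLM call.
total_messages = len(records)

# Total tokens: input + output + cache read, summed across calls.
total_tokens = sum(
    r["input_tokens"] + r["output_tokens"] + r["cache_read_tokens"] for r in records
)

# Total cost: sum of per-call costs.
total_cost = sum(r["cost_usd"] for r in records)

# Model distribution: share of requests handled by each model.
counts = Counter(r["model"] for r in records)
distribution = {model: count / total_messages for model, count in counts.items()}

print(total_messages)        # 3
print(total_tokens)          # 3870
print(round(total_cost, 4))  # 0.0147
```

The time-series charts are the same sums, grouped by period instead of taken over all records.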
Metrics captured
| Metric | Description |
|---|---|
| Messages | Number of LLM calls |
| Input tokens | Prompt tokens sent |
| Output tokens | Completion tokens received |
| Cache read tokens | Tokens served from cache |
| Cost (USD) | Calculated from model pricing × tokens |
| Duration (ms) | Round-trip latency |
| Model | Which LLM handled the request |
| Routing tier | simple / standard / complex / reasoning |
| Agent name | The OpenClaw agent that sent the request |
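A minimal sketch tying these metrics together: a per-request record holding the captured fields, plus the cost rule from the table (model pricing × tokens). Field names and prices here are assumptions for illustration, not Manifest's actual schema or rates.

```python
from dataclasses import dataclass

@dataclass
class UsageRecord:
    model: str              # which LLM handled the request
    routing_tier: str       # simple / standard / complex / reasoning
    agent_name: str         # the agent that sent the request
    input_tokens: int       # prompt tokens sent
    output_tokens: int      # completion tokens received
    cache_read_tokens: int  # tokens served from cache
    duration_ms: int        # round-trip latency

# Illustrative USD prices per 1M tokens: (input, output, cache read).
# NOT Manifest's actual pricing data.
PRICING = {"example-model": (3.00, 15.00, 0.30)}

def cost_usd(r: UsageRecord) -> float:
    """Cost = model pricing x token counts, per the table above."""
    inp, out, cache = PRICING[r.model]
    return (r.input_tokens * inp
            + r.output_tokens * out
            + r.cache_read_tokens * cache) / 1_000_000

r = UsageRecord("example-model", "standard", "my-agent", 1000, 200, 500, 830)
print(round(cost_usd(r), 5))  # 0.00615
```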
How cost is calculated
Manifest maintains pricing data for 300+ models across Anthropic, OpenAI, Google, DeepSeek, and other providers. Each call's cost is computed from the model's per-token prices and the recorded token counts.
Message log
Every LLM call is recorded. You can browse the log with:
- Pagination across all requests.
- Filters for agent, model, and time range.
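A toy sketch of the browse behavior, filtering and paginating an in-memory list (the helper and field names are hypothetical, not Manifest's API):

```python
from datetime import datetime, timedelta

# Hypothetical in-memory log entries; Manifest stores these in a database.
now = datetime(2025, 1, 15, 12, 0)
log = [
    {"agent": "coder",  "model": "model-a", "at": now - timedelta(hours=1)},
    {"agent": "tester", "model": "model-b", "at": now - timedelta(hours=5)},
    {"agent": "coder",  "model": "model-b", "at": now - timedelta(days=2)},
]

def browse(entries, agent=None, model=None, since=None, page=1, page_size=2):
    """Filter by agent, model, and time range, then paginate."""
    hits = [
        e for e in entries
        if (agent is None or e["agent"] == agent)
        and (model is None or e["model"] == model)
        and (since is None or e["at"] >= since)
    ]
    start = (page - 1) * page_size
    return hits[start:start + page_size]

# All requests by the "coder" agent in the last 24 hours:
recent = browse(log, agent="coder", since=now - timedelta(hours=24))
print(len(recent))  # 1
```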
Data storage
- Cloud: PostgreSQL hosted at app.manifest.build. Data persists across devices and is accessible from any browser.
- Local