Overview dashboard

The dashboard shows:
  • Total messages — number of LLM calls.
  • Total tokens — input, output, and cache read tokens.
  • Total cost — calculated from model pricing and token counts.
  • Messages over time — chart of LLM calls by period.
  • Cost over time — chart of spend by period.
  • Model distribution — breakdown of which models handled requests.

Metrics captured

Metric             Description
Messages           Number of LLM calls
Input tokens       Prompt tokens sent
Output tokens      Completion tokens received
Cache read tokens  Tokens served from cache
Cost (USD)         Calculated from model pricing × tokens
Duration (ms)      Round-trip latency
Model              Which LLM handled the request
Routing tier       simple / standard / complex / reasoning
Agent name         The OpenClaw agent that sent the request
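The metrics above can be pictured as one record per LLM call. A minimal sketch in Python, assuming illustrative field names that mirror the table (this is not Manifest's actual schema):

```python
from dataclasses import dataclass

@dataclass
class MessageRecord:
    # Field names are illustrative, chosen to mirror the metrics table.
    model: str              # which LLM handled the request
    agent_name: str         # OpenClaw agent that sent the request
    routing_tier: str       # "simple" | "standard" | "complex" | "reasoning"
    input_tokens: int       # prompt tokens sent
    output_tokens: int      # completion tokens received
    cache_read_tokens: int  # tokens served from cache
    duration_ms: float      # round-trip latency
    cost_usd: float         # model pricing × tokens

# Example record for a single call (values are made up):
record = MessageRecord(
    model="claude-sonnet-4", agent_name="main",
    routing_tier="standard", input_tokens=1200,
    output_tokens=350, cache_read_tokens=0,
    duration_ms=842.0, cost_usd=0.0089,
)
```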

How cost is calculated

Manifest maintains a pricing table for 40+ models (Anthropic, OpenAI, Google, DeepSeek, and more).
Cost = input_tokens × input_price + output_tokens × output_price
Pricing is refreshed automatically in the background. In local mode, pricing syncs from OpenRouter.
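The formula above can be sketched as follows. The per-million-token prices here are illustrative placeholders, not Manifest's actual pricing table:

```python
# Illustrative per-million-token prices (USD). Real values come from
# Manifest's pricing table, which syncs from OpenRouter in local mode.
PRICING = {
    "claude-sonnet-4": {"input": 3.00, "output": 15.00},
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost = input_tokens × input_price + output_tokens × output_price."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# 10,000 input + 2,000 output tokens at the example pricing above:
print(cost_usd("claude-sonnet-4", 10_000, 2_000))  # 0.06
```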

Message log

Every LLM call is recorded with full metadata. The message log provides:
  • Paginated list of all requests.
  • Filters by agent, model, and time range.
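The filter-then-paginate behavior can be sketched in a few lines. This is an in-memory illustration of the logic, not Manifest's query API; the dict keys are assumed for the example:

```python
def filter_messages(messages, agent=None, model=None, start=None, end=None,
                    page=1, page_size=50):
    """Filter by agent, model, and time range, then return one page.

    `messages` is a list of dicts with "agent", "model", and "timestamp"
    keys (an illustrative shape, not Manifest's actual schema).
    """
    rows = [
        m for m in messages
        if (agent is None or m["agent"] == agent)
        and (model is None or m["model"] == model)
        and (start is None or m["timestamp"] >= start)
        and (end is None or m["timestamp"] <= end)
    ]
    offset = (page - 1) * page_size
    return rows[offset:offset + page_size]
```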

Data storage

Data is stored in a PostgreSQL database hosted at app.manifest.build. It persists across devices and is accessible from any browser.