Dashboard

Usage data is recorded each time a request is routed through manifest/auto. The main dashboard shows:
  • Total messages — number of LLM calls.
  • Total tokens — input, output, and cache read tokens.
  • Total cost — calculated from model pricing and token counts.
  • Messages over time — LLM calls charted by period.
  • Cost over time — spend charted by period.
  • Model distribution — which models handled what percentage of requests.

Metrics captured

  • Messages — number of LLM calls.
  • Input tokens — prompt tokens sent.
  • Output tokens — completion tokens received.
  • Cache read tokens — tokens served from cache.
  • Cost (USD) — calculated from model pricing × token counts.
  • Duration (ms) — round-trip latency.
  • Model — which LLM handled the request.
  • Routing tier — simple / standard / complex / reasoning.
  • Agent name — the OpenClaw agent that sent the request.
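One captured request can be pictured as a flat record holding the metrics above. This is an illustrative sketch only; the field names and values below are hypothetical, not Manifest's actual schema:

```python
# One usage record, mirroring the metrics listed above.
# Field names and values are illustrative, not the real schema.
record = {
    "messages": 1,                 # one LLM call
    "input_tokens": 1200,          # prompt tokens sent
    "output_tokens": 350,          # completion tokens received
    "cache_read_tokens": 800,      # tokens served from cache
    "cost_usd": 0.00885,           # model pricing × token counts
    "duration_ms": 1430,           # round-trip latency
    "model": "example-model",      # which LLM handled the request
    "routing_tier": "standard",    # simple / standard / complex / reasoning
    "agent_name": "example-agent", # the OpenClaw agent that sent it
}
```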

How cost is calculated

Manifest has pricing data for 300+ models across Anthropic, OpenAI, Google, DeepSeek, and others.
Cost = input_tokens × input_price + output_tokens × output_price
Pricing updates in the background. In local mode, it syncs from OpenRouter.
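The formula above can be sketched as a small function. The per-million-token pricing convention and the example prices here are assumptions for illustration, not Manifest's actual pricing table:

```python
def cost_usd(input_tokens: int, output_tokens: int,
             input_price_per_mtok: float, output_price_per_mtok: float) -> float:
    """Cost = input_tokens × input_price + output_tokens × output_price.

    Prices are assumed to be quoted per million tokens (a common
    convention); the caller supplies them from the pricing data.
    """
    return (input_tokens * input_price_per_mtok
            + output_tokens * output_price_per_mtok) / 1_000_000

# Hypothetical prices: $3 / Mtok input, $15 / Mtok output.
# 1200 × 3 + 350 × 15 = 8850 micro-dollars → $0.00885
print(cost_usd(1200, 350, 3.0, 15.0))
```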

Message log

Every LLM call is recorded. You can browse the log with:
  • Pagination across all requests.
  • Filters for agent, model, and time range.
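The filters above amount to matching records on agent, model, and a time cutoff. A minimal client-side sketch of that logic, assuming records are dicts with ISO-8601 timestamps (the real dashboard filters server-side):

```python
def filter_log(records, agent=None, model=None, since=None):
    """Keep records matching the given agent, model, and time range.

    `since` is an ISO-8601 timestamp string; ISO-8601 strings in the
    same timezone compare correctly as plain strings. All filters are
    optional, mirroring the dashboard's filter controls.
    """
    return [
        r for r in records
        if (agent is None or r["agent_name"] == agent)
        and (model is None or r["model"] == model)
        and (since is None or r["timestamp"] >= since)
    ]
```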

Data storage

Usage data is stored in PostgreSQL hosted at app.manifest.build. It persists across devices and is accessible from any browser.