Documentation Index
Fetch the complete documentation index at: https://manifest.build/docs/llms.txt
Use this file to discover all available pages before exploring further.
What is routing?
Instead of sending every request to the same model, use Manifest to route your queries to different models and providers.
The 4 routing types
Default
Your default model. Switch easily using Manifest UI.
Complexity
Redirect queries on-the-fly to tiers based on their complexity.
Task-specific
Detect specific task types on-the-fly (coding, trading, image generation…).
Custom
Isolate requests and assign them to tiers based on their HTTP headers.
Default
Every agent has a default model: the one Manifest falls back to when no other routing rule applies. You set it once in the dashboard Routing page and can swap it at any time without touching your code. All requests that don’t match a complexity, task-specific, or custom header rule are sent here.
Complexity
Manifest scores each incoming prompt across 23 dimensions and assigns it to one of four tiers: simple, standard, complex, or reasoning. Each tier maps to a different model in the dashboard, so cheap queries go to cheap models and hard queries go to the best ones automatically. Scoring runs in under 2 ms with no external calls.
How scoring works
The 23 dimensions are grouped into three categories; the same scoring pipeline feeds both the complexity tier and the specificity detector:
- Keyword-based (14): scans the prompt for patterns like “prove”, “write function”, “what is”, etc.
- Structural (5): analyzes token count, nesting depth, code-to-prose ratio, conditional logic, and constraint density.
- Contextual (4): considers expected output length, repetition requests, tool count, and conversation depth.
Each dimension has a weight, and the weighted sum maps to a tier via threshold boundaries. A confidence score (0–1) indicates how clearly the request fits its tier.
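The weighted-sum mapping can be sketched as follows. Note that the dimension names, weights, and threshold values here are illustrative assumptions, not Manifest’s actual scoring parameters:

```python
# Illustrative sketch of weighted-sum tier scoring. The dimensions,
# weights, and threshold boundaries are hypothetical placeholders.
WEIGHTS = {"proofKeywords": 3.0, "codeGeneration": 2.0, "tokenCount": 1.0}
THRESHOLDS = [(0.25, "simple"), (0.50, "standard"), (0.75, "complex")]

def score_to_tier(dims: dict[str, float]) -> str:
    """dims holds per-dimension scores in 0..1; returns the mapped tier."""
    total = sum(WEIGHTS.values())
    score = sum(dims.get(d, 0.0) * w for d, w in WEIGHTS.items()) / total
    for upper_bound, tier in THRESHOLDS:
        if score < upper_bound:
            return tier
    return "reasoning"  # anything above the last boundary
```

A prompt with no matching dimensions scores 0 and lands in simple; maxing every dimension lands in reasoning.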
Session momentum
Manifest remembers the last 5 tier assignments (30-minute TTL). Short follow-up messages (“yes”, “do it”) inherit momentum from the conversation, so they don’t drop to a cheaper tier unnecessarily.
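The inheritance behavior could be implemented roughly like this. The short-prompt heuristic (word count) is an assumption for illustration; the history size and TTL come from the paragraph above:

```python
import time
from collections import deque

TIER_RANK = {"simple": 0, "standard": 1, "complex": 2, "reasoning": 3}

class SessionMomentum:
    """Sketch: short follow-ups inherit the highest recent live tier."""
    def __init__(self, ttl_seconds: int = 30 * 60, history_size: int = 5):
        self.ttl = ttl_seconds
        self.history = deque(maxlen=history_size)  # (tier, timestamp) pairs

    def record(self, tier: str) -> None:
        self.history.append((tier, time.time()))

    def apply(self, scored_tier: str, prompt: str) -> str:
        # Hypothetical heuristic: treat very short prompts as follow-ups.
        if len(prompt.split()) > 4:
            return scored_tier
        now = time.time()
        live = [t for t, ts in self.history if now - ts < self.ttl]
        if not live:
            return scored_tier
        best = max(live, key=TIER_RANK.__getitem__)
        return best if TIER_RANK[best] > TIER_RANK[scored_tier] else scored_tier
```

After a complex turn, a follow-up like “do it” keeps the complex tier instead of being scored as simple.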
Tier overrides
Some signals force a minimum tier regardless of the score:
| Signal | Minimum tier |
|---|---|
| Tools detected | standard |
| Large context (>50k tokens) | complex |
| Formal logic keywords | reasoning |
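The override table above amounts to a floor on the scored tier, which might be applied like this (the signal names are taken from the table; the function shape is a sketch):

```python
TIER_RANK = {"simple": 0, "standard": 1, "complex": 2, "reasoning": 3}

def apply_overrides(tier: str, *, has_tools: bool = False,
                    context_tokens: int = 0,
                    formal_logic: bool = False) -> str:
    """Raise the scored tier to the highest floor triggered by a signal."""
    floors = []
    if has_tools:
        floors.append("standard")       # tools detected
    if context_tokens > 50_000:
        floors.append("complex")        # large context
    if formal_logic:
        floors.append("reasoning")      # formal logic keywords
    for floor in floors:
        if TIER_RANK[floor] > TIER_RANK[tier]:
            tier = floor
    return tier
```

Overrides only ever raise a tier; a request already scored as reasoning is unaffected.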
Task-specific
On top of complexity, Manifest detects what kind of task the request is about, across nine categories (coding, web browsing, data analysis, image generation, video generation, social media, email, calendar, trading). When a category is detected, the request is routed to the model you’ve pinned for that category, regardless of the complexity tier. You can toggle each category on or off per agent in the dashboard.
Available categories
| Category | Covers |
|---|---|
| Coding | Write, debug, and refactor code |
| Web browsing | Navigate pages, search, and extract content |
| Data analysis | Crunch numbers, run stats, build charts |
| Image generation | Create and edit images, logos, visuals |
| Video generation | Produce clips, animations, and edits |
| Social media | Draft posts, plan content, track engagement |
| Email | Compose, reply, and manage your inbox |
| Calendar | Book meetings, check availability, reschedule |
| Trading | Analyze markets, place trades, track positions |
How detection works
Two signals feed the detector:
- Keyword dimensions. The same trie scan that feeds the complexity score also counts matches in category-specific dimensions (e.g. `codeGeneration` and `technicalTerms` count toward Coding, `webBrowsing` toward Web browsing, `emailManagement` toward Email).
- Tool names. When a request includes tool definitions, names with known prefixes boost the matching category: `browser_*`/`playwright_*` → Web browsing, `gmail_*`/`outlook_*` → Email, `gcal_*`/`calendly_*` → Calendar, and so on.
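The tool-name signal can be sketched as a prefix lookup. The table below contains only the example prefixes named above; the real mapping may cover more tools:

```python
# Prefix → category table, built from the examples in the docs above.
PREFIX_CATEGORIES = {
    "browser_": "web-browsing", "playwright_": "web-browsing",
    "gmail_": "email", "outlook_": "email",
    "gcal_": "calendar", "calendly_": "calendar",
}

def category_boosts(tool_names: list[str]) -> dict[str, int]:
    """Count how many tool names boost each category via a known prefix."""
    boosts: dict[str, int] = {}
    for name in tool_names:
        for prefix, category in PREFIX_CATEGORIES.items():
            if name.startswith(prefix):
                boosts[category] = boosts.get(category, 0) + 1
    return boosts
```

A request defining `browser_navigate` plus two `gmail_*` tools would boost both Web browsing and (more strongly) Email.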
Custom
Send an `x-manifest-tier` or `x-manifest-specificity` HTTP header from your client to force a specific tier or category on any request. Manifest uses the header value directly (confidence 1.0) and skips automatic scoring. This is useful for isolating agent subtasks, A/B testing models, or integrating with orchestration frameworks that classify requests themselves. See the full request headers reference for accepted values.