Anything Engine
14-class dispatch spec — why it replaces the 6-tool router.

Fourteen Classes, One Front Door
Mark's pivot, Apr 28: away from "leverage loops vs outcomes" — that mental model is dead. The new dispatch front door is 14 outcome request classifications. Find_investors is class #3 and the laser-focus reference implementation.
Core idea
Every user query goes through a classifier before any tool runs. The classifier returns:
- Class — one of 14 fixed labels (find_investors, find_talent, find_customers, …)
- Result count — how many results we'll attempt to return (sets UI expectations)
- Confidence — does the user really mean this, or do we need to ask back
The system gates dispatch until classification + result count are known. No more "throw the query at a router and hope". The Anything Engine forces a yes/no checkpoint at the front door.
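The checkpoint above can be sketched as a small gate function. The class list subset, field names, and the 0.80 confidence floor are illustrative assumptions, not a confirmed contract:

```python
# Sketch of the classifier's output contract and the front-door gate.
# Field names and the confidence floor are assumptions, not confirmed API.
from dataclasses import dataclass

CLASSES = frozenset({"find_investors", "find_talent", "find_customers"})  # subset for illustration

@dataclass(frozen=True)
class Classification:
    cls: str           # one of the 14 fixed labels
    result_count: int  # how many results the UI should expect
    confidence: float  # 0.0-1.0; low confidence means ask the user back

CONFIDENCE_FLOOR = 0.80  # assumed threshold; confirm the real gate value with Mark

def gate(c: Classification) -> str:
    """Dispatch only when class and count are known and confidence clears the floor."""
    if c.cls not in CLASSES or c.result_count <= 0:
        return "reject"
    if c.confidence < CONFIDENCE_FLOOR:
        return "ask_back"  # clarify intent before any tool runs
    return "dispatch"
```

The point of the gate returning a string rather than dispatching directly: the yes/no checkpoint stays observable and testable, separate from tool execution.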
The 14 classes (placeholder list — confirm with Mark)
The exact list lives in Mark's Mintlify. Working list:
- find_investors
- find_talent
- find_customers
- research_person
- research_company
- research_topic
- find_partners
- find_advisors
- find_co-investors
- find_journalists / press
- find_event_attendees
- find_warm_intros
- summarize_meeting
- plan_outcome
Confirm with Mark on Apr 29.
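The working list above, expressed as the fixed enum the classifier would emit against. Labels are the placeholders from this doc (with `find_co-investors` and `find_journalists / press` normalized to identifier-safe values), pending Mark's confirmed list:

```python
# Working 14-class enum. Values are placeholders from the working list;
# two labels are normalized to identifier-safe strings (assumption).
from enum import Enum

class OutcomeClass(str, Enum):
    FIND_INVESTORS = "find_investors"
    FIND_TALENT = "find_talent"
    FIND_CUSTOMERS = "find_customers"
    RESEARCH_PERSON = "research_person"
    RESEARCH_COMPANY = "research_company"
    RESEARCH_TOPIC = "research_topic"
    FIND_PARTNERS = "find_partners"
    FIND_ADVISORS = "find_advisors"
    FIND_CO_INVESTORS = "find_co_investors"
    FIND_PRESS = "find_press"
    FIND_EVENT_ATTENDEES = "find_event_attendees"
    FIND_WARM_INTROS = "find_warm_intros"
    SUMMARIZE_MEETING = "summarize_meeting"
    PLAN_OUTCOME = "plan_outcome"

assert len(OutcomeClass) == 14  # the enum is the contract: exactly 14 labels
```

Keeping the enum in code (not in prompts) is what makes the classifier's output space deterministic: an unrecognized label raises instead of silently routing.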
Why this beats the old 6-tool router
The current Robert Lab agent (endpoint 8349) routes to 6 tools, then the tool builds a Cypher query, then the synthesizer drafts. It works, but:
- No yes/no checkpoint — the router commits to a tool before the user confirms intent
- No result-count contract — the UI doesn't know how many cards to render
- Tool boundaries are leaky — find_investors and find_co-investors overlap, leading to wrong-tool-picked errors
Anything Engine pushes those decisions one layer up, into a deterministic classifier with a fixed enum.
Find_investors as the reference implementation
This is what we're building first. Once it's solid, the same pattern ports to the other 13 classes.
Pipeline:
[user query]
→ [Anything Engine classifier] → class=find_investors, count=25, confidence=0.92
→ [thesis extractor] → pull thesis from user's deck/profile
→ [AlloyDB ScaNN hybrid query] → WHERE filters + 6-dim vector ORDER BY
→ [Haiku 4.5 WHY pass] → one-paragraph rationale per investor
→ [Crayon stream] → contact_card templates streaming to UI
→ [Zep ingest] → thread.add_messages with the response
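The pipeline above, as a minimal orchestration sketch with every stage stubbed. All function bodies are placeholders standing in for the real services (classifier, extractor, AlloyDB, Haiku, Crayon, Zep); payload shapes are assumptions:

```python
# Sketch of the find_investors pipeline with stubbed stages. Nothing here
# is the real service API; each stub marks where a network call would go.
from typing import Iterator

def classify(query: str) -> dict:
    # stands in for the Anything Engine classifier
    return {"class": "find_investors", "count": 2, "confidence": 0.92}

def extract_thesis(user_id: str) -> str:
    # stands in for the thesis extractor (deck/profile -> thesis text)
    return "pre-seed B2B SaaS, US"

def scann_hybrid_search(thesis: str, limit: int) -> list[dict]:
    # stands in for the AlloyDB ScaNN hybrid query (WHERE + vector ORDER BY)
    return [{"name": f"Investor {i}"} for i in range(limit)]

def why_pass(row: dict, thesis: str) -> str:
    # stands in for the Haiku 4.5 WHY synthesis
    return f"{row['name']} matches thesis: {thesis}"

def run_find_investors(query: str, user_id: str) -> Iterator[dict]:
    c = classify(query)
    if c["class"] != "find_investors":
        raise ValueError("wrong class for this pipeline")
    thesis = extract_thesis(user_id)
    for row in scann_hybrid_search(thesis, limit=c["count"]):
        # Crayon stream: each contact_card is yielded as soon as it is ready
        yield {"template": "contact_card", "investor": row, "why": why_pass(row, thesis)}
    # final step (not shown): Zep ingest via thread.add_messages(...)
```

Yielding cards one at a time mirrors the Crayon streaming contract: the UI can render the first contact_card while later WHY passes are still running.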
What's editable, what's not
- Editable in prompts/: classifier prompt, thesis extractor prompt, WHY synthesizer prompt, voice rules
- Not editable from prompts: the 14-class enum, the AlloyDB schema, the Crayon template registry
Model selection — Apr 29 directive from Mark
"Opus is the wrong tier for the interviewer turn. The interview script logic doesn't need frontier reasoning — it needs reliable tool calls, schema-clean outputs, and conversational pace. Haiku 4.5 wins all three." — Mark Pederson, Apr 29 1:25 AM
Practical implications for the pipeline:
| Stage | Current | Recommended | Why |
|---|---|---|---|
| Classifier (8400) | OpenRouter Llama 3.3 70B | Anthropic Haiku 4.5 | Sub-500ms TTFT vs 1–2s; reliable JSON; 5× cheaper than Opus tier |
| Role/thesis extractor (8402) | OpenRouter Llama 3.3 70B | Anthropic Haiku 4.5 | Trivial extraction — Haiku is the right tier |
| WHY synthesis (8401) | OpenRouter Llama 3.3 70B | Anthropic Haiku 4.5 | Conversational pace matters — sub-500ms feels conversational, 1–2s feels like the model is "thinking" |
| Tool calls (any) | n/a today | Anthropic Haiku 4.5 | Schema-clean tool outputs, fast |
Numbers (per Mark): Haiku 4.5 ≈ 80–120 tps output, sub-500ms TTFT, $1 / $5 per 1M tokens. Opus 4.5/4.6 ≈ 60–80 tps, 1–2s TTFT, $5 / $25 per 1M.
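A back-of-envelope check on the "5× cheaper" claim using the per-1M prices quoted above (Haiku $1 in / $5 out, Opus $5 in / $25 out). The 2,000-in / 500-out token counts per request are assumed, purely to illustrate the gap:

```python
# Per-request cost at the quoted prices; token counts are assumptions.
def request_cost(in_tokens: int, out_tokens: int, in_per_m: float, out_per_m: float) -> float:
    return in_tokens / 1e6 * in_per_m + out_tokens / 1e6 * out_per_m

haiku = request_cost(2000, 500, 1.0, 5.0)   # ~ $0.0045 per request
opus = request_cost(2000, 500, 5.0, 25.0)   # ~ $0.0225 per request
assert abs(opus / haiku - 5.0) < 1e-9       # exactly 5x at these prices
```

Because both the input and output prices differ by the same factor, the ratio holds at any input/output mix; the latency argument (TTFT, tps) is the mix-dependent part.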
Status: directive logged, swap not yet executed. Not breaking the demo before the call. Open the model swap conversation with Mark to confirm what we keep on OpenRouter (e.g. Fireworks/Together fallback for resiliency) vs what moves to Anthropic direct.
Open questions for Apr 29
- Lock the 14-class list with Mark
- Should the classifier emit a "create_outcome?" hint when the query smells goal-shaped? (Apr 20 directive)
- Recency-of-contact filter — does that live in the classifier or in find_investors?
- Model swap: full swap to Haiku 4.5 (per Apr 29 directive) vs hybrid (Haiku for classify + extract, OpenRouter for WHY where the prompt is heaviest)?
References
- Apr 28 sync notes: ~/.claude/projects/.../memory/april-28-mark-sync.md
- Apr 22 LSI dogfood: https://orbiter-status-report.pages.dev/lsi-dogfood
- Apr 16 agent router: https://orbiter-status-report.pages.dev/agent-router