Anything Engine · Spec

14-class dispatch spec — why it replaces the 6-tool router.

Fourteen Classes, One Front Door

Mark's pivot (Apr 28): the "leverage loops vs outcomes" mental model is dead. The new dispatch front door is 14 outcome request classifications. find_investors is class #3 and the laser-focus reference implementation.

Core idea

Every user query goes through a classifier before any tool runs. The classifier returns:

  1. Class — one of 14 fixed labels (find_investors, find_talent, find_customers, …)
  2. Result count — how many results we'll attempt to return (sets UI expectations)
  3. Confidence — does the user really mean this, or do we need to ask back

The system gates dispatch until classification + result count are known. No more "throw the query at a router and hope". The Anything Engine forces a yes/no checkpoint at the front door.
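A minimal sketch of that front-door checkpoint. The field names, the threshold value, and the `gate` function are all illustrative assumptions, not the actual endpoint contract:

```python
from dataclasses import dataclass

@dataclass
class Classification:
    cls: str           # one of the 14 fixed labels
    result_count: int  # how many results the UI should expect
    confidence: float  # 0.0-1.0, does the user really mean this

CONFIDENCE_FLOOR = 0.8  # assumed threshold; would be tuned per class

def gate(c: Classification):
    """Dispatch only when class + count are known and confidence clears
    the floor; otherwise ask the user back instead of guessing."""
    if c.confidence < CONFIDENCE_FLOOR:
        return ("ask_back", c.cls)
    return ("dispatch", c.cls, c.result_count)
```

The point is that nothing downstream runs until `gate` returns `"dispatch"`; the old router had no equivalent refusal path.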

The 14 classes (placeholder list — confirm with Mark)

The exact list lives in Mark's Mintlify. Working list:

  1. find_investors
  2. find_talent
  3. find_customers
  4. research_person
  5. research_company
  6. research_topic
  7. find_partners
  8. find_advisors
  9. find_co-investors
  10. find_journalists / press
  11. find_event_attendees
  12. find_warm_intros
  13. summarize_meeting
  14. plan_outcome

Confirm with Mark on Apr 29.
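Since the enum is fixed (not editable from prompts), it can live as a literal type the classifier output is validated against. A sketch using the working list above; labels shift once Mark confirms:

```python
from enum import Enum

# Working list only -- not frozen until the Apr 29 confirmation with Mark.
class OutcomeClass(str, Enum):
    FIND_INVESTORS = "find_investors"
    FIND_TALENT = "find_talent"
    FIND_CUSTOMERS = "find_customers"
    RESEARCH_PERSON = "research_person"
    RESEARCH_COMPANY = "research_company"
    RESEARCH_TOPIC = "research_topic"
    FIND_PARTNERS = "find_partners"
    FIND_ADVISORS = "find_advisors"
    FIND_CO_INVESTORS = "find_co-investors"
    FIND_JOURNALISTS = "find_journalists"  # / press
    FIND_EVENT_ATTENDEES = "find_event_attendees"
    FIND_WARM_INTROS = "find_warm_intros"
    SUMMARIZE_MEETING = "summarize_meeting"
    PLAN_OUTCOME = "plan_outcome"
```

Validating with `OutcomeClass(raw_label)` turns a hallucinated class label into a hard error at the front door rather than a wrong-tool dispatch downstream.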

Why this beats the old 6-tool router

The current Robert Lab agent (endpoint 8349) routes to 6 tools, then the tool builds a Cypher query, then the synthesizer drafts. It works, but:

  • No yes/no checkpoint — the router commits to a tool before the user confirms intent
  • No result-count contract — the UI doesn't know how many cards to render
  • Tool boundaries are leaky: find_investors and find_co-investors overlap, leading to wrong-tool-picked errors

Anything Engine pushes those decisions one layer up, into a deterministic classifier with a fixed enum.

find_investors as the reference implementation

This is what we're building first. Once it's solid, the same pattern ports to the other 13 classes.

Pipeline:
[user query]
  → [Anything Engine classifier]  →  class=find_investors, count=25, confidence=0.92
  → [thesis extractor]              →  pull thesis from user's deck/profile
  → [AlloyDB ScaNN hybrid query]    →  WHERE filters + 6-dim vector ORDER BY
  → [Haiku 4.5 WHY pass]            →  one-paragraph rationale per investor
  → [Crayon stream]                 →  contact_card templates streaming to UI
  → [Zep ingest]                    →  thread.add_messages with the response
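The pipeline above can be sketched as a thin orchestrator. Every stage here is a stand-in callable for the real service (8400 classifier, thesis extractor, AlloyDB ScaNN query, Haiku WHY pass, Crayon stream, Zep ingest); names and payload shapes are illustrative only:

```python
def run_find_investors(query: str, stages: dict) -> list:
    # Front door: class, result count, confidence must be known first.
    c = stages["classify"](query)
    if c["class"] != "find_investors":
        raise ValueError(f"wrong pipeline for class {c['class']}")
    thesis = stages["extract_thesis"](query)                 # from the user's deck/profile
    rows = stages["hybrid_query"](thesis, limit=c["count"])  # WHERE filters + vector ORDER BY
    # One-paragraph WHY rationale attached per investor row.
    cards = [dict(r, why=stages["why"](r, thesis)) for r in rows]
    stages["stream"](cards)         # contact_card templates to the UI
    stages["ingest"](query, cards)  # thread.add_messages with the response
    return cards
```

Keeping stages as injected callables means the same skeleton ports to the other 13 classes with a different query builder and template.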

What's editable, what's not

  • Editable in prompts/: classifier prompt, thesis extractor prompt, WHY synthesizer prompt, voice rules
  • Not editable from prompts: the 14-class enum, the AlloyDB schema, the Crayon template registry

Model selection — Apr 29 directive from Mark

"Opus is the wrong tier for the interviewer turn. The interview script logic doesn't need frontier reasoning — it needs reliable tool calls, schema-clean outputs, and conversational pace. Haiku 4.5 wins all three." — Mark Pederson, Apr 29 1:25 AM

Practical implications for the pipeline:

| Stage | Current | Recommended | Why |
|---|---|---|---|
| Classifier (8400) | OpenRouter Llama 3.3 70B | Anthropic Haiku 4.5 | Sub-500ms TTFT vs 1–2s; reliable JSON; 5× cheaper than the Opus tier |
| Role/thesis extractor (8402) | OpenRouter Llama 3.3 70B | Anthropic Haiku 4.5 | Trivial extraction; Haiku is the right tier |
| WHY synthesis (8401) | OpenRouter Llama 3.3 70B | Anthropic Haiku 4.5 | Pace matters; sub-500ms feels conversational, 1–2s feels like the model is "thinking" |
| Tool calls (any) | n/a today | Anthropic Haiku 4.5 | Schema-clean tool outputs, fast |

Numbers (per Mark): Haiku 4.5 ≈ 80–120 tps output, sub-500ms TTFT, $1 / $5 per 1M tokens. Opus 4.5/4.6 ≈ 60–80 tps, 1–2s TTFT, $5 / $25 per 1M.
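To make the price gap concrete, a back-of-envelope per-request comparison using Mark's figures. The token counts, and the reading of "$1 / $5" as input/output prices, are assumptions:

```python
# Per-request cost from per-1M-token prices. Token counts are illustrative,
# and "$X / $Y" is read as input/output pricing -- both are assumptions.
def cost_usd(in_tokens: int, out_tokens: int, in_price: float, out_price: float) -> float:
    return (in_tokens * in_price + out_tokens * out_price) / 1_000_000

IN_TOK, OUT_TOK = 2_000, 500        # assumed classifier + WHY turn
haiku = cost_usd(IN_TOK, OUT_TOK, 1, 5)    # $1 in / $5 out per 1M
opus = cost_usd(IN_TOK, OUT_TOK, 5, 25)    # $5 in / $25 out per 1M
ratio = opus / haiku                        # 5.0 at these prices
```

At these prices the ratio is exactly 5× regardless of the input/output split, which matches the "5× cheaper than Opus tier" line in the table.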

Status: directive logged; swap not yet executed (we're not breaking the demo before the call). Next: open the model-swap conversation with Mark to confirm what stays on OpenRouter (e.g. Fireworks/Together fallback for resiliency) vs what moves to Anthropic direct.

Open questions for Apr 29

  • Lock the 14-class list with Mark
  • Should the classifier emit a "create_outcome?" hint when the query smells goal-shaped? (Apr 20 directive)
  • Recency-of-contact filter — does that live in the classifier or in find_investors?
  • Model swap: full swap to Haiku 4.5 (per Apr 29 directive) vs hybrid (Haiku for classify + extract, OpenRouter for WHY where the prompt is heaviest)?
