# Playing With Prompts
Where the prompts live, how to edit, how model selection works, pipeline diagram.

Mark — this is the doc you asked for. Edit a .md, run one command, test in /chat. No reverse-engineering.
The Anything Engine has prompts at every step — classifier, interview, summary builder, thesis extractor, WHY synthesizer. They all live in one folder of .md files, mirrored 1:1 to Xano endpoint slots. You edit, you sync, you test.
## The pipeline at a glance
Two ideas to lock in:
- Interview gates dispatch. No tool runs until the interview prompt returns `ready: true`. That's the readiness check you asked for on the Apr 29 call — "be intellectually honest; only ask for what you can't infer".
- Per-class prompts. Each of the 14 classes can have its own `interview.md`, `synthesize.md`, etc. Today four classes have interview prompts; the rest fall through to `find_investors` as a default until we add them.
## Where the prompts live
```text
prompts/
├── _shared/
│   ├── interview-style.md    # JSON contract every interview honors
│   ├── voice-rules.md        # banned phrases, tone, journalist trick
│   └── model-selection.md    # which model runs which step
├── anything_engine/
│   ├── classify.md           # 14-class router (Claude Haiku 4.5)
│   └── registry.json         # maps prompt files to Xano endpoint slots
├── find_investors/
│   ├── interview.md          # gates dispatch — deck path vs describe path
│   ├── thesis-extract.md     # PDF/URL → 6 vector fields
│   ├── synthesize.md         # WHY paragraph (Opus, journalist voice)
│   └── voice-rules.md        # find_investors-specific overrides
├── find_talent/
│   └── interview.md
├── find_customers/
│   └── interview.md
└── research_person/
    └── interview.md
```
### One-line tour
| File | Purpose |
|---|---|
| `_shared/interview-style.md` | The JSON contract `{ready, summary, next_question, missing_fields}`. Don't break it. |
| `_shared/voice-rules.md` | Banned phrases, tone targets. Apply to every output. |
| `_shared/model-selection.md` | Single source of truth: Haiku for chat, Opus for WHY, Llama 3.3 70B as fallback. |
| `anything_engine/classify.md` | 14-class router prompt. Confidence-gated; flips to `ask_back` under 0.7. |
| `anything_engine/registry.json` | Maps each `.md` to its Xano endpoint slot. The sync script reads this. |
| `find_investors/interview.md` | The reference implementation. Branch A (deck) vs Branch B (describe). Stop condition documented. |
| `find_investors/thesis-extract.md` | Pulls the 6 vector inputs from a deck/URL. |
| `find_investors/synthesize.md` | The WHY pass. One paragraph per investor. Journalist voice. |
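For concreteness, the contract in `_shared/interview-style.md` maps to a shape like this. This is an illustrative typing: the field names come from the table above, but the exact types are assumptions.

```typescript
// Illustrative typing of the interview JSON contract from
// _shared/interview-style.md. Field names are from this doc;
// the precise types are an assumption.
interface InterviewResult {
  ready: boolean;               // gates dispatch; nothing runs until true
  summary: string;              // the editable summary shown in the panel
  next_question: string | null; // null once the interview is done
  missing_fields: string[];     // what the model still can't infer
}

// A mid-interview turn might look like:
const turn: InterviewResult = {
  ready: false,
  summary: "find me investors",
  next_question: "deck or describe?",
  missing_fields: ["stage", "sector"],
};
```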
## How to edit a prompt
```bash
# 1. Edit any .md file in prompts/<class>/
vim prompts/find_investors/synthesize.md   # the WHY voice
# or
vim prompts/find_investors/interview.md    # the interview gate

# 2. Sync to Xano
pnpm sync-prompts
# (or: node scripts/sync-prompts.mjs)

# 3. Test
pnpm dev && open http://localhost:3001/chat

# or hit the endpoint directly:
curl -s -X POST 'https://xh2o-yths-38lt.n7c.xano.io/api:UgP1h6uR/anything-engine/find-investors' \
  -H 'Content-Type: application/json' \
  -d '{"query":"Series A medtech investors","count":3}' | jq '.investors[0].why'
```
The sync script is idempotent — running it twice is safe, and it only pushes prompts that actually changed. It walks every endpoint listed in `prompts/anything_engine/registry.json` that is marked `synced: true`, fetches the current XanoScript, surgically swaps the leading string literal of each `var $foo { value = "..." }` block, and PUTs the script back.

You need `XANO_META_TOKEN` in your env. Get one at app.xano.com → Account → Personal Access Tokens. Drop it in `.env` (the script reads from `process.env`, so any normal env loader works).
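The literal swap can be pictured with a small sketch. This is a simplification, not the real script: `scripts/sync-prompts.mjs` also handles single-quoted vars and multiple blocks per endpoint.

```typescript
// Hypothetical sketch of the leading-literal swap. Assumes a double-quoted
// var block; the real sync script handles both quote styles.
function escapeXs(s: string): string {
  // Escape the prompt body the way a double-quoted literal needs
  return s.replace(/\\/g, "\\\\").replace(/"/g, '\\"').replace(/\n/g, "\\n");
}

function swapVarLiteral(script: string, varName: string, body: string): string {
  // Match `var $<name> { value = "<literal>"` and swap only the literal.
  // Anything after the closing quote (|concat: tails, the closing }) survives.
  const re = new RegExp(
    String.raw`(var \$${varName}\s*\{\s*value\s*=\s*")((?:[^"\\]|\\.)*)(")`
  );
  if (!re.test(script)) throw new Error(`var $${varName} not found`);
  return script.replace(
    re,
    (_m: string, pre: string, _old: string, post: string) =>
      pre + escapeXs(body) + post
  );
}
```

Running it twice with the same body swaps a literal for itself, which is why the sync stays idempotent.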
## Worked example — change the WHY voice for find_investors
Say Mark wants to soften the "no CTAs" rule for find_investors specifically:
```bash
# 1. Open the WHY synth prompt for find_investors
vim prompts/find_investors/synthesize.md
# Edit the inline rule, e.g. swap "do NOT ask for a meeting" for
# "lead with the strongest signal; do not propose meetings".

# 2. Sync
pnpm sync-prompts
# Output:
# [sync-prompts] === endpoint 8401 (anything-engine/find-investors) ===
# [sync-prompts] prompts/find_investors/synthesize.md → $synth_prompt (changed)
# [sync-prompts] pushing 1 change(s) to 8401...
# [sync-prompts] done — 1 change(s) across 1 endpoint(s).

# 3. Test
curl -s -X POST 'https://xh2o-yths-38lt.n7c.xano.io/api:UgP1h6uR/anything-engine/find-investors' \
  -H 'Content-Type: application/json' \
  -d '{"query":"Series A medtech investors","count":2}' | jq '.investors[].why'
```
No Xano dashboard. No XanoScript. Just the .md and the command.
## How to add a new condition for a class
Want to add a "fundraising stage check" branch to find_investors? Two paths:
1. Edit the existing `interview.md`. Add a section with the new branch logic. Re-run sync. Test. (Easiest, fewest moving parts.)
2. Add a new prompt file. Create `prompts/find_investors/stage-check.md`. Add a row in `prompts/anything_engine/registry.json` under the relevant endpoint, mapping the file to a `var_name_in_xano` like `$stage_check_prompt`. Open the Xano endpoint (id 8411 for interview) in the dashboard, add a `var $stage_check_prompt { value = "..." }` block, and reference it in the system prompt assembly. Re-run sync. Test.
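If you take the second path, the new registry row might look like this. The schema below is inferred from field names mentioned in this doc (`prompt_files`, `var_name_in_xano`, `synced`); the exact shape of `registry.json` may differ.

```json
{
  "id": 8411,
  "path": "anything-engine/interview",
  "prompt_files": [
    {
      "file": "prompts/find_investors/stage-check.md",
      "var_name_in_xano": "$stage_check_prompt",
      "synced": true
    }
  ]
}
```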
## How to add a new class entirely
Say you want find_journalists to have its own interview prompt. Three steps:
1. `mkdir prompts/find_journalists && vim prompts/find_journalists/interview.md` — write the prompt following the `find_investors` shape.
2. Add a row to `prompts/anything_engine/registry.json` under `anything-engine/interview.prompt_files`.
3. Open the Xano interview endpoint (id 8411), add a new `var $prompt_find_journalists` block, and add a `conditional { if ($input.class == "find_journalists") { var.update $class_prompt { value = $prompt_find_journalists } } }`.

Then `pnpm sync-prompts` and you're live.
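In plain terms, the conditional in step 3 (plus the default fallback described earlier) amounts to the following. This is a TypeScript analogy, not XanoScript, and the prompt bodies are placeholders.

```typescript
// TypeScript analogy for the per-class prompt selection in the interview
// endpoint. Prompt bodies here are placeholders, not the real prompts.
const classPrompts: Record<string, string> = {
  find_investors: "(find_investors interview prompt)",
  find_journalists: "(find_journalists interview prompt)",
};

function classPrompt(inputClass: string): string {
  // Classes without their own prompt fall through to find_investors,
  // matching the default behavior described in this doc.
  return classPrompts[inputClass] ?? classPrompts["find_investors"];
}
```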
## How model selection works
`prompts/_shared/model-selection.md` is the single source of truth. Edit it to retarget which model runs which step. Today:
| Step | Model | Why |
|---|---|---|
| classify | claude-haiku-4-5 | fast + JSON-disciplined |
| interview | claude-haiku-4-5 | conversational, low-latency |
| build-summary | claude-haiku-4-5 | one-shot, deterministic |
| thesis-extract | claude-haiku-4-5 | structured JSON from deck |
| synthesize | claude-opus-4-1 | the WHY paragraph — quality matters |
Fallback chain (resilience): `meta-llama/llama-3.3-70b-instruct` via OpenRouter (Fireworks → Together preferred), then `gpt-4.1-mini`. The classifier and interviewer never block dispatch, even on total fallback failure.
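As a sketch, the chain amounts to trying each model in order until one answers. This is a hypothetical helper: the model IDs are from `_shared/model-selection.md`, but the call signature is an assumption.

```typescript
// Hypothetical fallback helper: try each model in order, return the first
// success. Model IDs come from _shared/model-selection.md.
type ModelCall = (model: string) => Promise<string>;

const FALLBACK_CHAIN = [
  "claude-haiku-4-5",
  "meta-llama/llama-3.3-70b-instruct", // via OpenRouter (Fireworks → Together)
  "gpt-4.1-mini",
];

async function withFallback(call: ModelCall, chain = FALLBACK_CHAIN): Promise<string> {
  let lastError: unknown;
  for (const model of chain) {
    try {
      return await call(model);
    } catch (err) {
      lastError = err; // this model failed; fall through to the next
    }
  }
  throw lastError; // total fallback failure
}
```

Per the doc, the classifier and interviewer catch even that final throw, so dispatch is never blocked.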
Note: today the sync script only pushes prompt content. Model swaps still happen via the Xano dashboard. Phase 2 — sync the `model:` field too.
## The Xano endpoints behind these prompts
| Endpoint | Xano API ID | Driven by | Synced? |
|---|---|---|---|
| `POST /anything-engine/classify` | 8400 | `prompts/anything_engine/classify.md` | documented |
| `POST /anything-engine/interview` | 8411 | `prompts/<class>/interview.md` (4 classes) | YES |
| `POST /anything-engine/build-summary` | 8410 | inline (no prompt file yet) | — |
| `POST /anything-engine/dispatch` | 8399 | inline router | — |
| `POST /anything-engine/find-investors` | 8401 | `find_investors/synthesize.md` (`$synth_prompt`) | YES |
| `POST /anything-engine/find-talent` | 8402 | `find_talent/synthesize.md` (`$synth_prompt`) | YES |
| `POST /anything-engine/find-customers` | 8406 | `find_customers/synthesize.md` (`$synth_prompt`) | YES |
| `POST /anything-engine/research-person` | 8407 | `research_person/synthesize.md` (`$synth_prompt`) | YES |
All in workspace 3, API group 1270 (UgP1h6uR).
## What /chat does end-to-end
- User types: "I want to find investors"
- `/api/classify` → `{class: "find_investors", confidence: 0.95}`
- `/api/interview` → `{ready: false, next_question: "deck or describe?", summary: "find me investors"}`
- Chat shows the next question. User replies.
- `/api/interview` runs again with the full transcript.
- Loop until `ready: true` — typically 2-4 turns.
- Summary panel on the right is now editable + dispatchable.
- User reviews the summary, edits if needed, clicks Dispatch.
- `/api/find-investors` streams Crayon cards back.
The interview is where the "intellectually honest" rule lives. Every prompt enforces: stop the moment you have enough; never ask twice; never pad.
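The loop above can be sketched as follows. All shapes here are assumptions; the real state machine is `runInterview` in `src/app/chat/page.tsx`.

```typescript
// Sketch of the interview loop: call /api/interview until ready: true,
// then hand the summary to dispatch. Shapes and turn cap are assumptions.
interface InterviewResult {
  ready: boolean;
  summary: string;
  next_question: string | null;
  missing_fields: string[];
}

async function runInterviewLoop(
  interview: (transcript: string[]) => Promise<InterviewResult>,
  askUser: (question: string) => Promise<string>,
  firstMessage: string,
  maxTurns = 6
): Promise<string> {
  const transcript = [firstMessage];
  for (let turn = 0; turn < maxTurns; turn++) {
    const res = await interview(transcript);
    if (res.ready) return res.summary; // gate passed; dispatch may run
    // show next_question in chat, append the user's reply, loop
    transcript.push(await askUser(res.next_question ?? "Anything else?"));
  }
  throw new Error("interview did not converge");
}
```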
## What's still inline (Phase 2 work)
These prompts are inside Xano endpoints today, not in `.md` files. Add them to `prompts/anything_engine/registry.json` (with `synced: true`) as you externalize each.
- `anything-engine/build-summary` — the summary refiner (8410)
- `anything-engine/dispatch` — the router (8399, mostly control flow)
- `anything-engine/classify` — classifier (8400, documented but not auto-pushed yet)
- `find_investors/thesis-extract.md` — the deck → 6-vector extractor; documented in the registry as `synced: false` until the endpoint inlines it as a single var
## What's already externalized (Apr 29)
- All four interview prompts (8411 / `prompts/<class>/interview.md`)
- All four WHY-synthesis prompts:
  - 8401 `$synth_prompt` ← `prompts/find_investors/synthesize.md`
  - 8402 `$synth_prompt` ← `prompts/find_talent/synthesize.md`
  - 8406 `$synth_prompt` ← `prompts/find_customers/synthesize.md`
  - 8407 `$synth_prompt` ← `prompts/research_person/synthesize.md` (single-quoted in XanoScript because the body contains escaped double quotes — the sync script handles both quote styles)
## Quick troubleshooting
`sync-prompts` says "var not found in endpoint script". The endpoint structure changed (someone edited it via the Xano dashboard). Open the endpoint and confirm the `var $<varname> { value = "..." }` (or `value = '...'`) block is still there — the script matches that exact pattern. Anything chained off the literal (`|concat:$input.query` etc.) is preserved, so concat-tail patterns survive sync.

The synth prompt has a `|concat:$input.query` tail. Will sync clobber that? No. The script only swaps the leading string literal between the quotes. Everything after the closing quote — newlines, `|concat:` chains, the closing `}` — is preserved verbatim.

My new prompt class isn't in the synced list. Add a row to `prompts/anything_engine/registry.json` under the relevant endpoint with `synced: true`, the right `var_name_in_xano`, and `quote: "single"` if the XanoScript var is single-quoted (only `research_person` is, due to embedded `"` in the prompt body).

The interview keeps asking the same question. The model lost track. Check `prompts/_shared/interview-style.md` — the "stop-asking heuristic" section. Tighten the rule, sync, retest.

Summary panel keeps overwriting your edits. It shouldn't — once you type into the box, `summaryDirtyRef` flips and we stop auto-updating. If it's still happening, check `runInterview` in `src/app/chat/page.tsx`.

JSON parse failures from the interview endpoint. The model returned prose. Tighten the "Output discipline" section of `_shared/interview-style.md`, sync, retest.
## Where to look for code
- Frontend chat: `src/app/chat/page.tsx` (state machine + summary panel)
- BFFs: `src/app/api/{classify,interview,build-summary,find-investors}/route.ts`
- Sync script: `scripts/sync-prompts.mjs`
- Registry: `prompts/anything_engine/registry.json`
- Xano endpoint XanoScript: open in app.xano.com, workspace 3, API group 1270