
Playing With Prompts

Where the prompts live, how to edit, how model selection works, pipeline diagram.


Mark — this is the doc you asked for. Edit a .md, run one command, test in /chat. No reverse-engineering.

The Anything Engine has prompts at every step — classifier, interview, summary builder, thesis extractor, WHY synthesizer. They all live in one folder of .md files, mirrored 1:1 to Xano endpoint slots. You edit, you sync, you test.

The pipeline at a glance

Two ideas to lock in:

  1. Interview gates dispatch. No tool runs until the interview prompt returns ready: true. That's the readiness check you asked for on the Apr 29 call — "be intellectually honest; only ask for what you can't infer".
  2. Per-class prompts. Each of the 14 classes can have its own interview.md, synthesize.md, etc. Today four classes have interview prompts; the rest fall through to find_investors as a default until we add them.
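The per-class fallback can be sketched like this (a hypothetical illustration — the real resolution happens inside the Xano interview endpoint, and the prompt strings here are placeholders):

```typescript
// Hypothetical sketch: per-class interview prompt resolution, with the
// find_investors prompt as the default for classes that lack one today.
const interviewPrompts: Record<string, string> = {
  find_investors: "<find_investors interview prompt>",
  find_talent: "<find_talent interview prompt>",
  find_customers: "<find_customers interview prompt>",
  research_person: "<research_person interview prompt>",
};

function promptForClass(cls: string): string {
  // classes without their own interview.md fall through to find_investors
  return interviewPrompts[cls] ?? interviewPrompts["find_investors"];
}
```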

Where the prompts live

prompts/
├── _shared/
│   ├── interview-style.md       # JSON contract every interview honors
│   ├── voice-rules.md           # banned phrases, tone, journalist trick
│   └── model-selection.md       # which model runs which step
├── anything_engine/
│   ├── classify.md              # 14-class router (Claude Haiku 4.5)
│   └── registry.json            # maps prompt files to Xano endpoint slots
├── find_investors/
│   ├── interview.md             # gates dispatch — deck path vs describe path
│   ├── thesis-extract.md        # PDF/URL → 6 vector fields
│   ├── synthesize.md            # WHY paragraph (Opus, journalist voice)
│   └── voice-rules.md           # find_investors-specific overrides
├── find_talent/
│   └── interview.md
├── find_customers/
│   └── interview.md
└── research_person/
    └── interview.md

One-line tour

| File | Purpose |
| --- | --- |
| _shared/interview-style.md | The JSON contract {ready, summary, next_question, missing_fields}. Don't break it. |
| _shared/voice-rules.md | Banned phrases, tone targets. Apply to every output. |
| _shared/model-selection.md | Single source of truth: Haiku for chat, Opus for WHY, Llama 3.3 70B as fallback. |
| anything_engine/classify.md | 14-class router prompt. Confidence-gated; flips to ask_back under 0.7. |
| anything_engine/registry.json | Maps each .md to its Xano endpoint slot. The sync script reads this. |
| find_investors/interview.md | The reference implementation. Branch A (deck) vs Branch B (describe). Stop condition documented. |
| find_investors/thesis-extract.md | Pulls the 6 vector inputs from a deck/URL. |
| find_investors/synthesize.md | The WHY pass. One paragraph per investor. Journalist voice. |
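The confidence gate in classify.md works roughly like this (a hypothetical sketch — the real gating is expressed inside the prompt's output contract, not in app code):

```typescript
// Hypothetical sketch of the 0.7 confidence gate: below the threshold,
// the router returns ask_back instead of committing to one of the 14 classes.
function routeClass(cls: string, confidence: number): string {
  return confidence < 0.7 ? "ask_back" : cls;
}
```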

How to edit a prompt

# 1. Edit any .md file in prompts/<class>/
vim prompts/find_investors/synthesize.md   # the WHY voice
# or
vim prompts/find_investors/interview.md    # the interview gate

# 2. Sync to Xano
pnpm sync-prompts
# (or: node scripts/sync-prompts.mjs)

# 3. Test
pnpm dev && open http://localhost:3001/chat
# or hit the endpoint directly:
curl -s -X POST 'https://xh2o-yths-38lt.n7c.xano.io/api:UgP1h6uR/anything-engine/find-investors' \
  -H 'Content-Type: application/json' \
  -d '{"query":"Series A medtech investors","count":3}' | jq '.investors[0].why'

The sync script is idempotent: running it twice is safe, and it only pushes prompts that actually changed. It walks every endpoint in prompts/anything_engine/registry.json marked synced: true, fetches the current XanoScript, surgically swaps the leading literal of each var $foo { value = "..." } block, and PUTs the script back.
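The literal-swap step can be sketched roughly as follows (hypothetical — swapVarLiteral is an illustration of the technique, not the actual function in scripts/sync-prompts.mjs):

```typescript
// Hypothetical sketch of the sync script's literal swap: replace only the
// leading string literal of a `var $name { value = "..." }` block, and
// preserve anything chained after the closing quote (e.g. |concat: tails).
function swapVarLiteral(script: string, varName: string, newValue: string): string {
  // Escape backslashes and double quotes for the XanoScript string literal.
  const escaped = newValue.replace(/\\/g, "\\\\").replace(/"/g, '\\"');
  // Group 1: everything up to and including the opening quote.
  // Group 2: the old literal (allowing escaped characters inside).
  // Group 3: the closing quote; the rest of the line is untouched.
  const pattern = new RegExp(
    `(var \\$${varName}\\s*\\{\\s*value\\s*=\\s*")((?:[^"\\\\]|\\\\.)*)(")`
  );
  if (!pattern.test(script)) {
    throw new Error(`var $${varName} not found in endpoint script`);
  }
  return script.replace(pattern, `$1${escaped}$3`);
}

const before = 'var $synth_prompt { value = "old prompt"|concat:$input.query }';
const after = swapVarLiteral(before, "synth_prompt", 'Say "hi"');
console.log(after);
// → var $synth_prompt { value = "Say \"hi\""|concat:$input.query }
```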

You need XANO_META_TOKEN in your env. Get one at app.xano.com → Account → Personal Access Tokens. Drop it in .env (the script reads from process.env so any normal env loader works).

Worked example — change the WHY voice for find_investors

Say Mark wants to soften the "no CTAs" rule for find_investors specifically:

# 1. Open the WHY synth prompt for find_investors
vim prompts/find_investors/synthesize.md
# Edit the inline rule, e.g. swap "do NOT ask for a meeting" for
# "lead with the strongest signal; do not propose meetings".

# 2. Sync
pnpm sync-prompts
# Output:
#   [sync-prompts] === endpoint 8401 (anything-engine/find-investors) ===
#   [sync-prompts]   prompts/find_investors/synthesize.md → $synth_prompt (changed)
#   [sync-prompts]   pushing 1 change(s) to 8401...
#   [sync-prompts] done — 1 change(s) across 1 endpoint(s).

# 3. Test
curl -s -X POST 'https://xh2o-yths-38lt.n7c.xano.io/api:UgP1h6uR/anything-engine/find-investors' \
  -H 'Content-Type: application/json' \
  -d '{"query":"Series A medtech investors","count":2}' | jq '.investors[].why'

No Xano dashboard. No XanoScript. Just the .md and the command.

How to add a new condition for a class

Want to add a "fundraising stage check" branch to find_investors? Two paths:

  1. Edit the existing interview.md. Add a section with the new branch logic. Re-run sync. Test. (Easiest, fewest moving parts.)

  2. Add a new prompt file. Create prompts/find_investors/stage-check.md. Add a row in prompts/anything_engine/registry.json under the relevant endpoint, mapping the file to a var_name_in_xano like $stage_check_prompt. Open the Xano endpoint (id 8411 for interview) in the dashboard, add a var $stage_check_prompt { value = "..." } block, and reference it in the system prompt assembly. Re-run sync. Test.

How to add a new class entirely

Say you want find_journalists to have its own interview prompt. Three steps:

  1. mkdir prompts/find_journalists && vim prompts/find_journalists/interview.md — write the prompt following the find_investors shape.
  2. Add a row to prompts/anything_engine/registry.json under anything-engine/interview.prompt_files.
  3. Open the Xano interview endpoint (id 8411), add a new var $prompt_find_journalists block, and add a conditional { if ($input.class == "find_journalists") { var.update $class_prompt { value = $prompt_find_journalists } } }.

Then pnpm sync-prompts and you're live.
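A registry row for the new class might look like this (a hypothetical sketch — the doc only establishes the prompt_files, var_name_in_xano, and synced names; the endpoint_id field and the exact nesting are guesses, so check prompts/anything_engine/registry.json for the real schema):

```json
{
  "anything-engine/interview": {
    "endpoint_id": 8411,
    "prompt_files": [
      {
        "file": "prompts/find_journalists/interview.md",
        "var_name_in_xano": "$prompt_find_journalists",
        "synced": true
      }
    ]
  }
}
```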

How model selection works

prompts/_shared/model-selection.md is the single source of truth. Edit it to retarget which model runs which step. Today:

| Step | Model | Why |
| --- | --- | --- |
| classify | claude-haiku-4-5 | fast + JSON-disciplined |
| interview | claude-haiku-4-5 | conversational, low-latency |
| build-summary | claude-haiku-4-5 | one-shot, deterministic |
| thesis-extract | claude-haiku-4-5 | structured JSON from deck |
| synthesize | claude-opus-4-1 | the WHY paragraph — quality matters |

Fallback chain (resilience): meta-llama/llama-3.3-70b-instruct via OpenRouter (Fireworks → Together preferred), then gpt-4.1-mini. Classifier and interviewer never block dispatch on total fallback failure.
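The fallback behavior can be sketched as follows (hypothetical — withFallback illustrates the chain described above and is not code from the repo):

```typescript
// Hypothetical sketch of the model fallback chain: try each model in order;
// on total failure, return null so classifier/interviewer never block dispatch.
type ModelCall = () => Promise<string>;

async function withFallback(calls: ModelCall[]): Promise<string | null> {
  for (const call of calls) {
    try {
      return await call();
    } catch {
      // this model failed; fall through to the next one in the chain
    }
  }
  return null; // total fallback failure: the caller proceeds anyway
}
```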

Note: today the sync script only pushes prompt content. Model swaps still happen via the Xano dashboard. Phase 2 — sync the model: field too.

The Xano endpoints behind these prompts

| Endpoint | Xano API ID | Driven by | Synced? |
| --- | --- | --- | --- |
| POST /anything-engine/classify | 8400 | prompts/anything_engine/classify.md | documented |
| POST /anything-engine/interview | 8411 | prompts/<class>/interview.md (4 classes) | YES |
| POST /anything-engine/build-summary | 8410 | inline (no prompt file yet) | no |
| POST /anything-engine/dispatch | 8399 | inline router | no |
| POST /anything-engine/find-investors | 8401 | find_investors/synthesize.md ($synth_prompt) | YES |
| POST /anything-engine/find-talent | 8402 | find_talent/synthesize.md ($synth_prompt) | YES |
| POST /anything-engine/find-customers | 8406 | find_customers/synthesize.md ($synth_prompt) | YES |
| POST /anything-engine/research-person | 8407 | research_person/synthesize.md ($synth_prompt) | YES |

All in workspace 3, API group 1270 (UgP1h6uR).

What /chat does end-to-end

  1. User types: "I want to find investors"
  2. /api/classify → {class: "find_investors", confidence: 0.95}
  3. /api/interview → {ready: false, next_question: "deck or describe?", summary: "find me investors"}
  4. Chat shows the next question. User replies.
  5. /api/interview runs again with the full transcript.
  6. Loop until ready: true — typically 2-4 turns.
  7. Summary panel on the right is now editable + dispatchable.
  8. User reviews the summary, edits if needed, clicks Dispatch.
  9. /api/find-investors streams Crayon cards back.

The interview is where the "intellectually honest" rule lives. Every prompt enforces: stop the moment you have enough; never ask twice; never pad.
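The contract from _shared/interview-style.md types out roughly like this (a sketch — canDispatch is a hypothetical helper; the real gate lives in the interview endpoint and the chat state machine):

```typescript
// The JSON contract every interview honors, per _shared/interview-style.md.
interface InterviewTurn {
  ready: boolean;
  summary: string;
  next_question: string | null;
  missing_fields: string[];
}

// Hypothetical helper: dispatch is gated on ready: true;
// missing_fields explains what is still needed while ready is false.
function canDispatch(turn: InterviewTurn): boolean {
  return turn.ready;
}
```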

What's still inline (Phase 2 work)

These prompts are inside Xano endpoints today, not in .md files. Add them to prompts/anything_engine/registry.json (with synced: true) as you externalize each.

  • anything-engine/build-summary — the summary refiner (8410)
  • anything-engine/dispatch — the router (8399, mostly control flow)
  • anything-engine/classify — classifier (8400, documented but not auto-pushed yet)
  • find_investors/thesis-extract.md — the deck → 6-vector extractor; listed in the registry with synced: false until the endpoint holds the prompt as a single var

What's already externalized (Apr 29)

  • All four interview prompts (8411 / prompts/<class>/interview.md)
  • All four WHY-synthesis prompts:
    • 8401 $synth_prompt → prompts/find_investors/synthesize.md
    • 8402 $synth_prompt → prompts/find_talent/synthesize.md
    • 8406 $synth_prompt → prompts/find_customers/synthesize.md
    • 8407 $synth_prompt → prompts/research_person/synthesize.md (single-quoted in XanoScript because the body contains escaped double quotes — the sync script handles both quote styles)

Quick troubleshooting

sync-prompts says "var not found in endpoint script". The endpoint structure changed (someone edited it via the Xano dashboard). Open the endpoint and confirm the var $<varname> { value = "..." } (or value = '...') block is still there; the script matches that exact pattern. Anything chained off the literal (|concat:$input.query etc.) is preserved, so concat-tail patterns survive sync.

The synth prompt has a |concat:$input.query tail. Will sync clobber that? No. The script only swaps the leading string literal between the quotes. Everything after the closing quote — newlines, |concat: chains, the closing } — is preserved verbatim.

My new prompt class isn't in the synced list. Add a row to prompts/anything_engine/registry.json under the relevant endpoint with synced: true, the right var_name_in_xano, and quote: "single" if the XanoScript var is single-quoted (only research_person is, due to embedded " in the prompt body).

The interview keeps asking the same question. The model lost track. Check prompts/_shared/interview-style.md — the "stop-asking heuristic" section. Tighten the rule, sync, retest.

Summary panel keeps overwriting your edits. It shouldn't — once you type into the box, summaryDirtyRef flips and we stop auto-updating. If it's still happening, check runInterview in src/app/chat/page.tsx.
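The intended behavior can be sketched like this (hypothetical — the real guard is a React ref inside runInterview in src/app/chat/page.tsx):

```typescript
// Hypothetical sketch of the summaryDirtyRef guard: once the user edits the
// summary box, interview auto-updates stop overwriting their text.
const summaryDirtyRef = { current: false };

function onUserEditSummary(): void {
  summaryDirtyRef.current = true;
}

function applyInterviewSummary(current: string, incoming: string): string {
  // keep the user's text once the dirty flag has flipped
  return summaryDirtyRef.current ? current : incoming;
}
```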

JSON parse failures from the interview endpoint. The model returned prose. Tighten _shared/interview-style.md's "Output discipline" section, sync, retest.

Where to look for code

  • Frontend chat: src/app/chat/page.tsx (state machine + summary panel)
  • BFFs: src/app/api/{classify,interview,build-summary,find-investors}/route.ts
  • Sync script: scripts/sync-prompts.mjs
  • Registry: prompts/anything_engine/registry.json
  • Xano endpoint XanoScript: open in app.xano.com, workspace 3, API group 1270