Prompts shouldn’t be a
black box
Stop sending your AI the entire manual
dot-prompt assembles
step-specific prompts
so you always see exactly what the model gets
Step 3 doesn't need steps 1–2. Now it won't get them.
if @user_is_new is true do
  Welcome! Let's start with
  the basics of how this works.
else
  Welcome back! Ready
  to continue?
end @user_is_new
Welcome! Let's start with
the basics of how this works.
Your prompts are invisible.
You write instructions for your AI. Variables get filled in. Conditions get evaluated. The final text gets sent off. And somewhere in that process — you lose sight of exactly what your AI actually received.
When something goes wrong — wrong tone, wrong format, an answer that makes no sense — you're left guessing. Was it the wording? The variable that got injected? The condition that fired unexpectedly?
Right now, the assembled prompt is a black box. You write it. You send it. You hope.
"When something goes wrong, you're left guessing"
One language. Every voice.
Debug with confidence
You know something's wrong. Now you can see exactly what it is. Inspect the compiled prompt, change a variable, watch the output update. No more print statements, no more guessing.
Prompts your whole team can read
Prompts buried in code are invisible to anyone who isn't a developer. dot-prompt files are plain text that anyone can open, read, and understand. Your PM can review what the AI is being told. Your designer can suggest changes. Everyone stays in the loop.
Write complex prompts without writing code
You know your domain. You know what the AI should say and when. dot-prompt gives you a structured way to express that — without needing to touch application code. Write the logic in plain language. See it work.
Understand what your product is doing
Your AI feature is live. Something's off. You need to know what your users are actually experiencing. dot-prompt shows you the exact prompt behind every response — not the template, the real thing.
Write once.
See everything.
Keep your prompts in readable files
Write your prompts in .prompt files — structured, readable, and separate from your code. Add branching logic, variables, and variations. Your whole team can open them.
if @user_is_new is true do
  Welcome! Let's start with the basics.
else
  Welcome back. Ready to continue?
end @user_is_new
Watch your prompt compile in real time
Open the dot-prompt viewer. Select your variables. Watch the exact output appear — the precise text your AI will receive. Change a variable. Watch it update instantly.
Welcome! Let's start with the basics.
Welcome back. Ready to continue?
Know exactly why your AI responds the way it does
When something is wrong you can see it. When something is right you can see why. No more guessing. No more hoping. Just the assembled prompt, right there, in front of you.
Visibility Achieved
You now have a source of truth. Your prompts are no longer strings trapped in code. They are managed, versioned artifacts.
Everything you need to
see inside your prompts.
Real-time preview
Select your variables and see the exact compiled output instantly. Change a value — watch the prompt update. The viewer shows you precisely what your AI will receive, token by token.
Branching that resolves before the call
Write conditional logic in your prompt files. dot-prompt resolves it before anything reaches your AI. Your AI receives one clean, flat instruction — not a maze of conditionals it has to interpret.
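The idea can be illustrated with a toy Python resolver — this is not dot-prompt's implementation, just a sketch that flattens a single `if @var is true do … else … end @var` block before anything is sent to the model:

```python
import re

def resolve_branch(template: str, params: dict) -> str:
    """Toy compile-time resolver for one if/else block.
    Illustration only -- not dot-prompt's actual parser."""
    pattern = re.compile(
        r"if @(\w+) is true do\n(.*?)\nelse\n(.*?)\nend @\1",
        re.DOTALL,
    )

    def pick(match):
        var, then_text, else_text = match.group(1), match.group(2), match.group(3)
        # Resolve the branch now, so the model only ever sees flat text.
        return then_text if params.get(var) else else_text

    return pattern.sub(pick, template)

template = (
    "if @user_is_new is true do\n"
    "Welcome! Let's start with the basics.\n"
    "else\n"
    "Welcome back. Ready to continue?\n"
    "end @user_is_new"
)

print(resolve_branch(template, {"user_is_new": True}))
# Welcome! Let's start with the basics.
```

Either way the model receives one flat string — the conditional never reaches it.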
Fragment composition
Build reusable prompt pieces and compose them together. Load skill definitions, context blocks, and examples from organised folders. Static fragments are cached and never fetched twice. Dynamic fragments are fetched fresh on every call.
fragments:
  {skill_guide}: static
    from: skills
    match: @topic              ← loads the right file automatically
  {examples}: static
    from: examples
    match: @topic
    limit: 3                   ← composes three examples
  {{user_history}}: dynamic    ← always fresh
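The caching behaviour can be sketched in a few lines of Python — a hypothetical fragment loader, not dot-prompt's internals, assuming fragments are resolved by some fetch function:

```python
class FragmentLoader:
    """Toy illustration of static vs dynamic fragments (not dot-prompt's
    internals): static fragments are cached after the first fetch;
    dynamic fragments are fetched fresh every time."""

    def __init__(self, fetch):
        self.fetch = fetch   # callable(name) -> fragment text
        self.cache = {}

    def load(self, name: str, kind: str) -> str:
        if kind == "static":
            if name not in self.cache:
                self.cache[name] = self.fetch(name)  # fetched once, then cached
            return self.cache[name]
        return self.fetch(name)                      # dynamic: always fresh

calls = []
def fetch(name):
    calls.append(name)
    return f"<{name} v{len(calls)}>"

loader = FragmentLoader(fetch)
loader.load("skill_guide", "static")    # fetches
loader.load("skill_guide", "static")    # served from cache
loader.load("user_history", "dynamic")  # fetches
loader.load("user_history", "dynamic")  # fetches again
print(calls)  # ['skill_guide', 'user_history', 'user_history']
```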
Input and output contracts
Declare what your prompt expects and what it returns. dot-prompt validates both sides. Breaking changes are detected automatically. Your AI feature stays predictable as your prompts evolve.
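What "validates both sides" could look like on the response side is sketched below — the contract shape here (a simple field-to-type map) is an assumption for illustration; dot-prompt's real contract format may differ:

```python
def validate(payload: dict, contract: dict) -> list:
    """Check a payload against a simple contract mapping field -> expected type.
    Hypothetical contract shape -- not dot-prompt's actual format."""
    errors = []
    for field, expected in contract.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    return errors

contract = {"answer": str, "confidence": float}
print(validate({"answer": "hi", "confidence": 0.9}, contract))  # []
print(validate({"answer": "hi"}, contract))  # ['missing field: confidence']
```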
Version control that understands prompts
dot-prompt knows the difference between a breaking change and a safe update. When you change the contract, it asks you to version. Old versions keep working for callers that haven't upgraded yet.
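A rough rule of thumb for breaking versus safe changes can be written down directly — again assuming contracts are field-to-type maps; this is an illustration, not dot-prompt's actual versioning algorithm:

```python
def is_breaking(old: dict, new: dict) -> bool:
    """A change is breaking if a field existing callers rely on is removed
    or retyped. Adding a new field is a safe update.
    Hypothetical rule of thumb, not dot-prompt's algorithm."""
    for field, typ in old.items():
        if field not in new or new[field] is not typ:
            return True
    return False

v1 = {"answer": str}
print(is_breaking(v1, {"answer": str, "sources": list}))  # False -- safe addition
print(is_breaking(v1, {"answer": dict}))                  # True  -- retyped field
```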
Built for teams, not just developers
.prompt files are plain text. Anyone on your team can open them, read them, and understand what your AI is being told. Product, design, and domain experts can finally participate in how your AI behaves.
This is what your
AI actually receives.
... 47 lines of complex branching logic ...

if @user_status is high_value do
  [include: "offers/vip"]
else if @user_is_new and @country == 'US' do
  [include: "offers/onboarding_us"]
else
  [include: "offers/generic"]
end
Welcome back, Alice!
Check out our latest VIP member offers exclusively for you.
"Every branch resolved. Every variable filled. Every token accounted for. This is what dot-prompt shows you."
Change the params — the output changes instantly.
From black box
to glass box.
Before dot-prompt
- Prompts buried in application code
- No way to see the assembled output
- Debugging means adding print statements
- Non-developers cannot read or review prompts
- Breaking changes are invisible until production
- Token costs are a mystery
After dot-prompt
- Prompts in readable .prompt files
- See the exact assembled output instantly
- Debug by changing variables in the viewer
- Anyone on the team can open and read prompts
- Breaking changes are caught and versioned automatically
- Token counts visible on every compile
- Reusable prompt pieces that compose automatically
- Static fragments cached — dynamic fragments always fresh
Up and running
in minutes.
Start dot-prompt
Add this docker-compose.yml to your project root and run one command.
docker compose up

Then open localhost:4040 — your viewer is running.
services:
  dot-prompt:
    image: dotprompt/server
    ports:
      - "4040:4040" # viewer
      - "4041:4041" # API
    volumes:
      - .:/app/repo
    environment:
      - REPO_PATH=/app/repo
      - PROMPTS_PATH=/app/repo/priv/prompts

Call the API
Pick your language. dot-prompt works with any language via the container HTTP API or native libraries.
# Terminal
pip install dot-prompt
# Python Code
from dot_prompt import Client
client = Client("http://localhost:4041")
result = client.render(
"my_prompt",
params={"user_level": "intermediate"},
runtime={"user_input": "Hello"}
)
result["prompt"] # send to your LLM
result["response_contract"] # validate the response

curl -X POST http://localhost:4041/api/render \
-H "Content-Type: application/json" \
-d '{
"prompt": "my_prompt",
"params": {"user_level": "intermediate"},
"runtime": {"user_input": "Hello"}
}'
Readable by humans.
Structured for machines.
.prompt files are plain text with a clean, minimal syntax. One rule: @ means variable. Everything else is prose. Your whole team can read them. Your AI receives only what it needs.
init do                             ← file setup
  @version: 1.0
  @major: 1

  params:
    @user_level: enum[beginner,     ← typed variables
      intermediate, advanced]
      = intermediate                ← sensible defaults
      -> user experience level      ← team documentation

  fragments:
    {skill_guide}: static           ← compiled from another .prompt file
      from: skills                    cached, never fetched twice
      match: @topic
    {{recent_history}}: dynamic     ← fetched fresh on every call
      -> last 3 conversation turns
end init

Here is the relevant context:

{skill_guide}                       ← expands inline at compile time

Recent conversation:

{{recent_history}}                  ← injected at call time

if @user_is_new is true do          ← resolves before AI call
  Welcome! Let's start with
  the basics of @topic.
else
  Welcome back. Ready to
  continue with @topic?
end @user_is_new                    ← named closing — always clear
- @ means variable — always, everywhere, nothing else
- {} static fragment — compiled from another .prompt file, cached
- {{}} dynamic fragment — fetched fresh on every call, never cached
- Branching resolves at compile time — AI never sees the logic
- Named closing tags — always know what block just ended
- Plain prose — anyone can read it
Open source.
Forever.
dot-prompt is Apache 2.0 open source. The container, the language, the viewer, the MCP server — all of it. No usage limits. No feature paywalls. No strings.
Run it yourself. Use it however you want.
Managed Cloud
A managed service is coming for teams who want prompt hosting, LLM call handling, and analytics without running their own infrastructure.
Be first in line for the cloud beta
One language.
Every tool.
VS Code extension
Syntax highlighting, inline diagnostics, and breaking change notifications — right in your editor. Works whether you're writing or reviewing what an AI agent just wrote.
MCP server
Connect your AI coding tool to dot-prompt. It discovers your prompt schemas, understands your contracts, and writes `.prompt` files correctly without reading documentation.
Native Client Libs
Python Client → pip install dot-prompt
TypeScript Client → npm install @dot-prompt/sdk
Language specification
dot-prompt is an open language specification. Any tool can implement it. Any language can support it. The spec lives at github.com/dot-prompt/spec.
Loved by builders who ship.
"I've worked on five AI projects. Every single one had the same problem — nobody knew what the LLM was actually receiving. dot-prompt is the first tool that actually fixes this."
Sarah Chen
Senior AI Engineer at Vercel
"Our PM can finally read our prompts. That alone was worth it."
David Miller
Engineering Manager at Linear
"We went from 'why is it doing that' to 'oh, that's exactly why' in about ten minutes."
Elena Rodriguez
Founder at Stealth AI
Open the
black box.
See exactly what your AI receives. Understand exactly why it responds the way it does.