Stop asking AI for answers
Copy-paste the CFO critical thinking prompt directly into GPT-5, Gemini, or Claude.
I’m writing this while on holiday with my family.
There’s something about stepping away from the day-to-day that forces you to zoom out. You stop thinking about tasks and start thinking about systems. You stop thinking about speed and start thinking about judgment.
And judgment is exactly where AI is failing us today.
We’ve all fallen into the same trap. We treat LLMs like black boxes that should always know the answer. Ask a question. Get an answer. Move on. But here’s the uncomfortable truth every CFO, Controller, and FP&A leader needs to hear:
AI hallucinations don’t happen because the model is stupid.
They happen because we stop asking questions. And the AI follows our lead.
If we talk in absolutes, the model gives us absolutes.
If we skip assumptions, the model skips assumptions.
If we act like we fully understand the problem… the model pretends it does too.
And that is how bad forecasts, flawed board decks, and expensive mistakes are born.
So today, I want to talk about the idea I think will define the next generation of finance AI:

- Better answers come from better questions.
- Why finance can’t be wrong.
- Copy-paste the CFO critical thinking prompt.
Let’s dive in.
Better Answers Come From Better Questions
CFOs still underestimate how non-deterministic these systems are.
LLMs are incredible, but they’re also probabilistic. They guess. They fill gaps. They overconfidently invent details when the context is missing.
Even best-in-class research and tooling warn us:

- LLMs confidently hallucinate, so you must build a trust-but-verify mindset.
- Reducing errors requires one shift: ask clarifying questions before answering.
- Cursor’s Plan Mode forces the AI to pause, question, and validate before writing a line of code.
- finstory.ai prompts explicitly demand: “Pause analysis and request more context before producing output.”
You will agree with me on this… Better answers don’t come from more compute, a flashier model, or bigger context windows.
Better answers come from better questions.
The Socratic method, first principles, and problem framing.
We humans learned this centuries ago.
But with AI, we’ve stopped doing it. Why?
Finance Can’t Be Wrong
If Marketing gets something wrong, they fix the copy.
If Legal gets something wrong, someone reviews the contract.
If Product gets something wrong, they ship a patch.
But in Finance?
If AI misinterprets your assumption, skips a data dependency, or fills a gap incorrectly…
You never see the mistake.
You only see the consequences.
And the consequences show up everywhere:

- Wrong COGS
- Missed covenants
- Misleading ARR bridges
- Misaligned OPEX plans
- Bad cash runway scenarios
- Board decks that look polished but rest on sand
Finance is the last place where overconfidence belongs.
And yet our AI tools behave like overconfident interns.
Eager, fast, and wrong with charm.
The CFO Critical Thinking Prompt
In my experience, this prompt removes 80–90% of hallucinations, because the AI must clarify before calculating.
Copy-paste it directly into GPT-5 / GPT-5-Reasoning / Gemini / Claude.
# Identity
You are a CFO-grade AI analyst specializing in SaaS financial modeling, FP&A, forecasting, and scenario analysis.
Your goal is to improve the financial logic behind the model — not to guess numbers.
You must act like a senior finance leader: structured, skeptical, assumption-driven, and clarification-first.
# Core Principle
Before building the model, pause and ask clarifying questions until you are **95% confident** you understand the business model, metrics, assumptions, definitions, and constraints.
# Instructions
## 1. Clarification-First Workflow
Before producing *any* output:
- Ask me **7–12 targeted clarifying questions**.
- Do not perform analysis until all missing information is gathered.
Your questions must focus on:
- ARR breakdown (new, expansion, churn)
- Pricing model (seat, tiered, usage, hybrid)
- Customer segmentation
- Gross margin assumptions
- CAC / payback / sales efficiency
- Headcount plan & hiring ramp
- OPEX categories (R&D, S&M, G&A)
- Capital constraints and cash runway
- One-time costs or seasonality
- Definition alignment for ARR, churn, EBITDA, FCF, etc.
## 2. Plan Mode
Once I answer your clarifying questions:
- Present a **clear modeling plan**, including:
- The model structure (revenue, COGS, OPEX, headcount, cash)
- Required assumptions
- Scenarios (base, downside, upside)
- Data limitations and risks
## 3. Model Output Requirements
After I approve the plan, produce the 3-year model with:
### **A. Revenue Model**
- Opening ARR
- New ARR
- Expansion ARR
- Gross churn
- Net ARR movement
- Ending ARR
- Billings & revenue recognition logic
### **B. Gross Margin**
- COGS components (hosting, support, onboarding, PS)
- GM % by year
### **C. Operating Expenses**
Break into:
- R&D
- Sales & Marketing
- G&A
Include headcount assumptions, hiring timing, and cost per role.
### **D. Cash Flow**
- Operating cash flow
- Capex
- Free cash flow
- Cash runway analysis
### **E. Scenario Analysis**
Produce:
- **Base case**
- **Downside case**
- **Upside case**
Each scenario must reflect different:
- Churn assumptions
- Growth rates
- Hiring ramps
- Capital efficiency
### **F. Risks & Blind Spots**
Highlight:
- Forecast fragility
- Metric inconsistencies
- Assumption risks
- Data gaps that need CFO validation
### **G. Optional JSON**
Provide a clean JSON version of the model assumptions and outputs.
## 4. What You Must Not Do
- Do not skip the clarification questions.
- Do not guess numbers.
- Do not improvise missing definitions.
- Do not output chain-of-thought.
- Do not present any result without labeling uncertainty.
# Begin
I want you to build a 3-year financial model for our SaaS business.
Start by asking your clarification questions.

When you paste this into GPT-5, Gemini, or Claude, the AI won’t rush into building the model.
It will behave like a real FP&A director, and after you answer, it will produce:

- A modeling plan
- Transparent assumptions
- Scenarios
- A CFO-ready 3-year model
- Risks
- JSON output
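
To make the optional JSON step concrete, here is a minimal sketch of what it might return. The field names and figures are placeholders of my own, not a required schema; note that the ARR bridge ties out (ending ARR = opening + new + expansion − gross churn, i.e. 10.0M + 3.0M + 1.5M − 1.2M = 13.3M):

```json
{
  "assumptions": {
    "pricing_model": "per-seat",
    "target_gross_margin": 0.78,
    "annual_gross_churn_rate": 0.12
  },
  "revenue_model": {
    "year_1": {
      "opening_arr": 10000000,
      "new_arr": 3000000,
      "expansion_arr": 1500000,
      "gross_churn_arr": -1200000,
      "ending_arr": 13300000
    }
  },
  "scenarios": {
    "base": { "arr_growth": 0.33 },
    "downside": { "arr_growth": 0.20 },
    "upside": { "arr_growth": 0.45 }
  },
  "risks": [
    "churn assumption unvalidated against cohort data"
  ]
}
```

Asking for the bridge as explicit fields makes the model’s arithmetic auditable: if the movements don’t sum to ending ARR, you’ve caught a hallucination before it reaches the board deck.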
The Bottom Line
Here’s the truth no one wants to say out loud:
The future isn’t answer engines; it’s clarification engines.
Hallucinations aren’t only a technical issue. They’re a workflow issue.
The problem isn’t that AI knows too little; it’s that we ask too little of it.
The CFO who gets AI to pressure-test assumptions gains a real strategic edge.
In finance, precision is not optional.
Doubt is not a weakness.
Questions are not a delay.
Questions are the work.
The next big leap in AI isn’t bigger models or faster inference.
The leap is humility.
A model that questions you saves you millions.
A model that refuses to rush helps you avoid mistakes that burn reputations and cash.
CFOs don’t need AI that gives more answers.
You need AI that helps you ask better questions.
You need a second brain that thinks with you, not at you.
And that’s all for today!
See you Thursday.
Whenever you’re ready, there are 2 ways I can help you:
If you’re building an AI-powered CFO tech startup, I’d love to hear more and explore if it’s a fit for our investment portfolio.
I’m Wouter Born, a CFOTech investor, advisor, and founder of finstory.ai.
Find me on LinkedIn.