The Small Prompt Adjustment I Made That Reduced My AI's Hallucinations By 70%
8 copy-paste prompts that force AI to flag its own uncertain claims before you read them
What You’ll Get
8 copy-paste prompts that force AI to flag its own uncertain claims before you read them
A pre-flight check system that tells you where to trust the AI and where to verify yourself
A citation stress-test that catches fake sources in under 60 seconds
A bonus Hallucination Prevention Checklist you can drop into any prompt instantly
A sequencing guide so you know exactly which prompts to use for quick research vs. client-facing work
AI gave me a stat last week that sounded perfect. Cited a study, named the institution, gave me a year. I put it in a client report. Forty minutes later, I found out the study didn’t exist.
That kind of thing used to happen to me constantly. Not because I was using bad tools; Claude, ChatGPT, and the rest are genuinely good. The problem was how I was asking. I was writing prompts that practically invited the AI to make things up, and I had no idea I was doing it.
One adjustment changed that. Then a few more. Now I catch about 70% of what used to slip through, and my outputs need way less fact-checking time.
Here’s the full system, starting with the one that made the biggest difference.
The Core Problem
AI models are trained to be helpful. When you ask a vague question, they fill in the gaps confidently. That confidence is the trap.
You’re not getting a lie, exactly. You’re getting the most statistically likely answer, whether or not it’s true. The fix isn’t a better AI. It’s a prompt that makes honesty easier than invention.
The Prompts
Prompt #1: The Uncertainty Gate
What it does: Forces the AI to flag anything it’s not sure about before you have to go hunting for it.
When to use it: Any time you’re asking for facts, statistics, names, dates, or research.
The Prompt:
Before you answer, I want you to rate your confidence on each claim you make.
Use this format:
[HIGH] — you're certain this is accurate
[MED] — you believe it's true but haven't verified
[LOW] — you're inferring or estimating
Topic: [YOUR TOPIC OR QUESTION]
After your answer, list any [MED] or [LOW] claims separately so I know exactly what to verify.
How to use it:
Drop your actual question into the [YOUR TOPIC OR QUESTION] slot
Run it and scan the confidence tags first, before reading the full answer
Copy the verification list into a separate doc and fact-check those items only
Example input: Topic: The history of content moderation policies at major social platforms, 2018-2022
What you’ll get: A structured answer with every shaky claim flagged. Instead of reading 600 words and guessing which parts to check, you get a shortlist.
Pro tip: Add “do not include [LOW] confidence claims in your main answer, move them to the verification list automatically” if you want a cleaner first draft.
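If you run this prompt often, the confidence tags are easy to pull out programmatically. Here's a minimal sketch in plain Python string parsing — it assumes the model kept the one-claim-per-line tag format from the prompt, and the sample response below is invented for illustration:

```python
import re

# Matches lines tagged [HIGH], [MED], or [LOW] in a response
# formatted by the Uncertainty Gate prompt.
TAG_PATTERN = re.compile(r"^\[(HIGH|MED|LOW)\]\s*(.+)$", re.MULTILINE)

def verification_list(response: str) -> list[str]:
    """Return the claims tagged [MED] or [LOW] — the ones to fact-check."""
    return [claim.strip()
            for tag, claim in TAG_PATTERN.findall(response)
            if tag in ("MED", "LOW")]

# Invented sample response for illustration:
sample = """\
[HIGH] Content moderation policies tightened industry-wide after 2018.
[MED] Most major platforms expanded their trust-and-safety teams in 2020.
[LOW] Appeal volumes roughly doubled between 2019 and 2021.
"""

print(verification_list(sample))
# ['Most major platforms expanded their trust-and-safety teams in 2020.',
#  'Appeal volumes roughly doubled between 2019 and 2021.']
```

This is the same move the prompt asks the AI to do by hand — separate the shortlist from the answer — just automated for when you're running many research sessions.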
Prompt #2: The Source Request Filter
What it does: Separates things the AI actually knows from things it’s pattern-matching into existence.
When to use it: Research tasks, statistics, quotes, or any claim you might repeat to someone else.
The Prompt:
Answer my question below. For every factual claim, do one of three things:
1. Name a specific, real source (publication, study, institution — with year if you know it)
2. Write "(general knowledge)" if it's widely established and you don't have a specific source
3. Write "(inference)" if you're reasoning from related information rather than stating a fact
Do not invent sources. If you don't know the source, say so.
Question: [YOUR QUESTION]
How to use it:
Ask your question normally
Review every claim tagged “(inference)” — those are the ones that need verification
For named sources, spot-check two or three before trusting the rest
Example input: Question: What percentage of B2B buyers read at least three pieces of content before talking to sales?
What you’ll get: Either a real source you can actually find, or an honest “(inference)” tag instead of a made-up stat.
Pro tip: If you get a named source you can’t locate, follow up with: “I can’t find that study. Is it possible you’re misremembering it? Give me your best alternative or mark it as inference.”
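The three tags this prompt produces sort naturally into a triage order: verify "(inference)" claims first, spot-check the named sources, and mostly trust "(general knowledge)". A minimal sketch of that triage in plain Python — it assumes one claim per line, and the sample response (including the source name) is invented for illustration:

```python
def bucket_claims(response: str) -> dict[str, list[str]]:
    """Sort tagged claims from the Source Request Filter into three buckets.

    "(inference)" claims need verification first; named sources get spot-checked.
    """
    buckets = {"sourced": [], "general": [], "inference": []}
    for line in filter(None, (l.strip() for l in response.splitlines())):
        if "(inference)" in line:
            buckets["inference"].append(line)
        elif "(general knowledge)" in line:
            buckets["general"].append(line)
        else:
            buckets["sourced"].append(line)
    return buckets

# Invented sample response, one claim per line:
sample = """\
B2B buying committees have grown over the past decade (Example Institute, 2021)
Content consumption usually precedes sales conversations (general knowledge)
Roughly three pieces of content per buyer before first contact (inference)
"""

print(bucket_claims(sample)["inference"])
# ['Roughly three pieces of content per buyer before first contact (inference)']
```

The order you check the buckets in is the point: inference first, sources second, general knowledge last.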
Prompt #3: The Contradiction Check
What it does: Makes the AI argue against its own answer to catch errors before you do.
When to use it: After getting any substantial research output or recommendation.
The Prompt:
You just gave me this answer:
[PASTE PREVIOUS AI RESPONSE]
Now I want you to challenge it. Specifically:
- What did you get wrong or oversimplify?
- What important context is missing?
- What would someone who disagrees say?
Be direct. Don't defend the original answer.
How to use it:
Run your normal research prompt first
Paste the full response into this prompt
Use the critique to decide what needs checking or rewriting
Example input: Paste any 200-400 word AI response about industry trends, statistics, or recommendations.
What you’ll get: A genuine self-critique. AI models are surprisingly good at catching their own errors when you explicitly ask them to look.
Pro tip: Run this twice on anything going into client-facing work. The second pass sometimes catches things that the first missed.
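If you run research through a script, the two-pass tip above is simple to automate. A sketch of that loop, assuming you have some `send(prompt) -> str` function for whatever model you use — `send` is a placeholder for your own client code, not a real library API:

```python
# The Contradiction Check prompt as a template (from the article above).
CONTRADICTION_CHECK = """You just gave me this answer:

{answer}

Now I want you to challenge it. Specifically:
- What did you get wrong or oversimplify?
- What important context is missing?
- What would someone who disagrees say?

Be direct. Don't defend the original answer."""

def critique(answer: str, send, passes: int = 2) -> list[str]:
    """Run the Contradiction Check `passes` times (twice for client-facing work).

    `send` is any callable that takes a prompt string and returns the
    model's response string.
    """
    return [send(CONTRADICTION_CHECK.format(answer=answer))
            for _ in range(passes)]
```

Each pass is independent, which mirrors the manual workflow: the second run sometimes surfaces problems the first one missed.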
You just got 3 prompts that cut the most common hallucination sources.
But catching bad information after it’s generated is only half the problem. The other half is writing prompts that stop the AI from making things up in the first place.
The next 5 prompts handle the prevention side:
How to constrain AI to only what it actually knows
A pre-flight check that runs before every research session
The citation verification prompt that catches fake sources in under 60 seconds
A confidence calibration system for ongoing projects
Plus: A copy-paste Hallucination Prevention Checklist you can add to any prompt
