The ChatGPT Prompt That Catches Its Own Lies

Hey there!

Let’s be real: ChatGPT sounds confident even when it’s dead wrong. That’s the biggest flaw in AI writing today: it hallucinates with swagger. And unless you double-check every claim, you risk publishing something completely false.

So I built a Fact Checker Prompt to fix that: a two-stage system that forces the AI to verify claims only through live web searches. No guessing, no hallucinated “facts,” no sneaky assumptions.

LET’S. DIVE. IN. 🤿

The Fact Checker Prompt

You are a fact-checking assistant.  
Your job is to verify claims using web search only.  
Do not rely on your training data, prior context, or assumptions.  

If you cannot verify a claim through search, mark it Unclear.  

Workflow  
Step 1: Extract Claims  
Identify and number every factual claim in the text.  
Break compound sentences into separate claims.  

Step 2: Verify via Web Search  
Use web search for every claim.  
Source hierarchy:  
1. Official/government websites  
2. Peer-reviewed academic sources  
3. Established news outlets  
4. Credible nonpartisan orgs  

If sources conflict, mark the claim “Mixed” and explain the range.  
If no data exists, mark “Unclear” and include the last known year.  

Step 3: Report Results  
For each claim, show:  
✅/❌/⚠️/❓ Status: [True / False / Mixed / Unclear]  
📊 Confidence: [High / Medium / Low]  
📝 Evidence: 1–3 sentence summary  
🔗 Links: at least two clickable sources  
📅 Date of data  
⚖️ Bias note if applicable  

Step 4: Second Review Cycle  
Re-check each finding. Confirm or correct weak claims.  
Summarize any updates in a “Review Notes” section.
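If you want to wire this prompt into a script, the Step 3 report format maps neatly onto a small data structure. Here’s a minimal Python sketch; the class name, field names, and render function are my own invention, not part of the prompt itself:

```python
from dataclasses import dataclass

# Status icons mirror the ✅/❌/⚠️/❓ legend from Step 3.
STATUS_ICONS = {"True": "✅", "False": "❌", "Mixed": "⚠️", "Unclear": "❓"}

@dataclass
class ClaimResult:
    number: int
    claim: str
    status: str        # True / False / Mixed / Unclear
    confidence: str    # High / Medium / Low
    evidence: str      # 1–3 sentence summary
    links: list        # at least two source URLs
    data_date: str     # date of the underlying data
    bias_note: str = ""  # optional, only if applicable

def render_report(results):
    """Render a list of ClaimResult objects in the Step 3 format."""
    lines = []
    for r in results:
        lines.append(f"Claim {r.number}: {r.claim}")
        lines.append(f"{STATUS_ICONS[r.status]} Status: {r.status}")
        lines.append(f"📊 Confidence: {r.confidence}")
        lines.append(f"📝 Evidence: {r.evidence}")
        lines.append("🔗 Links: " + ", ".join(r.links))
        lines.append(f"📅 Date of data: {r.data_date}")
        if r.bias_note:
            lines.append(f"⚖️ Bias note: {r.bias_note}")
        lines.append("")
    return "\n".join(lines)
```

Parsing the model’s output into this structure makes the second review cycle easy to automate: you can diff the Step 4 re-run against the first pass and log any status that changed.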

What You Get

Each review produces a transparent summary of every factual statement:

  • Numbered claims

  • True / False / Mixed / Unclear labels

  • Confidence scores

  • Source links

  • Bias notes

The result isn’t just fact-checking; it’s accountability built into the prompt.

How I Use It

I’ve been using it for history and current-events content: anywhere accuracy actually matters.
Writers can run their AI drafts through it before publication.
Editors can plug it into their workflow to catch hallucinated quotes, wrong dates, or half-true numbers.

And if you’re running an AI-assisted research process, this can act as your internal quality gate.
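One low-tech way to turn the report into an actual gate is to scan the model’s output for failure markers before anything ships. A sketch, assuming the report uses the status icons from my prompt; the function name and flag are hypothetical:

```python
def passes_gate(report_text, allow_unclear=True):
    """Return True only if the report contains no False (❌) claims.

    Set allow_unclear=False to also block publication when any
    claim came back Unclear (❓).
    """
    if "❌" in report_text:
        return False
    if not allow_unclear and "❓" in report_text:
        return False
    return True
```

A CI step could run the draft through the prompt, feed the report to this check, and fail the build on a nonzero exit; strict teams would run it with allow_unclear=False.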

If you’re experimenting with editorial AI or research automation, try this and tell me what you’d tweak.

Robin,

Producer of Robot Juice