AI models lie. Not on purpose — they don't have intentions — but they will confidently tell you things that are completely wrong. This is called hallucination, and understanding why it happens makes you a much better AI user.
It's not a search engine
This is the key thing most people get wrong. AI models don't retrieve facts from a database. They predict what text should come next based on patterns learned during training.
When you ask “what year was X founded?” the model doesn't look it up. It generates the most statistically likely answer based on its training data. Usually that's correct. Sometimes it's not, and it has no way to tell the difference.
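You can see the mechanics in miniature with a toy bigram model. This is a deliberately simplified sketch, not how real LLMs work at scale (they use neural networks over billions of parameters, not word counts), and the company name and dates are made up for illustration. But the failure mode is the same: the model emits the statistically most common continuation, with no concept of whether it's true.

```python
from collections import Counter, defaultdict

# Hypothetical training data: the founding year appears inconsistently.
# Suppose 1985 is the true answer, but 1990 appears more often.
training_text = (
    "acme corp was founded in 1990 . "
    "acme corp was founded in 1990 . "
    "acme corp was founded in 1985 ."
).split()

# Count which word follows which -- the "patterns learned during training."
next_words = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    next_words[prev][nxt] += 1

def predict(word):
    # No database lookup, no truth check: just the most frequent continuation.
    return next_words[word].most_common(1)[0][0]

print(predict("in"))  # prints "1990" -- it wins 2-to-1, true or not
```

The model isn't "lying" about 1985; it literally has no representation of the fact, only of which tokens tend to follow which.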
When hallucinations are most likely
Obscure facts. The more niche the topic, the less training data the model saw, and the more it fills gaps with confident-sounding guesses.
Specific numbers. Dates, statistics, citations, prices — these are high-risk. Models often get the general shape right but the specifics wrong.
Recent events. Models have a training cutoff. Ask about something after that date and it genuinely doesn't know — but it might not say so.
Made-up citations. This is the classic one. Ask an AI to cite sources and it will invent plausible-sounding journal articles, papers, and books that don't exist. Confidently.
When they're less likely
Models are generally reliable for well-documented topics that were heavily covered in their training data — famous historical events, established scientific concepts, widely used programming languages.
They're also reliable when the task is reasoning through something you've provided, rather than recalling facts. Summarizing a document you pasted in, analyzing data in the prompt, or explaining code — these tasks involve less guessing.
How to actually catch it
Ask the model to explain its reasoning. If it struggles or gets vague, that's a signal.
For important facts, ask it to identify what it's certain about versus what it's inferring. Good models will tell you when they're unsure — if you ask directly.
And always verify specific claims — numbers, dates, names, citations — before using them anywhere that matters.
Are some models worse than others?
Yes, meaningfully so. Models with stronger reasoning tend to hallucinate less because they're better at recognizing the limits of what they know.
Claude in particular has a reputation for being more willing to say “I'm not sure” rather than guessing. GPT models tend to be more confident, which is great when they're right and a problem when they're not.
To compare reliability and reasoning quality across models, see the LLM rankings.
The bottom line
Hallucination isn't going away. It's a fundamental property of how these models work, not a bug that will be patched.
The right move is to understand when you're in high-risk territory — obscure facts, specific numbers, anything that requires being exactly right — and verify those things independently.
Use AI for what it's actually good at: reasoning, drafting, summarizing, explaining. Treat it like a smart colleague who sometimes confidently misremembers things, not like an encyclopedia.