
Hallucination

When AI models generate plausible-sounding but factually incorrect information.

Full Definition

Hallucination in AI refers to instances where language models generate content that sounds confident and plausible but is factually incorrect, fabricated, or unsupported by evidence. This is one of the most significant challenges in deploying LLM-powered applications, as hallucinated content can mislead users and damage trust.

Hallucinations occur because LLMs are fundamentally pattern-matching systems trained to produce likely text continuations, not truth-verification systems. They can invent citations, fabricate facts, and confidently provide wrong answers, especially for topics outside their training data or those requiring precise factual recall.
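
To make the distinction concrete, here is a toy sketch (vocabulary and scores invented for illustration) of next-token sampling: the model picks whatever continuation is probable under its learned distribution, and nothing in that step checks the claim against reality.

```python
import numpy as np

# Toy illustration: the model scores each candidate next token, turns the
# scores into probabilities (softmax), and samples. Nothing here verifies
# whether the chosen continuation is true -- only whether it is likely.
# The vocabulary and logits are made up for this example.
vocab = ["Paris", "Lyon", "Berlin", "Madrid"]
logits = np.array([3.1, 1.2, 0.4, 0.2])  # scores for "The capital of France is ..."

probs = np.exp(logits) / np.exp(logits).sum()  # softmax -> probabilities
next_token = np.random.choice(vocab, p=probs)  # sample a likely continuation

print(dict(zip(vocab, probs.round(3))), "->", next_token)
```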

Mitigation strategies include retrieval-augmented generation (RAG), which grounds responses in retrieved documents; asking models to cite their sources; using multiple models for cross-verification; implementing confidence scoring; adding human review for critical applications; and prompt engineering that encourages models to acknowledge uncertainty. Engineers must design systems that minimize the impact of hallucinations and set appropriate user expectations.
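
As a rough illustration of the RAG and uncertainty-acknowledgment strategies, the sketch below assembles a grounded prompt. The `call_llm` callable, the `retrieved_docs` variable, and the exact prompt wording are hypothetical placeholders, not a specific library's API.

```python
from typing import Callable, List

def grounded_answer(question: str, documents: List[str],
                    call_llm: Callable[[str], str]) -> str:
    # Number the retrieved passages so the model can cite them explicitly.
    context = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(documents))
    prompt = (
        "Answer the question using ONLY the numbered passages below. "
        "Cite the passage number for every claim, e.g. [2]. "
        "If the passages do not contain the answer, reply exactly: "
        "'I don't know based on the provided documents.'\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return call_llm(prompt)

# Example usage (call_llm would wrap whatever model client you use):
# answer = grounded_answer("When was the product launched?", retrieved_docs, call_llm)
```

Cross-verification and confidence scoring can be layered on top of this, for example by asking a second model whether each cited claim is actually supported by the quoted passage.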

