
Hallucination

Level: beginner
When an AI generates information that sounds plausible but is factually incorrect. LLMs do not "know" facts; they predict likely word sequences, which can produce confident-sounding but completely wrong answers. Always verify AI-generated facts against reliable sources.
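To make the mechanism concrete, here is a minimal sketch: a toy bigram model over an invented three-sentence corpus, not how any production LLM works. The generator only tracks which word tends to follow which, so when wrong statements dominate its training text it repeats them just as fluently as true ones.

```python
from collections import Counter, defaultdict

# Invented toy corpus: two of the three sentences are wrong on purpose.
corpus = (
    "the eiffel tower is in paris . "
    "the eiffel tower is in berlin . "
    "the eiffel tower is in berlin ."
)

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

# Greedy generation: always emit the statistically likeliest next word.
# There is no truth check anywhere in this loop.
word, output = "the", ["the"]
for _ in range(6):
    word = follows[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # -> "the eiffel tower is in berlin ." (fluent, wrong)
```

Nothing in next-word prediction rewards being right, only being statistically likely, which is why the definition above tells you to verify facts independently.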
Related Terms
Large Language Model (LLM)