Hallucination
When a model generates fluent, plausible-sounding text that is factually wrong or unsupported by its input.
Hallucinations are the failure mode that defines the gap between LLM demos and production systems. They are not bugs to be fixed once — they are statistical behaviors to be measured continuously and constrained by design. The standard mitigations are: ground answers in retrieved context (RAG), require citations, run an LLM-as-judge over outputs, constrain output schemas, and lower temperature. Eliminating hallucinations entirely is not currently possible; bounding their cost is.
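One of the mitigations above, grounding answers in retrieved context, implies a check before an answer ships: does the answer actually appear in what was retrieved? Below is a minimal sketch of that gate. The names (`grounding_score`, `accept`, the threshold of 0.6) are illustrative assumptions, and in production the judge would typically be an LLM rather than this token-overlap heuristic, but the shape of the check is the same.

```python
def grounding_score(answer: str, context: str) -> float:
    """Fraction of answer tokens that also appear in the retrieved context.

    A crude stand-in for an LLM-as-judge: high overlap suggests the answer
    is drawn from the context; low overlap flags a possible hallucination.
    """
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)


def accept(answer: str, context: str, threshold: float = 0.6) -> bool:
    """Gate the answer: only surface it if it is sufficiently grounded."""
    return grounding_score(answer, context) >= threshold
```

A fully supported answer scores 1.0 and passes; an answer with no overlap scores 0.0 and is rejected. The threshold is the knob that bounds the cost of hallucinations rather than eliminating them.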