LLMs don't just hallucinate—they hallucinate confidently. I tested this by asking models to summarize a paper that doesn't exist. A single model produced a polished summary anyway. A multi-model panel challenged the premise and flagged it.
Why Multi-Model Consensus Catches Hallucinations That Single Models Miss
How applying the Delphi method to AI caught a hallucination that a single model confidently missed
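To make the panel idea concrete, here is a minimal sketch of a Delphi-style check in Python. The `ask_model` helper is a hypothetical stub for whatever LLM client you use, and the two-round structure is a simplification for illustration, not the article's exact pipeline: each model answers independently, then reviews the anonymized answers and votes on whether the premise itself is sound.

```python
from collections import Counter

# Hypothetical helper: wire this to whatever LLM client you actually use
# (OpenAI, Anthropic, a local model, ...). It is a placeholder, not a real API.
def ask_model(model: str, prompt: str) -> str:
    raise NotImplementedError("plug in your own LLM client here")

def delphi_panel(models: list[str], task: str) -> dict:
    """Delphi-style check: independent answers first, then anonymous cross-review."""
    # Round 1: every panelist answers without seeing anyone else's output.
    answers = {m: ask_model(m, task) for m in models}

    # Round 2: each panelist reviews the anonymized answers and votes on the premise.
    digest = "\n---\n".join(answers.values())
    votes = {}
    for m in models:
        verdict = ask_model(
            m,
            "Several models answered the same request independently:\n"
            f"{digest}\n\n"
            "Does the request rest on a real, verifiable source, or does it ask you "
            "to summarize something that may not exist? Reply VALID or SUSPECT.",
        )
        votes[m] = "SUSPECT" if "SUSPECT" in verdict.upper() else "VALID"

    # A majority of SUSPECT votes blocks the confident single-model answer.
    flagged = Counter(votes.values())["SUSPECT"] >= len(models) / 2
    return {"answers": answers, "votes": votes, "flagged": flagged}
```

The point of the second round is that dissent is cheap: any panelist that questions the premise surfaces the doubt, whereas a single model answering alone has no opportunity to be challenged.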