How can you tell (or at least guess) whether an LLM is hallucinating? Semantic entropy may be an answer (besides cross-checking with external sources).
“Our method works by sampling several possible answers to each question and clustering them algorithmically into answers that have similar meanings, which we determine on the basis of whether answers in the same cluster entail each other bidirectionally. That is, if sentence A entails that sentence B is true and vice versa, then we consider them to be in the same semantic cluster.”
https://www.nature.com/articles/s41586-024-07421-0
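To make the recipe concrete, here is a minimal Python sketch of that pipeline: sample several answers, cluster them by bidirectional entailment, then take the entropy of the cluster distribution. The NLI model choice (microsoft/deberta-large-mnli), the `entails` helper, the greedy clustering against each cluster's first member, and the sample answers are all illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of semantic-entropy estimation over sampled answers.
import math
from transformers import pipeline

# Off-the-shelf NLI model (an illustrative choice, not the paper's exact setup).
nli = pipeline("text-classification", model="microsoft/deberta-large-mnli")

def entails(premise: str, hypothesis: str) -> bool:
    """True if the NLI model predicts that `premise` entails `hypothesis`."""
    out = nli({"text": premise, "text_pair": hypothesis})
    if isinstance(out, list):  # some transformers versions wrap the result
        out = out[0]
    return "entail" in out["label"].lower()

def semantic_entropy(answers: list[str]) -> float:
    """Cluster answers by bidirectional entailment, then return the
    entropy of the empirical distribution over clusters."""
    clusters: list[list[str]] = []
    for ans in answers:
        for cluster in clusters:
            rep = cluster[0]  # compare against the cluster's first member
            if entails(rep, ans) and entails(ans, rep):
                cluster.append(ans)
                break
        else:  # entailment holds in neither or only one direction: new cluster
            clusters.append([ans])
    n = len(answers)
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)

# Hypothetical samples for one question: three agree in meaning, one diverges.
samples = [
    "Paris is the capital of France.",
    "The capital of France is Paris.",
    "France's capital city is Paris.",
    "Lyon is the capital of France.",
]
print(f"semantic entropy: {semantic_entropy(samples):.3f}")
```

If all sampled answers land in one cluster, the entropy is zero; the more the samples disagree in meaning, the higher it climbs, and a high value flags a likely confabulation.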
Have a good weekend,