FS Ndzomga
Oct 13, 2023

--

Fair point. Study after study has shown that self-consistency helps LLMs produce more sound answers, and I think adding a rigorous logic layer on top makes it even better. But you are right that it won't eliminate hallucinations altogether.
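
For context, self-consistency here means roughly what the sketch below does: sample the same prompt several times and keep the majority answer. This is only a minimal illustration; the `ask_llm` stub is hypothetical, a stand-in for whatever LLM client you actually use sampled at a temperature above zero.

```python
import random
from collections import Counter

def ask_llm(prompt: str) -> str:
    """Stand-in for a real LLM call sampled at temperature > 0.

    Hypothetical: replace with your own client (OpenAI, Anthropic,
    a local model, ...). Here it just returns a noisy toy answer.
    """
    return random.choice(["paris", "paris", "paris", "lyon"])

def self_consistent_answer(prompt: str, n_samples: int = 5) -> str:
    """Sample the model several times and keep the majority answer.

    This is the core of self-consistency: individual samples may drift,
    but the most frequent answer across samples tends to be the sound one.
    """
    answers = [ask_llm(prompt).strip().lower() for _ in range(n_samples)]
    answer, _count = Counter(answers).most_common(1)[0]
    return answer

print(self_consistent_answer("What is the capital of France?"))
```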

You mentioned that the best solution would be to build an evaluator that can check LLM answers without hallucinating. I don't think that is feasible for now. Such an evaluator would need natural language understanding, but language is ambiguous and facts are not as rigid as in mathematics, so any evaluation system will still hallucinate depending on how it interprets the request and on the competing opinions in its knowledge base. That's why I recently concluded that there will be no AGI: https://medium.com/p/d9be9af4428d

