Can you explain to me which step in the current training process of LLMs is specifically designed to enhance factuality? If you want to mention accuracy, then let me just say that accuracy in a next-word prediction model measures the model's ability to correctly predict the next word, not its ability to produce factual sentences. That is one of the reasons why LLMs hallucinate and make up facts. Factuality is only there because of the input data and memorization. If an LLM is trained on a dataset containing the same number of the sentences "The earth is flat" and "The earth is round", it would be equally likely to predict "flat" or "round" when prompted to complete the sentence "The earth is...", even though the scientific consensus is that the earth is not flat.
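To make that concrete, here is a minimal sketch (not a real LLM training pipeline, just a toy count-based next-word model over an invented two-sentence corpus) showing that equal frequencies in the training data lead to roughly equal next-word probabilities:

```python
from collections import Counter, defaultdict

# Toy corpus: both sentences appear the same number of times.
corpus = ["the earth is flat"] * 100 + ["the earth is round"] * 100

# Count-based "next word" model: for each context (the words so far),
# count which word follows it. A stand-in for learning from raw
# frequencies, not an actual neural language model.
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(1, len(words)):
        context = tuple(words[:i])
        next_word_counts[context][words[i]] += 1

def next_word_probs(context):
    """Relative frequency of each possible next word after `context`."""
    counts = next_word_counts[tuple(context)]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

# Prompted with "the earth is", the model is indifferent between the two.
print(next_word_probs(["the", "earth", "is"]))
# -> {'flat': 0.5, 'round': 0.5}
```

Nothing in that objective rewards the factual completion over the non-factual one; only the data frequencies matter.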
I literally wrote this in the article: “The problem is that we expect AGI to be human-like but without the flaws of humans.” The point of the article is to highlight the contradictions that emerge once we try to anthropomorphize AGI, namely that we want something that is human-like but without the limitations inherent in being human-like.