Oct 28, 2023
Some recent findings:
1/ LLMs trained on "A is B" cannot infer that this implies "B is A" (a quick probe of this is sketched after the list).
2/ LLMs are not good at self-reflection.
3/ Have you used agents extensively? If you have, you will have quickly realized how inefficient they are and how unpredictable their failure modes are.
4/ Something better is coming; I have no doubt about that. But it will not come from current autoregressive models.
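To illustrate point 1, here is a minimal sketch of how one might probe the failure, assuming the Hugging Face transformers library and a small causal LM; the model choice ("gpt2") and the example fact are placeholders of mine, not from the findings above.

```python
# Minimal sketch: query a causal LM with a fact stated in both directions
# and compare the completions. Assumes the Hugging Face `transformers`
# library; the model ("gpt2") and the fact used are illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

forward = "Tom Cruise's mother is"      # direction likely seen in training ("A is B")
reverse = "Mary Lee Pfeiffer's son is"  # reversed direction ("B is A")

for prompt in (forward, reverse):
    out = generator(prompt, max_new_tokens=10, do_sample=False)[0]["generated_text"]
    print(out)

# If the reversal failure holds, the forward prompt is completed far more
# reliably than the reverse one, even though both encode the same fact.
```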