The essence of his argument, as I understand it, is that LLMs don't learn through experience what the world is like, but instead get trained to emulate what humans say the world is like.
It seems similar to what Yann LeCun argues as well. I wonder what Sam Altman, Ilya Sutskever, and other major LLM creators mean when they say they aim to create AGI. Do they also acknowledge that the current architecture isn't sufficient, or do they think scaling will be enough? Or do they not say at all?