It feels like any time scale on AGI is basically just made up. Since no one has any idea of how to get there, how could you possibly estimate how long it will take? We could stumble on some secret technique that unlocks AGI tomorrow or it could be literally impossible. You might as well ask how long until humans can cast magic spells.
It's not quite the same category as magic spells. Kurzweil's prediction has been 2029 for the last thirty years or so, based on Moore's-law-type reasoning. The logic, I think, is roughly: project the hardware improvements, which has worked well, and then add about five years to sort out the software. Time will tell on the second one.
The people most qualified to make an educated guess simultaneously have a direct financial incentive to claim that it is in reach within a few years. The only one who doesn't seem to care all that much is LeCun.
He's under a direct financial incentive to claim otherwise.
If AGI is reachable in 5 years with today's architectures, then why would anyone fund his pet research in novel AI architectures?
To have an alternative? So far the work on transformers was done in large part in the open (except at OpenAI, which tried to be as closed as possible). There is zero guarantee that whoever discovers the path to AGI will decide to publish the paper. It's just one of many reasons (a less important one, IMO) why research into novel AI approaches is valuable for humanity.
When you see these claims, it's important to frame the assertion in context: within the transformer generation, AGI is 10+ years away. This does not, however, account for the next architecture that will do more with less.
This is inconsistent with people like Hassabis or Sutskever giving time frames while also saying that LLMs won't get us to AGI.
We might start by trying to define what AGI exactly is. It's an elusive goal.
Once it can extrapolate instead of just interpolate from the training data?
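As a toy illustration of that interpolation/extrapolation distinction (a sketch only, not a claim about how LLMs work internally): a simple model fit on a bounded range can track the data closely inside that range and fall apart once asked about points outside it. The sine curve and the degree-9 polynomial here are arbitrary choices for illustration.

    # Toy illustration: a model fit on [0, 2*pi] interpolates well inside
    # that range and extrapolates badly outside it.
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 2 * np.pi, 200)
    y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.size)

    coeffs = np.polyfit(x_train, y_train, deg=9)     # fit a degree-9 polynomial

    x_in = np.linspace(0, 2 * np.pi, 100)            # inside the training range
    x_out = np.linspace(2 * np.pi, 4 * np.pi, 100)   # outside the training range

    err_in = np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)).mean()
    err_out = np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)).mean()
    print(f"mean error inside the range:  {err_in:.3f}")   # small
    print(f"mean error outside the range: {err_out:.3f}")  # blows up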
As every consultant will eventually respond as that conversation sputters: it might be easier for us to define what AGI isn’t.
That post didn't even define AGI, right?
The most interesting thing in this whole picture is not AGI, it's how collective intelligence works. CEOs claim AGI is near because that's how they manipulate the public. But the public knows that it's only manipulation. So how come the manipulation is still possible?
Sunk cost fallacy? What percentage of the public has actually invested the billions/trillions and is now demanding something to show for it? I'm not sure the average joe wants Copilot in their Outlook, but sure as hell someone wants it in there.
Because people almost always hedge their bets: basically, how likely something is vs. the costs vs. what everybody else is doing vs. how you are personally affected.
So in the case of current AI, there are several scenarios where you have to react to it. For example, as the CEO of a company that would benefit from AI, you need to demonstrate you are doing something or you get attacked for not doing enough.
As the CEO of an AI-producing company, you have almost no idea if the stuff you're working on will be the thing that, say, makes hallucination-free LLMs, allows cheap long-term context integration, or even "solves AGI". You have to pretend you are just about to do the latter, though.
Bots and paid trolls online pushing a narrative.
With a whole manual of rhetorical tactics.
Because "the public" isn't one person or even one cohesive group. Some see the manipulation, and point it out. Others don't see it, or ignore it when it's pointed out.
And why ignore it? Because they don't want to believe it's manipulation, because it promises large numbers of dollars, and they want to believe that those are real.
I keep reading of an "AI race", but like "AGI" the meaning is unclear, namely the definition of "win", "lose" or "tie".
For example, a "space race" might be "won" by the first participant to reach space, or to reach the moon.
Is it possible to have a "race" without a time limit, a finish line, or some way to determine the "winner"?
We don't even have an agreed measure for it, that's how far away we are.
Is there an RFC being developed for AGI?
This is slop right?
>This isn’t a minor gap; it’s a fundamental limitation.
>His timeline? At least a decade, probably much longer.
>What does that mean? Simply throwing more computing power and data at current models isn’t working anymore.
>His timeline for truly useful agents? About ten years.
It has the logical inconsistency of good LLM slop like:
"AGI is not possible"
combined with
"Does this mean AI progress will stall? Absolutely not."
Yup, clocked it in seconds. There's something especially perverse about reading AI slop waxing poetic about AI slop.
You can tell by the post history.
It's just like with the fake StackOverflow reputation and fake CodeProject articles in the past.
Same people at it again but super-charged.
It's not possible even in 10 years (.. but maybe in 11).
What a shift in the last 5 years (never -> 100 years -> 11)
ChatGPT has done too many things that "a computer can't do". The "AI effect" denial is strong, but it has its limits.
“Machines will be capable, within twenty years, of doing any work that a man can do.” - Herbert Simon, 1965
I think the best take on AGI is Edsger Dijkstra's:
> I am not interested in computers that have their own intelligence but I do want computers that increase my own intelligence.
I've read this multiple times and disagree.
If I had an AGI that designs me a safe, small and cheap fusion reactor, of course I would be interested in that.
My intelligence is intrinsically limited by my biology. The only way to really scale it up is to wire stuff into my brain, and I'd prefer an AGI over that every day.
I get the idea.
But if I can't understand how and why the fusion reactor is safe, small and cheap, I wouldn't consider it safe.
Very much the same way that I don't take Claude Code's changes to my code without understanding what it does.
Augmenting my intelligence is non-negotiable. I want to be in control.
It is the classic "trust but verify". And I need my own intelligence to verify.
Well said!
Serious question: do some of the large firms run LLMs with no guardrails? I'm guessing they are doing constant research not available to the public. What results have been found? I'm not necessarily saying AGI, but what happens when the systems are not hindered by humans?
Also, somewhat related: the model/system that the Google whistleblower reported on, LaMDA, was very interesting for the time, especially considering the transcript. What happens when the guardrails are disabled? Even if it wasn't sentient, its behavior might be reason for concern.
Ah great, more "when will we hit AGI" speculation, let's keep them coming. Some say 2 years, some say 5, some say never.
According to Clarke's First Law, "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."
With all due respect to Arthur C. Clarke, I think science education is the only thing standing between us and even bigger scams and absolute chaos in the streets.
What a scientist understands isn't as far ahead of what the layperson understands these days as it was when Clarke wrote that.
"AGI" was hijacked to mean something else and was turned into a scam.
What it "really means" is more mass layoffs to pay for AI infrastructure, which in turn powers so-called "AI agents", on track for a 10% increase in global unemployment in the next 5 years.
From the "benefit of humanity", to the wholesale destruction of knowledge workers, and now to the taxpayer, even if it costs another $10T to bail out the industry from the staggering cost of running all of it.
Once again, AGI is now nothing but a grift. The crash will be a spectacle for the ages.
It seems to me by most classical definitions we've basically already reached AGI.
If I were to show Gemini 3 Pro to anyone in tech 10 years ago, they would probably say Gemini 3 is an AGI, even if they acknowledged there were some limitations.
The definition has moved so much that I'm not convinced that, even if we see further breakthroughs over the next 10 years, people will say we've finally reached AGI, because even at that point there will probably still be 0.5% of tasks it struggles to compete with humans on. And we're going to have similar endless debates about ASI and the consciousness of AI.
I think all that really matters is the utility of AI systems broadly within society. While a self-driving car may not be an AGI, it will displace jobs and fundamentally change society.
The achievement of some technical definition of AGI, on the other hand, is probably not all that relevant. Even if the goalposts stop moving today and advancements are made such that we finally get 51% of experts agreeing that AGI has been reached, there could still be 49% of experts who argue that it hasn't. On the other hand, no one will be confused about whether their job has been replaced by an AI system.
I'm sorry - I know this is a bit of a meta comment. I do broadly agree with the article. I just struggle to see why anyone cares unless hitting that 51/49% threshold in opinion on AGI correlates to something tangible.
LLMs do NOT have intelligence. Achieving AGI would mean solving self-driving cars, replacing programmers, scientists, etc. All things LLMs are currently unable to do in a way that replaces humans.
There's a huge gap between what Gemini 3 can do and what AGI promises to do. It's not just a minor "technical definition".
> The architecture might just be wrong for AGI. LeCun’s been saying this for years: LLMs trained on text prediction are fundamentally limited. They’re mimicking human output without human experience.
Yes, and most with a background in linguistics or computer science have been saying the same since the inception of their disciplines. Grammars are sets of rules on symbols and any form of encoding is very restrictive. We haven't come up with anything better yet.
The tunnel vision on this topic is so strong that many don't even question language itself first. If we were truly approaching AGI anytime soon, wouldn't there be clearer milestones beforehand? Why must I peck this message out, and why must you scan it with your eyes only for it to become something else entirely once consumed? How is it that I had this message crystallized instantly in my mind, yet it took me several minutes of deliberate attention to serialize it into this form?
Clearly, we have an efficiency problem to attack first.
>Yes, and most with a background in linguistics or computer science have been saying the same since the inception of their disciplines
I'm not sure what authority linguists are supposed to have here. They have gotten approximately nowhere in the last 50 years. "Every time I fire a linguist, the performance of the speech recognizer goes up".
>Grammars are sets of rules on symbols and any form of encoding is very restrictive
But these rules can be arbitrarily complex. Hand-coded rules have pretty severe complexity bounds, but LLMs show these are not in-principle limitations. I'm not saying theory has nothing to add, but perhaps we should consider the track record when placing our bets.
I'm very confused by your comment, but appreciate that you have precisely made my point. There are no "bets" with regard to these topics. How do you think a computer works? Do you seriously believe LLMs somehow escape the limitations of the machines they run on?
And what are the limitations of the machines they run on?
We're yet to find any process at all that can't be computed with a Turing machine.
Why do you expect that "intelligence" is a sudden outlier? Do you have an actual reason to expect that?
Is everything really just computation? Gravity is (or can be) the result of a Turing machine churning away somewhere?
>We're yet to find any process at all that can't be computed with a Turing machine.
Life. Consciousness. A soul. Imagination. Reflection. Emotions.
What do you think these in principle limitations are that preclude a computer running the right program from reaching general intelligence?
Why would language restrict LLMs?
"Language" is an input/output interface. It doesn't define the internals that produce those inputs and outputs. And between those inputs and outputs sits a massive computational process that doesn't operate on symbols or words internally.
And, what "clearer milestones" do you want exactly?
To me, LLMs crushing NLU and CSR was the milestone. It was the "oh fuck" moment, the clear signal that old bets are off and AGI timelines are now compressed.
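To make the "interface" point above concrete, here is a toy sketch (made-up vocabulary and dimensions, a single stand-in weight matrix in place of real attention/MLP layers) of how symbols appear only at the boundary of a language model, while everything in between is arithmetic on dense vectors:

    # Toy sketch: words exist only at the edges; the middle is linear algebra.
    import numpy as np

    rng = np.random.default_rng(0)
    vocab = {"the": 0, "cat": 1, "sat": 2}        # tiny made-up vocabulary
    d_model = 8                                   # made-up hidden size

    embeddings = rng.normal(size=(len(vocab), d_model))  # learned lookup table
    W = rng.normal(size=(d_model, d_model))              # stand-in for attention/MLP weights

    tokens = [vocab[w] for w in "the cat sat".split()]   # symbols at the input boundary
    hidden = embeddings[tokens]                          # now continuous vectors
    hidden = np.tanh(hidden @ W)                         # internal computation: no words here
    logits = hidden @ embeddings.T                       # back to the vocabulary at the output
    print(logits.shape)  # (3, 3): one score per vocab word, per position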
Language massively restricts LLMs because there's no way to create novel concepts while limited to existing language.
Humans create new words and grammatical constructs all the time in the process of building/discovering new things. This is true even in math, where new operators are created to express new operations. Are LLMs even capable of this kind of novelty?
There's also the problem that parts of human experience are inexpressible in language. A very basic example is navigating 3D space. This is not something that had to be explained to you as a baby; your brain just learned how to do it. But this problem goes deeper. For instance, intuition about the motion of objects in space. Even before Newton described gravitation, every 3-year-old still knew that an object that's dropped would fall to the ground a certain way. Formalizing this basic intuition using language took thousands of years of human development and spurred the creation of calculus. An AI does not have these fundamental intuitions nor any way to obtain them. Its conception of the world is only as good as the models and language (both mathematical and spoken) we have to express it.
> Its conception of the world is only as good as the models and language (both mathematical and spoken) we have to express it.
Which is pretty damn good, all things considered.
And sure, training set text doesn't contain everything - but modern AIs aren't limited to just the training set text. Even in training stage, things like multimodal inputs and RLVR have joined the fray.
I don't think "create novel concepts" is a real limitation at all. Nothing prevents an AI from inventing new notations. GPT-4o would often do that when talking to AI psychosis victims.
Language is an interface between whatever our thoughts actually are and the outside world.
Imagine trying to write apps without thinking about the limitations of the APIs you use. In fact we just recently escaped that same stupidity in the SaaS era! That's how silly LLMs will seem in the near future. They will stick around as the smarter chatbots we've wanted for so long, but they are so very far away from AGI.
And? Even if I believed this to be a limitation, I could bolt an adapter to an LLM to make it input and output non-text data.
That's how a lot of bleeding edge multimodals work already. They can take and emit images, sound, actions and more.
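As a rough sketch of what "bolting on an adapter" can look like: many multimodal setups keep a frozen image encoder and train a small projection that maps its patch features into the LLM's token-embedding space, so image patches get consumed as if they were ordinary tokens. The class name, dimensions, and MLP shape below are illustrative assumptions, not any particular model's actual architecture:

    # Sketch of a projection adapter from vision features to LLM embeddings.
    import torch
    import torch.nn as nn

    class VisionAdapter(nn.Module):
        def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
            super().__init__()
            # A small MLP is a common choice for the projection.
            self.proj = nn.Sequential(
                nn.Linear(vision_dim, llm_dim),
                nn.GELU(),
                nn.Linear(llm_dim, llm_dim),
            )

        def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
            # patch_features: (batch, num_patches, vision_dim) from a frozen
            # image encoder; the output lives in the LLM's embedding space and
            # gets concatenated with the text token embeddings.
            return self.proj(patch_features)

    # Toy usage: 1 image, 256 patches of 1024-dim features -> 256 pseudo-tokens.
    adapter = VisionAdapter()
    image_tokens = adapter(torch.randn(1, 256, 1024))
    print(image_tokens.shape)  # torch.Size([1, 256, 4096])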
[flagged]
Could you please make your substantive points without breaking the site guidelines? They include:
"When disagreeing, please reply to the argument instead of calling names. 'That is idiotic; 1 + 1 is 2, not 3' can be shortened to '1 + 1 is 2, not 3."
https://news.ycombinator.com/newsguidelines.html
> It's really hard to argue that LLM's don't have intelligence in the way the earthworms do.
It's really easy to argue, actually. LLMs have intelligence the way humans online do. An earthworm is highly specialized for what it does and exists in a completely different context - I doubt an LLM would be successful guiding a robotic earthworm around since all it knows about earthworms is what researchers have observed and documented. The actual second-to-second experience of being an earthworm is not accessible as training data to an LLM.
Edit:
This is true almost by definition. An LLM (Large Language Model) can't have intelligence that's not expressible in language and earthworms are notoriously shy in interviews.
[flagged]
AGI has already happened.
Grok 4 and Gemini 3 Pro, the top models, are around the 125-130 IQ range. They are rapidly moving towards ASI.
Can they please do my laundry then?
Doing your laundry is a sign of general intelligence? If someone is a quadriplegic and can't do their own laundry, does that mean they aren't intelligent?
My point is that your example is a one trick pony. An impressive and useful one trick pony, granted.
AGI is currently undefined, so any argument about it is meaningless, unless it's in aid of developing a definition.
An AI that knows how to do laundry, but is unable to perform said task is useless without the ability. But is it AGI with just the knowledge?
What a day to be alive, when people are arguing over whether AGI will be possible in the next 10 years or 20 years.
It's plausible that in 2026 we are going to send fully autonomous humanoid robots to Mars, so AGI is right around the corner.
https://youtu.be/TsNc4nEX3c4?t=5
That's hilarious.
You can tell Elon doesn't even believe it's that close to pull off that little stunt. Fucking with his investors. Hilarious.
This is satire, right? As in, you are being ironic right now?
They must be!
There's not enough Kool aid in the world...
This has been an argument from the '60s onwards.
The paid models are already smarter than the vast majority of people.
The majority of people think they are better-than-average drivers.
Surely those models are not smarter than _you_, right?
Half of all drivers are better than the average driver. Half of all people have an IQ lower than 100.
If this is true then almost all of the jobs in the world would already be replaced by AI. Sure, the model might be better than most people at lots of things but there are still tasks that human children find easy and AI struggles with. I wouldn't call being better at something than humans being "smarter" than them; if you do, calculators must be smarter than humans since they have been better at adding up numbers than us for a long time.
For text and image-based tasks they are infinitely better than a human.
What they lack are arms to interact with the physical world, but once that is solved it will be a giant leap forward (example: they will obviously be able to do experiments to discover new molecules by translating their step-by-step reasoning into physical actions, to build more optimized cars, etc.).
For now a human is smarter in some real-world or edge cases (e.g. a super-specialist in a specific science), but for any scientific task an average human is very, very weak compared to the LLMs.
There are forms of science that don't involve "arms". Why don't we see a single research paper involving research entirely undertaken by AI? AI development and research itself doesn't need "arms". Why don't we just put AI in a box and let it infinitely improve itself? Why doesn't every company that employs someone who just uses a computer replace them with AI? Why are there no businesses entirely run by AIs that just tell humans what to do? Why don't the AIs just use CAD and electronic simulation to design themselves some "arms"? Why can't AI even beat basic video games that children can beat?
The gap is huge.
There is a lot of learning involved in getting to be able run experiments in some areas.
What they also don't have is agency to just decide to quit, for example.
>>For text and image-based tasks they are infinitely better than a human.
Sometimes. When the stars align and you roll the dice the right way. I'm currently using ChatGPT 5.1 to put together a list of meals for the upcoming week. It comes up with a list (a very good one!), then it asks if I want a list of ingredients, I say yes, and the ingredients are complete bollocks. Like, it adds things which are not in any recipe. I ask about it, it says "sorry, my mistake, here's the list fixed now" and it just removed that thing but added something else. I ask why is that there, and I shit you not, it replied with "I added it out of habit" - like what habit, what an idiotic thing to say. It took me 3 more attempts to get a list that was actually somewhat correct, although it got the quantities wrong. "Infinitely better than a human at text-based tasks" my ass.
I would honestly trust a 12 year old child to do this over this thing I'm supposedly paying £18.99/month for. And the company is valued at half a trillion dollars. I honestly wonder if I'm the bigger clown or if they are.
At analyzing and reproducing language (words, code, etc.), sure, because at their core they are still statistical models of language. But there seems to be a growing consensus that intelligence requires modeling more than words.
But they don't have agency and who would trust them unattended anyway (at their current capabilities)?
I could say the same for many people that I know.
Not sure why you are being downvoted. Seems like a lot of people with high egos are not ready to accept the truth that a human has way less knowledge than a world encyclopedia with infinite and practically perfect memory.
Knowledge != Intelligence
Otherwise researching intelligence in animals would be a completely futile pursuit since they have no way of "knowing" facts communicated in human language.
>>Seems like a lot of people with high egos are not ready to accept the truth that a human has way less knowledge than a world encyclopedia.
Well, thank you for editing your own comment and adding that last bit, because it really is the crux of the issue and the reason why OP is being downvoted.
Having all of the world's knowledge is not the same as being smart.