Impressed? The average employee is more worried about their job security than about the latest model release. The execs on their yachts only see the profit bar charts their model spits out.
We're the agents, not the machines. We emote and sense our own survival; those aren't simply words. It took A.I. to remind us of this.
I mean, it's useful for some things, mainly as a complement to Stack Overflow or Google.
But the hallucination problem is pretty bad; I've had it recommend books that don't actually exist, etc.
When using it for studying languages, I've seen it make silly mistakes and then get stuck in the typical "You're absolutely right!" loop. The same happens when I've asked it how to do something with a particular Python library that turns out not to be possible with that library.
But the LLM seems unable to just tell me it's not possible, so instead it goes round and round in loops generating code that doesn't work.
So yeah, it has some uses, but it feels a long way off from the revolutionary panacea they're selling it as, and issues like hallucination are so innate to how LLMs function that it may not be possible to solve them.
Is that like:
"I'm high on my own supply, so why isn't everyone else?"
Turns out execs at Microsoft aren’t shoving AI down everyone’s throats because they’re evil and greedy, but because they’re detached from reality, ignorant of what people want, conceited, and greedy.
Reminded me of Sam Altman, who recently lamented that conversations on the web are filled with bots. Who could’ve predicted that?!