I made a comment a long time ago explaining this.
Tech enthusiasts will judge AI based on what it gets right; we're interested in what it “can” do. Everyone else will judge AI based on where it fails; they are interested in what “problems” it “does” solve.
> a fluent conversation with a super smart AI that can generate any image/video is mindblowing to me.
They see: computer software that is generally unreliable and unable to accomplish basic tasks
> computer software that is generally unreliable and unable to accomplish basic tasks
Yeah, specifically to your quote: it's very easy to create some images and video. It's very hard to create exactly what you need if you have specific needs.
It's almost as if content creation is hard! Well, that's because it is. You need to know the client, understand their needs, make the content in line with their other visual language, etc.
What AI makes easier is for things to look professional. But a real professional doesn't just make it look good; they also make it what you need.
Where AI comes in is as a helper, and for those situations where "good enough" suffices. And there are many of those situations. Many of which would not have had the budget for a real pro to do it anyway.
> Where AI comes in is as a helper
This is where things stop translating well to the real world.
Imagine a pocket calculator:
10 + 33 = 44
Clearly incorrect. Then someone tells you:
“this one is different, it ‘helps’ you: 44 is in the ballpark. Getting the actual answer is now the real work”
"it's only hitting 44, but they're investing heavily in the product and soon it will be able to hit 43!
give us 100 billion dollars"
I'd dispute this, as I count myself a tech enthusiast, but I'm an enthusiast for tech which works well. I increasingly find myself having to put up with stuff that doesn't work well, and this investment in AI, instead of in fixing the stuff Windows routinely does to make my working day harder, is infuriating.
Also, in my experience, it's the non-tech-enthusiasts who are diving into LLMs because they don't understand what is actually going on and it basically looks like a repeat of the whole thing about ELIZA a few decades ago. Just this time it's vastly more expensive and has to run on a datacentre and can write you an essay instead of just rephrasing your question.
People unsurprised by tech CEO being completely disconnected from the realities of 90% of the population
Personally I am unimpressed by the quality of tech CEOs, although unsurprised.
Windows has basically gone from:
a. Click on a directory in my File Explorer and it opens immediately, it always shows the correct headers, sorting on any column is nearly instant (up until somewhere in XP probably)
b. Where I am now, in Windows 10, sorting can take forever, and because I haven't re-installed in ages it refuses to remember folder views and will constantly change them to whatever it wants
c. In the future saying
- "Winny, open folder ABC and sort it by DEF please"
- "Folder ABC deleted, except for def.txt"
- "NO, I said open it, not delete it! Get it back!"
- "Error, folder isn't recoverable"
I am impressed by AI. I just don't want to use it. "Look at how realistic this extruded text looks from a distance!" is definitely an achievement. It just doesn't add to my life in any useful way.
The real question for me is how often I want to generate an image or video, or have a conversation with a computer. For me, really quite rarely.
And no, replacing your customer support with a chatbot does not make it better. Just make a damn website with everything I need. A lot fewer errors, and a lot simpler for me to do what I want.
>It just doesn't add to my life in any useful way.
Most products don't add value to our lives, imo. They are the means by which we get money flowing, which is needed to keep the economy alive. Some might argue that they actually subtract from it, hence the need for dopaminergic products. The question for the tech CEOs is how to make LLMs reliably dopaminergic in the way Instagram/TikTok and the like are.
I've come to be unimpressed by this "money flow" hypothesis to the economy. Kinda sounds like retarded bullshit an accountant monkey came up with.
Have you tried using it as a knowledge retrieval mechanism, i.e., in lieu of Google? Because if not, you really should.
I would invite you to read the other comments on this post by people who have tried to use it for that, and found that it makes regular fundamental errors.
Most of the time you're better off reading a few responses to a given question (on, say, Stack Overflow) and synthesizing your own understanding out of them, rather than taking one that an AI has synthesized for you.
I feel that calling it AI is a big part of the problem.
An LLM parses and generates language exceedingly well. I use LLMs daily now and they are a boon for certain tasks.
An LLM is not an all-knowing oracle. It doesn't know anything. People who treat the language generator as an authority on anything are fools.
Who is getting regular fundamental errors? I don't get any.
Do you verify regularly against real documentation/outside sources/subject authorities/question the output? I do, and regularly get wrong information from premier LLMs. I still use it for information retrieval because interrogating large corpora of text and double checking key information can still be faster, but I'm not fully convinced it's long term beneficial for my intellect or knowledge retention.
I need a way to block specific people on HN.
> The fact that people are unimpressed that we can have a fluent conversation with a super smart AI that can generate any image/video is mindblowing to me.
The image and video generation capabilities are the most unimpressive part of AI! It's the LLMs that are the most impressive. Those might, just might, even make some sense in an OS, since plenty of people are happy to outsource a quick email or script to AI. Hell, what if your OS had a built-in AI to troubleshoot bugs for you? That might even conceivably be an improvement.
> Over in the comments, some users pushed back on the CEO's use of the word "unimpressed," arguing that it's not the technology itself that fails to impress them, but rather Microsoft's tendency to put AI into everything just to appease shareholders instead of focusing on the issues that most users actually care about, like making Windows' UI more user-friendly similar to how it was in Windows 7, fixing security problems, and taking user privacy more seriously.
I'm sure adding AI to Windows would make privacy problems even worse. Not to mention agentic AI could create a whole new class of security problems if not implemented carefully.
Most CEOs nowadays should be replaced ASAP.
VR is just as impressive. Doesn't mean I want my OS to be VR-only going forward.
VR actually helps me get more exercise (Beat Saber), so I'd definitely call it more impressive :P
I am still puzzled by how you can be a human being in this world and not be impressed by people.
The more we study AI, the more we discover one fundamental truth above all the others: people are really, really, really, really impressive.
A human being still absolutely melts an LLM like a Salvador Dali clock if the challenge is to innovate. In the race between human potential and LLM potential it isn't even close. Not a little close. Not remotely close. About as close as we are to each other compared to how close we each are to the sun.
And yet somehow there's always some fucking moron who apparently never considered this for a fraction of a second and runs around screaming that a fucking program will replace a fucking person
Yes people are amazing but they are also tedious, annoying, frustrating, unpredictable and undependable. Finding someone that will flow and synergize with you on a project is like winning the lottery. People love to talk the talk but when it’s time to walk the walk the reality and disappointment sets in. Work with enough people and this painful reality sets in very quickly.
However I don’t want an agentic or “A.I.” operating system.
It's not my problem that it's still challenging to find and work with people. I'm saying that's where the most valuable rewards are to be had. Unless I was mistaken and the goal is to do what is easy ...
>Unless I was mistaken and the goal is to do what is easy
I don't know about you but my goal is not to work hard. My goal is maximize results while minimizing effort. You know, work smarter not harder.
https://youtu.be/xO0yuf-ToAk
^ My meme comment for today.
We are impressed by AI, and we are using it in our tools; what we don't want is to be force-fed AI in every single corner of Microsoft products, all of them.
Well yes, we don't want you slurping up all our data indiscriminately, feeding some AI with it, and thereby enshittifying the OS.
If I need to use AI I will paste text into ChatGPT.com or wherever; I don't need Word to constantly annoy me with it.
Of course this guy is “surprised”; his whole division will be shut down if this fails.
Aside: The cookie consent banner on this website is “unlawful”. I cannot choose to reject the cookies.
LLMs are awesome bullshit generators. Can't see why some CEOs would not be impressed by them.
I am not unimpressed; I hate with a passion having to click away intrusive "Use Copilot" prompts in MS products. AI is deeply unethical shit: stolen content transformed under a massive waste of energy, i.e. heating the planet.
Yes, sometimes the results are impressive. But so are the mistakes it makes. You can't just trust the stuff it produces. I don't see that my colleagues who use it all the time have any better productivity.
Every day Teams gives me the choice to 'add Copilot panel', and the alternative is only 'Maybe later'. No simple "No".
In the past week, I have noticed two things that have "aggravated me" to no end...
Many of my colleagues use a Teams virtual background - I noticed the other day that at least one had the Copilot logo now injected onto the "blank" wall space behind them... Asked one person if they did that themselves... no, they did not...
Next - NOTEPAD... Yeah, at some point recently that was updated to have a Copilot button... No thank you, that's not why I use Notepad... (you can turn it off, for now at least)
Today I was using $tool with some dense docs and average forums. In a moment of weakness, I asked an LLM. It said, "just use strpos! :magic_wand:". Of course, strpos didn't exist, but it was in the question for the top result on Google for my problem: "I want to do x with $tool, kind of like strpos in $other_tool". After the impressiveness of the language generation wears off, it's just another bullshit generator and man I've had enough of bullshit these days.
Because people have developed "AI blindness" by now, much like they learned to ignore ad banners 20 years ago and then learned to ignore ANY images on sites when they all became AI-generated 2-3 years ago. "AI" is now just a bullshit, meaningless term squeezed in everywhere merely to ask for money for nothing.
This wave of idiocy is going to abate soon. Hopefully.
Insofar as "All ambiguity is resolved by practitioners at the sharp end of the system", a tool that resolves ambiguity higher up the org chart is going to be less impressive to those of us who resolve ambiguity day in and day out. What I mean to say is that we know our way around the tools, and this is why we're not impressed when somebody else uses AI to find their way around the tools.
There are some potential biases here.
Of course it is unimpressive when someone does something that is familiar to you; that is subjective. The same thing could be framed as impressive to someone else, such as the person who is now able to do the unfamiliar as if it were familiar.
The other bias would be to assume that there aren't unfamiliar things (you don't know every tool ever made, therefore you may potentially benefit from using AI to help with learning new tools).
Another bias is to assume AI is only good for learning something unfamiliar. There are ways to contain generative coding that scale, in some contexts. Likewise there’s probably use cases even for power users, like organizing messy desktop icons into semantic clusters (automating tedious tasks), summarizing running processes, limiting engagement with brain rot, etc.
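To make the first of those concrete: here is a minimal sketch of what "organizing icons into semantic clusters" could mean, assuming a local embedding model via the sentence-transformers package and scikit-learn for the clustering (the model name, the cluster count, and clustering on file names alone are all my own illustrative assumptions, not any shipping OS feature):

    # Hypothetical sketch: group desktop files into "semantic clusters" by name.
    # Assumes the sentence-transformers and scikit-learn packages; the model
    # name and cluster count are illustrative, not any real OS or Copilot API.
    from pathlib import Path
    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import KMeans

    desktop = Path.home() / "Desktop"
    names = [p.name for p in desktop.iterdir() if p.is_file()]
    if not names:
        raise SystemExit("nothing on the desktop to cluster")

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model
    vectors = model.encode(names)                    # one vector per file name

    k = min(4, len(names))  # never ask for more clusters than files
    labels = KMeans(n_clusters=k, n_init="auto").fit_predict(vectors)
    for label, name in sorted(zip(labels, names)):
        print(label, name)

An on-device model here would at least keep file names local, which sidesteps the privacy complaint raised elsewhere in the thread.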
> organizing messy desktop icons into semantic clusters
stunning stuff
This is very well said. The next phase is these "leaders" realizing all that still isn't enough to do meaningful work. Work comes from a deliberate familiarity with tools woven in with the full understanding of the problem. Likewise, a full understanding of the problem comes from familiarity with what's possible.
Anyone who has successfully completed a self-motivated project all on their own knows that the problem definition changes when the knowledge gaps go away. Sometimes even entire classes of solutions become pointless and their value goes from seemingly huge to nothing at all.
If we look into the past it's the difference between wanting to selfishly reconfigure the earth with physics-defying and godlike brute force creating massively worse and life-threatening problems... versus just building the shelter and infrastructure we now consider obvious and far less silly with smarter technology than just brute force and magic.
That's what I think is going to happen to all this pointless scaffolding currently being called "AI". It's a category of software that is completely unnecessary in a world where people are more experienced and better educated.
Just shows how out of touch they are. So the AI bubble seems to be composed of smaller bubbles around individual CEOs.
AI bubble wrap or AI insulating foam?
The bubble will pop eventually. So bubble wrap it is. Insulating foam doesn't really "pop"
Imagine a general responding to anti-nuke protestors by wondering aloud how they could fail to be impressed by atom bombs. Imagine what kind of self-absorption that would require.
The power of (self-)indoctrination in action
Every person I consider smart / intellectually engaged and curious, regardless of age, profession, or affiliation with tech companies, is actually deeply impressed by LLMs, either on their own or once they encounter what LLMs can do.
LLM bashers either feel threatened by them (most likely they come from creative professions and don't want to learn new tools - it took years to learn how to draw and suddenly you have nano-banana - that may sting) or just latch on to the hater bandwagon because hey, it's sooo cooool to bash techies - take that, nerds. Or engineers who can't stand the fact that someone can actually leapfrog the 5-6 years they spent adapting to new tools.
LLM is a VERY new tool; it's not easy to master, and it requires a change of mindset (it's not a “press a button, get an LED to light up” type of experience).
People are just lazy - that’s normal.
People who bring up ethics are actually masquerading their laziness with some holier-than-thou posturing that's inherently empty. The moment it starts benefiting them, or LLMs get easy enough for them to use, they'll switch their tune.
Name ONE instance where technological progress has been discarded in favor of “ethics”.
5 years from now it will be unethical to NOT use LLMs, like it's unethical to submit handwritten documents so that people have to suffer through your handwriting - ethics adapt to people, not the other way around; otherwise it's religious-nutjob territory.
Yawn, h8rs gonna h8, progress gonna progress.
There is no super smart "AI". An "AI" gives formulaic responses that always follow the same pattern, makes an incredible number of mistakes and claims different things all the time.
The researchers themselves were surprised by the initial public acceptance:
https://www.businessinsider.com/chatgpt-was-inaccurate-borin...
"I will admit, to my slight embarrassment … when we made ChatGPT, I didn't know if it was any good," said Sutskever.
"When you asked it a factual question, it gave you a wrong answer. I thought it was going to be so unimpressive that people would say, 'Why are you doing this? This is so boring!'" he added.
After several weeks of "AI" usage, everyone figures out what the researchers, thanks to their continued exposure, knew all along.
AI is at odds with humanity. Everyone understands that.