I was fairly skeptical all the way until a month ago, when I got access to GitHub Copilot's preview of the GPT-5 agent: it writes code, tries it out, and fixes any problems. Up until last month, the workflow was letting the LLM write code, hitting errors, and then spending forever trying to figure out the code yourself, or figuring out how to get the LLM to fix it (usually making it worse each time!).
I had GPT-3 access at the time. GPT-3 was a dick; ChatGPT was the same guy in a suit doing customer service. OpenAI put a ton of guardrails on it because they didn't trust it. They overtrained it and cut a lot out, though, and the good stuff was still in the API. People thought of AI as rigid and boring, but unfiltered, it was a fever dream: great for brainstorming because it went places the human brain wouldn't go.
Some publicly released stuff I had at the time: https://github.com/smuzani/openai-samples
Feel free to go through the history, there's some stuff from before 3.5.
My GitHub description was there to satisfy OpenAI's disclaimer requirement for any AI-generated code built on their API. I keep it for historical reasons.
I kept the good stuff private, though. At the time I was fed up with Stack Overflow and had hacked together my own personal version of it, wired into my terminal. All I was getting from SO back then were questions closed as duplicates of unrelated ones, so AI was a large step ahead.
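The terminal wiring described above can be approximated with a short script. This is a hedged reconstruction, not the original (which stayed private): the endpoint is OpenAI's real chat-completions API, but the model name, system prompt, and function names here are illustrative assumptions.

```python
#!/usr/bin/env python3
"""Minimal sketch of a personal Stack-Overflow-style Q&A tool for the
terminal, backed by the OpenAI API. Reconstruction under assumptions:
model, prompt, and structure are illustrative, not the author's code."""
import json
import os
import sys
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"


def build_payload(question: str, model: str = "gpt-4o-mini") -> dict:
    """Build the request body: a system prompt asking for terse,
    terminal-friendly answers, plus the user's question."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Answer programming questions tersely, with code."},
            {"role": "user", "content": question},
        ],
    }


def ask(question: str) -> str:
    """POST the question to the API and return the assistant's reply.
    Requires OPENAI_API_KEY in the environment."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(question)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage: ask.py "how do I reverse a list in python?"
    print(ask(" ".join(sys.argv[1:])))
```

The point of wiring it to the terminal is that the answer arrives where the question came up, with no browser tab and no moderation queue in between.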
I don't know about most devs, but I wasn't surprised at all. But then, I actively work on deep learning (not LLM) systems, so I'm probably more in tune with developments in this stuff than most.
No, but I got access to GPT-2 and realized that, the way things were going, it was going to be big and things were changing. I couldn't predict the timing, but GPT-2 felt different.
No. They didn't code very well back then, they just pattern matched and regurgitated whatever code snippet was in some GitHub repo that matched your prompt best. And that's what they're doing now, too.
They're useful tools when used intelligently, and they can have their moments of surprising utility, but by and large they're like really fancy boilerplate generators, with far less accuracy and reliability.