I noticed a significant difference in GPT-5. If someone had been using mostly or only GPT-4 before, it might be a real culture-shock situation.
Me, someone who actively uses Claude, Gemini, Perplexity, and a whole gamut of local LLMs.
The personalities of the models are different, so when GPT-5 came along, it wasn't really a surprise to me.
GPT-5 was an upgrade for investors. Its primary feature is a router that decides between a stronger model and a weaker one for a given query. The goal is to reduce operating costs, not to improve the user experience, while they market it as "new and improved".
Another pet peeve is that it, when asked to provide several possible solutions, sometimes generates two that are identical but with different explanations.
Ah yes, I've had similar experiences actually. There's also the variant where I ask it to provide an alternate solution/answer to the one it gave, and it then proceeds to basically regurgitate its previous answer with slight stylistic modifications (i.e. maintaining content parity).
"Ahhh, you're right, now it's clear, that explains it....."
Yeah, you're not alone. I've even been getting responses that contradict themselves within the same sentence ("X won't work, therefore you should use X").
You are not alone.
Hallucinations are definitely up at least 5x compared with GPT-4, in my personal experience.
It's just you