This isn’t true for 99% of the prompts that actual businesses are engineering and using. Your typical user is clueless and just needs a prompt that “translates” their ad hoc, often ambiguous, questions into clearly specified tasks that the AI understands without ambiguity and that states the underlying assumptions the user was taking for granted. Such prompts generalize well to any language model. You’re foolish and a bad prompt engineer if you engineer niche prompts that don’t generalize and are specific to just one model, when you can engineer a prompt that generalizes and accomplishes the same thing. Prompt engineering isn’t dead; we just have a new “lesson learned” for the previously naive. Learn it!
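To make that concrete, here is a rough sketch of the kind of model-agnostic “translator” prompt I mean; the template wording and names are illustrative, not taken from any particular product:

```python
# A minimal sketch of a model-agnostic "ambiguous question -> specified task" prompt.
# The template text and names are illustrative; nothing in it depends on one model.
TRANSLATE_PROMPT = """You will receive an ad hoc, possibly ambiguous question from a user.
Rewrite it as a clearly specified task:
1. State the task in one unambiguous sentence.
2. List the assumptions the user appears to be taking for granted.
3. List any missing information needed before answering.

User question:
{question}
"""

def build_prompt(question: str) -> str:
    """Fill the template; the result can be sent to any chat-style model."""
    return TRANSLATE_PROMPT.format(question=question)

print(build_prompt("can you make our onboarding email better?"))
```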
Not sure why people are so quick to disagree with this. The premise is roughly true. If we build prompt architectures (which can be massive) on a given model (say gpt4o) and that model is then removed or no longer available, the entire architecture must be reworked for the newest model. What if Python changed its syntax every 6 months to a year and all Python code in production only worked if it was updated to the latest Python? That’s the issue… even if we keep access to legacy models, they may be far more expensive, GPU-inefficient, or slow compared to new models. It’s very tumultuous ground to build on.
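As a hypothetical illustration of that coupling (the prompt text and model names below are made up), this is roughly what a per-model prompt registry ends up looking like, and why every model retirement forces a re-tuned, re-validated entry:

```python
# Hypothetical sketch of prompts hand-tuned per model. When a model is retired,
# every entry like it must be rewritten and re-tested against the replacement.
PROMPTS_BY_MODEL = {
    "gpt-4o": "Answer tersely. Fill the SCRATCHPAD section before the final answer...",
    # "gpt-4o" retired -> this new entry had to be re-tuned from scratch:
    "gpt-5": "Reason privately, then output only the final answer as JSON...",
}

def get_prompt(model_name: str) -> str:
    # No generic fallback exists if each prompt was tuned to one model's quirks.
    return PROMPTS_BY_MODEL[model_name]
```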
Just use DSPy.
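For anyone who hasn’t seen it, DSPy’s pitch is that you declare the task instead of hand-writing prompt text, so the prompt can be regenerated or re-optimized when the underlying model changes. A minimal sketch, assuming the current dspy API and a placeholder model name:

```python
# Minimal DSPy sketch: declare what the prompt should do and let the framework
# produce the actual prompt text for whichever model is configured.
import dspy

# Placeholder backend; swap in whatever model/provider you actually use.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# "question -> clarified_task" declares inputs/outputs instead of prompt text,
# so switching models doesn't mean rewriting a hand-tuned prompt.
clarify = dspy.Predict("question -> clarified_task")

result = clarify(question="can you make the report better?")
print(result.clarified_task)
```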
Didn’t prove anything. Prompt engineering still works with GPT-5. Don’t know what your experience is about...
This looks AI-written. It’s full of AI writing tropes and shows the big telltale sign: it’s a lot of words for saying very little.
Obviously, you have to rewrite prompts for different models.
If you are really dependent on a single one, then you’d better be sure it’s an open-source copy you can run yourself.
Prompt engineering died like a year ago
https://blog.big-picture.com/en/prompt-engineering-is-dead-i...