There was a time when ChatGPT actually worked. When it understood nuance, remembered context, and delivered content exactly the way you asked for i...
              
        
    
    
Powerful post. If you’re open to it, could you share 2–3 “before vs after” prompts where the exact same input now fails (formatting, markdown fidelity, or refusal)? I’ll run them end-to-end and add confirmations to your tracker. The more reproducible cases we collect, the harder this is to ignore.
Thanks Shemith, I’ll compile a few “before vs after” markdown examples and add them to the tracker shortly. You’re right that reproducibility will help make the issue visible.
It keeps rewriting my prompts as if it knows better. Who approved this nonsense?
That’s what happens when alignment outweighs usability. The model assumes safety equals intelligence, but it’s actually censorship wrapped in code.
To be fair, public Gemini isn't much better. I find Google AI Studio chat great, and Grok if I want something recent or a bit odd.
I agree that Gemini and even Grok are still inconsistent. AI Studio has been surprisingly stable though.
It apologizes after every mistake and then repeats it again. It’s like talking to a wall that learned manners.
Spot on. Politeness without precision is useless. Real intelligence means learning, not looping apologies.
I've experienced the same. I happily use venice.ai, and it's getting better and better.
I'm also liking Grok's improvements.
I use Grok for reasoning through ideas (often with voice) and venice.ai for anything deep and technical.
Not sure what tool you are using. I use ChatGPT with tremendous success for everything from tech to health and fitness, construction, culture, general topics, and psychology, and it is a great tool for verifying my approach to anything and everything.
I stopped using ChatGPT because of a catch-22. I was looking for models to run LLM/NLP-with-RAG scripts as part of a series of AI learning projects. When I tried to use some of the suggested models, access was blocked, and I found I had to go to the OpenAI site and create API keys to reach them. The OpenAI site was clearly designed for commercial, pay-per-use access, very similar to the cloud-based Azure, AWS, and Red Hat sites. The rules for access changed from day to day, implying there would be no effort to open-source anything or share it for academic purposes. Microsoft is also the not-so-silent partner here.

I decided to delete my account with OpenAI, and ChatGPT told me I could not do that without closing my ChatGPT account. So I said goodbye to ChatGPT and now use DeepSeek. Microsoft now pushes Copilot in both Visual Studio and VS Code, inserting itself into the first line of every .NET file, but I usually paste a template over that first line anyway.

I respect AI model training and implementation, but there is so much AI activity that is just bad or a facade. Globalization is the real threat, and it destroys economies.
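For anyone weighing the same switch, here is a rough sketch of what it can look like. This assumes DeepSeek's OpenAI-compatible chat endpoint; the base URL and model name come from their public docs rather than anything in this thread, so double-check them before relying on this.

```python
# Minimal sketch, not a verified setup: DeepSeek advertises an
# OpenAI-compatible API, so the usual OpenAI client can be pointed
# at a different base_url. Endpoint and model name are assumptions
# taken from DeepSeek's public documentation.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder, keep your real key out of source
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",  # assumed model name
    messages=[
        {"role": "user", "content": "Summarize this passage for a RAG index."},
    ],
)

print(response.choices[0].message.content)
```

The point is that nothing in a typical RAG script has to change beyond the key and the base URL, which is what makes walking away from a single provider practical.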
Interesting take, and a familiar pain point for many developers. The shifting access rules and commercial focus of the major AI providers have definitely made experimentation harder and pushed a lot of people toward ecosystems they fully control.
I do think the core issue is less about AI itself and more about centralization. The gap between proprietary models and open source alternatives is closing fast, and more developers are moving to options like DeepSeek, Llama or Mistral to avoid lock-in and regain control over cost and data.
Your point about AI tools being mostly a façade is valid too. A lot of products today are just a UI on top of an API call without real technical value. Over time the market will separate the hype from the tools that actually solve problems.
Curious how your workflow with DeepSeek is going so far in terms of latency, cost control and data handling now that you’re outside the OpenAI ecosystem.
Right about the time when "open" in OpenAI no longer meant non-profit...
It refuses to follow simple formatting rules. I ask for markdown, and it dumps random HTML. Every time.
You’re describing the same behavior I see daily 🧐
OpenAI keeps saying it’s getting smarter. But smarter for who?
Good question. Smarter should mean more useful, not more restricted. Until they relearn that difference, this decline will continue.
True, I downgraded my subscription and rarely use ChatGPT now, just for first-pass creative ideas. Gemini, Claude, and Perplexity are more accurate. On the voice front, sesame.ai is doing better than ChatGPT!