Have you considered perhaps that you are, indeed, out of your mind? Or more precisely, that you could be rationalizing what is essentially a random process?
Based on the discussions here it seems that every model is either about to be great or was great in the past but now is not. Sucks for those of us who are stuck in the now, though.
The first comment claims that Anthropic "are having to quantise the models to keep up with demand", and the parent comment agrees: "This can't be understated". So based on this discussion so far, Anthropic has [1] great models, [2] models that used to be great but now aren't due to quantization, [3] models that used to be great but now aren't due to a bug, and [4] models that constantly feel like a "bait and switch".
This most definitely feels like people analyzing the output of a random process. At this point I feel like I'm losing my mind.
(As for the phrasing: I was quoting the OP, who I believe took it in the spirit in which it was meant.)
I am not sure why you are losing your mind.
Anthropic dynamically adjusts knobs based on capacity and load.
Those knobs range from simple measures, like reducing usage limits, to more advanced ones, like switching to more optimized serving paths with anything from more aggressive caching to more heavily optimized models. And bugs are a factor in the quality of any service.
> Suggesting people are "out of their mind" is not really appropriate on this forum, especially so in this circumstance.
They were wrong, but not inappropriate. They re-used the "out of their mind" phrase from the parent comment to cheekily refer to the possibility of a cognitive bias.
It seems plausible enough that they're trying to squeeze as much out of their hardware as possible and getting the balance wrong. As prices for hardware capable of running local LLMs drop and local models improve, this will become less prevalent and the option of running your own will become more widespread, probably killing this kind of service outside of enterprise. Even if it doesn't kill that service, it'll be _considerably_ better to be operating your own as you have control over what is actually running.
On that note, I strongly recommend qwen3:4b. It is _bonkers_ how good it is, especially considering how relatively tiny it is.
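If you want to poke at it yourself, here's a minimal sketch of chatting with it locally. I'm assuming you serve it through Ollama (the qwen3:4b tag is Ollama's naming) and use the ollama npm client; adapt to whatever runner you prefer:

    // Minimal sketch: talk to a local qwen3:4b through Ollama.
    // Assumes the Ollama server is up and `ollama pull qwen3:4b` has been run.
    import ollama from 'ollama';

    const response = await ollama.chat({
      model: 'qwen3:4b',
      messages: [{ role: 'user', content: 'Explain TypeScript generics in two sentences.' }],
    });
    console.log(response.message.content);

No usage limits, and no knobs being turned behind your back.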
"that every model is either about to be great or was great in the past but now is not"
FWIW, Codex CLI with GPT-5 medium is great right now. Objectively accelerating me. Not a coding god like some posters would have it, but overall freeing up time for me. Observably.
Assuming I haven't had since-cured delusions, the same was true for Claude Code, but isn't any more.
Concrete supporting evidence: From time to time, I have coding CLIs port older projects of varying (but small-ish) sizes from JS to TS. Claude Code used to do well on that. Repeatedly. I did another test last Sunday, and it dug a momentous hole for itself that even a liberal sprinkling of 'as unknown' everywhere couldn't solve. Codex managed both the ab-initio port and was able to dig out of the massive hole CC had abandoned mid-port.
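To show the kind of hole I mean (a made-up fragment, not code from the actual project): the classic escape hatch during a JS-to-TS port is the double cast through unknown, which silences the compiler without fixing the underlying types.

    // Hypothetical illustration of the 'as unknown' escape hatch.
    type ApiUser = { id: string; name: string }; // what the old JS code actually produces
    type User = { id: number; name: string };    // what the new TS types claim

    const fromApi: ApiUser = { id: '42', name: 'Ada' };
    // A plain `fromApi as User` is rejected, but routing the cast through
    // `unknown` compiles fine; the type mismatch just survives to runtime.
    const user = fromApi as unknown as User;
    console.log(user.id + 1); // prints '421', not 43

When a port needs that everywhere, the types are wrong at the foundations, and no amount of casting digs you out.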
So I'd say the evidence points somewhat against a random process, given that repeated testing shows a clear signal both of past capability and of its recent loss.
The idea that it's a "random" process is misguided.
>Or more precisely, that you could be rationalizing what is essentially a random process?
You mean like our human brains and our entire bodies? We are the result of random processes.
>Sucks for those of us who are stuck in the now, though
I don't know what you are doing, but GPT-5 is incredible. I literally spent 3 hours last night going back and forth on a project where I loaded some files for a somewhat complicated and tedious conversion between two data formats. I was able to keep iterating, making improvements incrementally, and have the AI do 90% of the actual tedious work.
To me it's incredible that people don't seem to understand the CURRENT value. It has literally replaced a junior developer for me. I am 100% better off working with AI on all these tedious tasks than passing them off to someone else. We can argue all day about whether that's good for the world (it's not), but in terms of the current state of AI, it's already incredible.
It might not be a junior-dev replacement, though. Senior devs are using AI quite differently: to magnify themselves, not to help them manage juniors whose ceilings are still developing.