Just because you're not seeing the new models be appreciably better in the code you write doesn't mean they aren't improving. LLM progress now isn't about making the model magically smarter at the top end (that's in diminishing returns, as you imply), but about filling in weak points in knowledge, patching holes in capability, improving the default process, and so on. That's relevant because most of the time the LLM doesn't fail at coding because it isn't a general super genius; it fails because a specific hole in its capabilities made it act dumb in a specific scenario.
Additionally, while the intelligence floor is shooting up and the intelligence ceiling is rising only slowly, the models are also getting better at following directions, writing cleaner prose, and supporting longer contexts, which lets them handle larger systems. Progress is still going strong; it just isn't well represented by top-line "IQ"-style tests.
LLMs and humans are good at dealing with different kinds of complexity. Humans can deal with messy imperative systems more easily, assuming they have some real-world intuition about them, whereas LLMs handily beat most humans at working with pure functions. It just so happens that messy imperative systems are bad for a number of reasons, so the fact that LLMs are really good at accelerating functional systems works to their advantage. Since functional code is harder to write but easier to reason about and test, this directly addresses the issue of comprehending code.
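
To make that contrast concrete, here's a minimal sketch in Python (the names and scenario are hypothetical, my own illustration rather than anything from the thread): the same total-price logic written once against hidden mutable state and once as a pure function. The pure version is the kind of code an LLM can write, test, and verify in isolation:

    # Imperative version: correctness depends on hidden, mutable state.
    class Cart:
        def __init__(self):
            self.items = []
            self.total = 0.0

        def add_item(self, price, qty):
            self.items.append((price, qty))
            # self.total silently drifts out of sync if items are
            # ever removed or edited anywhere else in the codebase.
            self.total += price * qty

    # Pure version: output depends only on the input, so it's trivial to test.
    def cart_total(items):
        """items is a list of (price, qty) pairs; returns the total cost."""
        return sum(price * qty for price, qty in items)

    assert cart_total([(2.50, 2), (1.00, 3)]) == 8.00

Reasoning about the pure function requires only its inputs and outputs; reasoning about the imperative version requires tracking every code path that can touch self.total, which is exactly the kind of diffuse context that trips up models (and people).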