...to AI. It's kinda funny how this is yet another area where these models suck in much the same way that most humans do. LLMs are bad at arithmetic? So are most people. Can't tell science from babble? I already wouldn't ask a non-expert to rate any aspect of an academic paper. Trusting the average Joe who has only completed some basic form of education would be tremendously stupid. Same with these models. Maybe we can get more out of them in specific areas with fine tuning, but we're still very far away from a universal expert system.
The best was the Ted Chiang article making numerous category errors and forest/trees mistakes in arguing that LLMs just store lossy copies of their training data. It was well-written, plausible, and so very incorrect.
Neural-network-based compression algorithms[1] are a thing, so I believe Ted Chiang's assessment is right. Memorization (albeit lossy) is also how the human brain works and develops reasoning[2].
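To make the prediction-equals-compression link concrete, here's a minimal sketch. It's not Chiang's argument or any real neural codec; a toy adaptive byte-frequency model stands in for the neural network, and the `ideal_compressed_bits` helper is a made-up name for illustration. The point it shows is just Shannon's: an entropy coder driven by any predictor needs about -log2 p(symbol) bits per symbol, so a better predictor (an LLM, say) yields a smaller "compressed copy" of the training text.

```python
import math
from collections import defaultdict

def ideal_compressed_bits(text: str) -> float:
    """Bits an ideal arithmetic coder would need, given this toy model's predictions."""
    counts = defaultdict(lambda: 1)  # Laplace-smoothed byte counts: the stand-in "model"
    total = 256                      # one pseudo-count per possible byte value
    bits = 0.0
    for byte in text.encode("utf-8"):
        p = counts[byte] / total     # model's probability for the next byte
        bits += -math.log2(p)        # coding cost of that byte under the model
        counts[byte] += 1            # update the model after seeing the byte
        total += 1
    return bits

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog " * 20
    raw_bits = len(sample.encode("utf-8")) * 8
    coded = ideal_compressed_bits(sample)
    print(f"raw: {raw_bits} bits, ideal coded: {coded:.0f} bits "
          f"({coded / raw_bits:.1%} of original)")
```

Swap the frequency table for a model that predicts the next token better and the bit count drops further, which is the sense in which a trained LLM can be read as a lossy, compressed store of its training data.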
I do agree that the most likely reason is that scientific papers tend to be highly formulaic and follow strict structures, so an LLM is able to generate something much closer to human writing there than when it tries to generate narrative.
But it's still fun to conclude that the reason is that the quality of technical writing has sunk so low that it's even below the standards for AI-generated text.