
...to AI. It's kinda funny how this is yet another area where these models suck in much the same way that most humans do. LLMs are bad at arithmetic? So are most people. Can't tell science from babble? I already wouldn't ask a non-expert to rate any aspect of an academic paper. Trusting the average Joe who has only completed some basic form of education would be tremendously stupid. Same with these models. Maybe we can get more out of them in specific areas with fine-tuning, but we're very far away from a universal expert system.


The best was the Ted Chiang article making numerous category errors and forest/trees mistakes in arguing that LLMs just store lossy copies of their training data. It was well-written, plausible, and so very incorrect.


Neural-network-based compression algorithms[1] are a thing, so I believe Ted Chiang's assessment is right. Memorization (albeit lossy) is also how the human brain works and develops reasoning[2]. (A rough sketch of the compression idea follows the links below.)

[1] https://bellard.org/nncp/

[2] https://www.pearlleff.com/in-praise-of-memorization
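To make the connection concrete, here's a toy sketch of the principle behind model-based compressors like NNCP (this is not NNCP's actual transformer setup, just the idea): a predictive model plus an entropy coder needs roughly sum(-log2 p(next symbol)) bits, so the better the model predicts the text, the shorter the coded output. A character-bigram counter stands in for the neural predictor here, and the numbers are only the ideal code length, not a real encoder.

    # Toy sketch: ideal code length under a simple predictive model.
    # The principle: compressed size ~= sum(-log2 p(next symbol)).
    # A bigram counter is a (very weak) stand-in for a neural model like NNCP.
    import math
    from collections import defaultdict

    def train_bigram(text):
        """Count character-bigram frequencies as the predictive model."""
        counts = defaultdict(lambda: defaultdict(int))
        for prev, cur in zip(text, text[1:]):
            counts[prev][cur] += 1
        return counts

    def ideal_compressed_bits(model, text, alphabet_size=256):
        """Bits an ideal entropy coder would need, given the model's
        Laplace-smoothed predictions for each next character."""
        bits = 0.0
        for prev, cur in zip(text, text[1:]):
            ctx = model.get(prev, {})
            total = sum(ctx.values()) + alphabet_size  # +1 smoothing
            p = (ctx.get(cur, 0) + 1) / total
            bits += -math.log2(p)
        return bits

    sample = "the quick brown fox jumps over the lazy dog " * 50
    model = train_bigram(sample)
    print(f"raw: {len(sample) * 8} bits, "
          f"model-coded: {ideal_compressed_bits(model, sample):.0f} bits")

A stronger predictor (an LLM instead of a bigram table) drives the per-symbol probabilities up and the bit count down, which is exactly the sense in which training can look like lossy storage of the training data.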


The fact that some neural network architectures can compress data does not mean that data compression is the only thing any neural network can do.

It’s like saying that GPUs can render games, so GPT is a game because it uses a GPU.


I felt the same way. But I’d love to read a specific critique. Have you seen one?


Here’s one from a researcher (which also links to another), though I’m not qualified to assess its content in depth.

https://twitter.com/raphaelmilliere/status/16240731504754319...


I mean, humans can't distinguish AI-written text either - which is why this tool was built?

I don't see how it would be possible to build such a tool either, as the combinations of words that can follow one another are finite.


I do agree that the most likely reason is that scientific papers tend to be highly formulaic and follow strict structures, so an LLM is able to generate something much closer to human writing than it could if it tried to generate narrative.

But it's still fun to conclude that the reason is that the quality of technical writing has sunk so low that it's even below the standard for AI-generated text.


>...to AI.

Perhaps we can call it the "Synthromorphic principle," the bias of AI agents to project AI traits onto conversants that are not in fact AI.



