Yes, but that space is entirely derived from human expressions, in words, of their own thought space. The LLM has no direct training access to humans’ thoughts the way it does to their words. So if it does have a comparable thought space, that would imply such a space can be reconstructed accurately after passing through expression in words. That seems like an unsupported claim, given millennia of humans struggling to understand each other’s thoughts through verbal communication, and students writing essays that are superficially similar to the texts they’ve read but clearly show they haven’t internalized the concepts they were supposedly learning.

That’s not to say there couldn’t be a highly multimodal, self-training model that developed a similar thought space, which would be very interesting to study. It just seems like LLMs alone aren’t enough.


