
I believe an important reason there are no LLM-driven breakthroughs is that humans make progress in their thinking through experimentation, i.e., collecting targeted data, which requires exerting agency in the real world. This isn't just observation; it's the creation of data not already in the training set.


Maybe also the fact that they can't learn small pieces of new information without "formatting" their whole brain again, from scratch. And fine-tuning is like having a stroke: you gain specialization by losing cognitive capabilities.
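A minimal sketch of that catastrophic-forgetting point, assuming PyTorch is available; the two toy tasks and the tiny network are invented for illustration. Train a small net on task A, then "fine-tune" it on an unrelated task B with no task-A data replayed, and task-A accuracy typically collapses to chance:

    # Minimal catastrophic-forgetting demo (assumes PyTorch installed;
    # tasks and architecture are hypothetical, chosen for illustration).
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)

    def make_task(dim, n=2000):
        # Label each 2-D point by the sign of one input dimension.
        x = torch.randn(n, 2)
        y = (x[:, dim] > 0).long()
        return x, y

    def accuracy(model, x, y):
        with torch.no_grad():
            return (model(x).argmax(dim=1) == y).float().mean().item()

    def train(model, x, y, steps=500, lr=1e-2):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            opt.step()

    model = torch.nn.Sequential(
        torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2)
    )

    xa, ya = make_task(0)  # task A: sign of the first coordinate
    xb, yb = make_task(1)  # task B: sign of the second coordinate

    train(model, xa, ya)
    print("task A after learning A:", accuracy(model, xa, ya))  # ~1.0

    train(model, xb, yb)  # fine-tune on B, no task-A data replayed
    print("task B after fine-tuning on B:", accuracy(model, xb, yb))
    print("task A after fine-tuning on B:", accuracy(model, xa, ya))  # ~0.5, chance

The toy setup is the comment's claim in miniature: updating the weights on new data overwrites, rather than extends, what they previously encoded.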



