
Nobody argues that LLMs aren't useful for bulk processing of billions of datapoints or for finding obscure correlations in unedited data. But the premise of Gwern's article is that, to be considered thinking, an LLM must initiate such a search on its own and arrive at a novel conclusion on its own.

Basically:

A) A scientist has an idea > triggers an LLM program to sift through a ton of data > the LLM prints out correlation results > the scientist reads them and proves/disproves the idea. In this case, while the LLM did the bulk of the work, it did not arrive at a breakthrough on its own.

B) The LLM is idling > then the LLM triggers some API to get a specific set of data > the LLM correlates the results > the LLM prints out a complete hypothesis with a proof (or disproves it). In this case we can say the LLM made a breakthrough.
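The two workflows can be sketched in Python. This is just an illustration of where the idea and the final judgment originate; every function below is a hypothetical stub standing in for real tooling, not an actual API:

```python
# Hedged sketch of the two workflows. fetch_dataset and
# find_correlations are hypothetical stubs, not real tooling.

def fetch_dataset(query):
    # stub: pretend to retrieve data matching the query
    return [("var_x", "var_y", 0.9)]

def find_correlations(data):
    # stub: pretend the LLM sifted the data for strong correlations
    return [row for row in data if row[2] > 0.5]

def scenario_a():
    # A) Human-in-the-loop: the idea and the verdict are the scientist's;
    #    the LLM only does the bulk sifting in the middle.
    idea = "scientist's hypothesis"            # originates with the human
    results = find_correlations(fetch_dataset(idea))
    verdict = bool(results)                    # scientist proves/disproves
    return ("human", verdict)

def scenario_b():
    # B) Autonomous: the model initiates the search, decides what data
    #    it needs, and writes up the conclusion itself.
    idea = "model-generated hypothesis"        # originates with the model
    results = find_correlations(fetch_dataset(idea))  # model called the API
    verdict = bool(results)                    # model concludes on its own
    return ("model", verdict)
```

The code path is identical in both; the whole argument is about who supplies `idea` and who interprets `verdict`.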


