
An LLM by itself can only regurgitate reasoning patterns and reasoning steps from its training set, but I think adding search on top gets you much closer. You're basically talking about searching through the space of reasoning-step sequences an LLM could generate and picking the sequence that actually works. DeepMind did exactly this kind of generate-and-evaluate search for AlphaGo and AlphaFold - I would not bet against them.
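To make the "search on top" idea concrete, here's a minimal beam-search sketch in Python. Note that propose_steps and score are hypothetical stand-ins I've made up for a sampled LLM call and a learned verifier (the role AlphaGo's policy and value networks played), not real APIs:

    import heapq
    import random

    # Hypothetical stand-in for an LLM: given a partial chain of reasoning
    # steps, propose k candidate next steps. In practice this would be a
    # sampled model call; here it returns dummy strings so the sketch runs.
    def propose_steps(chain: list[str], k: int = 3) -> list[str]:
        return [f"step-{len(chain)}-{random.randint(0, 99)}" for _ in range(k)]

    # Hypothetical verifier / value function: score a chain, higher is
    # better. A real system might use unit tests, a proof checker, or a
    # learned reward model here.
    def score(chain: list[str]) -> float:
        return random.random()

    def beam_search(width: int = 4, depth: int = 5) -> list[str]:
        """Keep the `width` best partial chains at each depth, expand each
        with proposed next steps, and return the best full chain found."""
        beam: list[tuple[float, list[str]]] = [(0.0, [])]
        for _ in range(depth):
            candidates = []
            for _, chain in beam:
                for step in propose_steps(chain):
                    new_chain = chain + [step]
                    candidates.append((score(new_chain), new_chain))
            beam = heapq.nlargest(width, candidates, key=lambda c: c[0])
        return max(beam, key=lambda c: c[0])[1]

    if __name__ == "__main__":
        print(beam_search())

Swap a real model into propose_steps and a real verifier into score and this becomes the best-of-N / tree-search-over-reasoning approach the parent is describing.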

When Hassabis says it'll take 5000+ people 5-10 years and maybe a couple of Transformer-level breakthroughs, he's clearly not talking about just adding search; it's widely recognized that things like continual/incremental learning are also required. I'd guess Hassabis has a pretty good idea of what's missing from LLMs and what needs to be added to get to AGI.

I really would not dismiss Hassabis. He is a lot smarter than you or me, won a Nobel Prize (in Chemistry, for AlphaFold's protein-structure prediction), and stunned the machine learning community with AlphaGo - most had thought beating top humans at Go was another 10 years away.

I think there are better long-term approaches to superhuman AI than building on LLMs, but just as a cog is not a car yet can still be part of one, there is no reason an LLM can't be a component of the eventual system.


