
Well... to be fair, each approach has hit a wall. It's just that we change things a bit and move around that wall?

But that's certainly not a nuanced or trustworthy analysis unless you're a top-tier researcher.



> expert in human language development and cognitive neuroscience, Gary is a futurist able to accurately predict the challenges and limitations of contemporary AI

I'm struggling to reconcile how these connect, and how he was installed as Head of AI at Uber. Reeks of being a huckster.


I didn't know the Uber bit, but googling:

>...held the position briefly after Uber acquired his company, Geometric Intelligence, in late 2016. However, Marcus stepped down from the directorship in March 2017,

which may fit your hypothesis.


Indeed. A mouse running through a maze may be right to say that it is constantly hitting walls, yet it makes constant progress.

An example is the way he cites Mr Sutskever's interview:

> in my 2022 “Deep learning is hitting a wall” evaluation of LLMs, which explicitly argued that the Kaplan scaling laws would eventually reach a point of diminishing returns (as Sutskever just did)

which is misleading, since Sutskever said it didn't hit a wall in 2022 [0]:

> Up until 2020, from 2012 to 2020, it was the age of research. Now, from 2020 to 2025, it was the age of scaling

The larger point that Mr Marcus makes, though, is that the maze has no exit.

> there are many reasons to doubt that LLMs will ever deliver the rewards that many people expected.

That is something most scientists disagree with. In fact, the ongoing progress on LLMs has already delivered tremendous utility, which may already justify the investment.

[0]: https://garymarcus.substack.com/p/a-trillion-dollars-is-a-te...



