> expert in human language development and cognitive neuroscience, Gary is a futurist able to accurately predict the challenges and limitations of contemporary AI
I'm struggling to reconcile how these qualifications connect, or how he came to be installed as Head of AI at Uber. It reeks of hucksterism.
> ...held the position briefly after Uber acquired his company, Geometric Intelligence, in late 2016. However, Marcus stepped down from the directorship in March 2017.
Indeed. A mouse running through a maze may be right to say it is constantly hitting walls, yet it still makes constant progress.
An example is the way he cites Mr Sutskever's interview:
> in my 2022 “Deep learning is hitting a wall” evaluation of LLMs, which explicitly argued that the Kaplan scaling laws would eventually reach a point of diminishing returns (as Sutskever just did)
which is misleading, since Sutskever said scaling did not hit a wall in 2022[0]:
> Up until 2020, from 2012 to 2020, it was the age of research. Now, from 2020 to 2025, it was the age of scaling
The larger point that Mr Marcus makes, though, is that the maze has no exit.
> there are many reasons to doubt that LLMs will ever deliver the rewards that many people expected.
That is something most scientists disagree with. In fact, the ongoing progress on LLMs has already delivered tremendous utility, which may by itself justify the investment.
But that's certainly not a nuanced or trustworthy analysis unless it comes from a top-tier researcher.