
There is an argument that Altman's statement is just trying to distract competitors from outspending OpenAI. Prior to GPT-4 there was no indication of diminishing returns (at least on a log scale).
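To make the "log scale" point concrete, here's a rough sketch of a power-law loss curve. All the constants are made up for illustration (they aren't fit to any real model family); the point is just that such a curve shows steep diminishing returns on a linear axis while staying a straight line on a log-log plot:

    import numpy as np

    # Hypothetical power-law scaling: loss(C) = a * C^(-b) + irreducible.
    # The constants are invented for illustration only.
    a, b, irreducible = 1000.0, 0.1, 1.7

    compute = np.logspace(20, 26, 7)   # training FLOPs, six orders of magnitude
    loss = a * compute ** (-b) + irreducible

    for c, l in zip(compute, loss):
        print(f"compute={c:.0e} FLOPs -> loss={l:.2f}")

    # Each extra 10x of compute buys a smaller absolute loss reduction
    # (diminishing returns on a linear axis), but log(loss - irreducible)
    # vs log(compute) is a straight line -- which is what "no diminishing
    # returns on a log scale" amounts to.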

The tremendous progress over the last year makes me wary of your statement that progress will stop coming from model size improvements.



>There is an argument that Altman's statement is just trying to distract competitors from outspending OpenAI

As if competitors, say Google, would take a competitor at his word and say "damn, let's scrap the expansion plans, then"?

That argument sounds highly implausible.

>The tremendous progress over the last year makes me wary of your statement that progress will stop coming from model size improvements.

Isn't "tremendous progress" before the dead-end always the case with diminishing returns and low hanging fruits?


I don't think it is implausible. If engineers come to management at Google and ask for $4B to do a moonshot six-month AI training run, then such a smokescreen statement can be highly effective. Even if they only delay their plans by four weeks to evaluate the scaling first, that is another four weeks of head start for OpenAI.

Also, not everyone can bring $500M or more to the table to train a big model in the first place.

> tremendous progress

There are things which just seem to scale and others which don't. So far, the returns from adding more data and more compute don't seem to flatten out much.

At least we should give it another year to see where it leads us.
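On the data-and-compute point, the published compute-optimal results (Hoffmann et al.'s Chinchilla paper) suggest parameters and training tokens should each grow roughly as the square root of compute. A rough sketch: the square-root exponent and the C ≈ 6·N·D cost approximation come from that paper, while the 0.09 constant is just chosen so the formula roughly reproduces the 70B-parameter / 1.4T-token Chinchilla point:

    import math

    def compute_optimal_split(compute_flops: float) -> tuple[float, float]:
        """Rough compute-optimal model/data split, assuming N_opt ~ C^0.5
        and the C ~= 6 * N * D training-cost approximation.
        The 0.09 constant is illustrative, not a fitted value."""
        n_opt = 0.09 * math.sqrt(compute_flops)   # parameters
        d_opt = compute_flops / (6 * n_opt)       # training tokens
        return n_opt, d_opt

    for c in (1e21, 1e23, 1e25):
        n, d = compute_optimal_split(c)
        print(f"C={c:.0e} FLOPs -> ~{n:.1e} params, ~{d:.1e} tokens")

Under that kind of law, scaling up keeps helping, but only if the data grows along with the compute, which is another reason to wait and see where the next round of big runs lands.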



