> LLMs are an important stepping stone on the way to AGI, in which case OpenAI is in a great position as the company with the best LLM
We don't actually know this though. Assuming an AGI hasn't yet been developed, we don't know whether LLMs will actually get us there. They seem more useful than previous ML systems, but until we actually have an AGI we can't say what will get us there.
Further, are we really assuming that developing AGI is either a shared goal or a given, regardless of what people would actually want to happen? It sounds like we agree on the fundamental impacts an AGI would have on our current societal structures, so do we as a society not get a say in that change? Have we effectively blessed a handful of people working in the private sector to make that decision for everyone? And if so, when do we grapple with the moral questions, like whether an AGI has rights similar to humans, whether unplugging one is murder, and so on?
I think you misunderstood my comment. I gave those two bullet points as mutually exclusive options, where one of them must, pretty much by definition, be true.
That is, you responded "We don't actually know this though. Assuming an AGI hasn't yet been developed, we don't know whether LLMs will actually get us there." Exactly: we don't know whether it's true, and if it turns out to be false, then my second bullet point, "LLMs turn out to be a 'local maximum' in the search for AGI," is the true statement.
I guess I'm not quite sure how those are the only two options. In the second scenario, if LLMs are a local maximum and hit a wall, OpenAI would only be in a good spot if the challenges they hit don't invalidate the core differentiators of their company.
For example, if GPU-based systems are the limiting factor, they wouldn't have the edge. If the problem turns out to be the human skills and background needed to develop an AGI, they similarly wouldn't have the advantage.
Wouldn't there have to be a third scenario, where they walked down a path that doesn't pan out at all and requires a fundamental rethink, effectively going back to square one?
If the human skills behind the model are a limiting factor, then OpenAI is in a pretty good position: they have a head start, and software tends towards "winner takes all" because of its low marginal cost. Look at Google search for a great example of this in a related (perhaps even the same...) market.
Of course, a pretty good position doesn't guarantee winning, but GP didn't claim that.
But yeah, there are probably outcomes in which LLMs are a local maximum and ultimately a dead end, and OpenAI has a hard time holding on because the market turns out to be more competitive. And somebody might beat them to whatever the next important invention is. We'll see.