GPT is insufficient for AGI. It might be part of the solution one day, but it's fundamentally incapable of bridging the gap by itself.
Despite what Altman/OpenAI says in public, I think he knows this too... That's why they dissolved the Superalignment safety team; and dropping the AGI clause just confirms that.
As time goes on, the "AGI is coming" hype machine gets harder to sustain... and their work/company will begin to be valued on the actual value it produces.
What frustrates me the most about OpenAI is that as recently as this summer they were talking non-stop about how gigantic $100 billion models are all you need for AGI and that it’s just a matter of time until we reach this scale. And if you didn’t see this you’re a simpleton who doesn’t understand how exponential curves work.
And then all of a sudden o1 comes out and the narrative from them has shifted entirely. “Obviously massive models aren’t enough to get to AGI, but all we have to do is scale up inference-time compute and we’ll get there!” And if you don’t see this you’re just a simpleton.
I wish that OpenAI was called out for this shift more often. Because I haven’t heard even one of their employees acknowledge this. At some point you have to just ignore them until they start actually publishing science that supports their beliefs, but that won’t happen because it doesn’t generate revenue.
> that won’t happen because it doesn’t generate revenue.
OpenAI made real progress towards a computational understanding of human language and cognition. I'm sorry they have become a for-profit entity (the paperwork lags behind reality, of course). A fiduciary duty does not serve humanity. The quality and credibility of their communications have fallen dramatically.
> how gigantic $100 billion models are all you need for AGI and that it’s just a matter of time until we reach this scale. And if you didn’t see this you’re a simpleton who doesn’t understand how exponential curves work.
OpenAI made no such claim. LLM stans on the internet definitely made such claims, that the Stargate project would be AGI and whatnot. But, like crypto bros, GPT hyperfans are just constantly deluded/lying, so you shouldn't project their claims onto the corporations they simp for.
That being said, Anthropic's CEO made a claim closer to what you're saying: that a $10-100 billion model would be better than a human in almost every way.
America gave a trillion dollars out so a lot of people could have 1,500 dollars. We have enough money, and we are all not going to live forever. I don’t know what the holdup is.
It’s not an exact figure, but plenty of employees claimed that once we train a model with 2-3 orders of magnitude more compute than GPT-4, we’d reach AGI, which puts us at $10-$100 billion.
See Situational Awareness for one example of this.
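To make the arithmetic explicit, here's a back-of-envelope sketch. It assumes GPT-4's training run cost on the order of $100 million (a commonly cited ballpark, not an official figure) and that cost scales roughly linearly with training compute:

```python
# Back-of-envelope: scale an assumed ~$100M GPT-4 training cost
# by 2-3 orders of magnitude of compute (cost taken as ~linear).
gpt4_cost = 100e6  # assumed ballpark, not an official figure
print(f"2 OOM more compute: ${gpt4_cost * 10**2 / 1e9:.0f}B")  # $10B
print(f"3 OOM more compute: ${gpt4_cost * 10**3 / 1e9:.0f}B")  # $100B
```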
This summer he made a comment saying "we can make GPTs 5, 6, 7 more powerful," and someone editorialized it as being "via parameter count/scale," which he didn't say.
Yep. The current wave of "AI" startups are going to totally burn through the terms "AI" and "AGI", with one of two possible outcomes:
* We enter another "AI Winter" of sorts. People will continue to work on LLMs and other deep learning applications because they're legitimately useful in moderation, but they'll distance themselves from the label "AI" or "AGI" because it will start sounding like "I work on blockchain" does today.
* Both terms will end up completely watered down, with "AI"/"AGI" becoming a catch-all to mean "a computer algorithm of some kind does some magic the user doesn't fully understand"—the new, hipper word for "software". OpenAI will declare victory and we'll have to come up with a new term for an artificial intelligence that can actually do the stuff that AGI was supposed to do.
What you describe are two different speeds at which AI/AGI goes through Gartner's hype cycle.
An interesting question is: why do we need hype cycles? Both in general, and what's peculiar about AI and AGI... most ideas don't go through multiple hype cycles. Wasn't one "AI Winter" enough?
Tech hype is always a kind of hyperstition: it aims to be a self-fulfilling prophecy. So tech hype is directed mostly toward those who can decide to invest in tech development, to make it fulfill its bright future vision and achieve its destiny.
But with AI/AGI, I think it's different. The impact of tech hype on the present is as important as the bright future this hype foretells.
AGI tech hype sells future marvels of productivity to employers and executives;
it creates fear for their jobs among employees, and is thus a form of social control that curbs HR costs and demands in the present moment. It also has a socio-political function, as a way for the USA, and to a lesser extent the West at large, to reassert its manifest destiny as the center of the world, both in tech prowess and in cultural hegemony.
The enormous AI datacenter build-up in the USA creates both first-mover advantages and first-mover disadvantages. We'll see how it plays out in the next few years.
I don't really see that myself. The exponential growth in computing will continue, and while AGI is not well defined, people mostly think of it as human-like or better intelligence; it'll probably happen regardless of the startups, though maybe spread over a period of time, with different abilities arriving at different points.
There are other AI models already in existence that could be spliced on, a bit like they've done with o1. I'm not sure that'll get to AGI, but it's hard to say it definitely won't, either.
For every hype there is the anti-hype, as can be seen in the parent comment. I suppose it's just the way things go, until we've arrived at a more measured place. No, LLMs aren't AGI, not by a long shot. And yes, they are significantly more powerful than Markov chains.
I wouldn’t say it’s instant. These kinds of models have been out for at least 6 years (depending on whether you count BERT, or GPT-2, or something else as the first).
Their capabilities and shortcomings, at least in a general sense, are pretty well known.
Because "educating you better than most things" and "unavoidably generating bullshit" are not mutually exclusive. It's entirely possible that AI is better than human conversation on the spectrum of "adhering perfectly to truth" but it's far-and-away less credible than an encyclopedia or even a peer-reviewed paper.
Publishers know they cannot publish false information without spoiling their reputation. ChatGPT lies like its life depends on it. Therefore, I and many others identify ChatGPT by its willingness to create tangents of pure and unadulterated bullshit.
I remember many instances in childhood where I began to realize authority figures, parents/teachers/etc., that I had been trusting to teach me the ways of life weren’t all knowing. Moments of “lies” (using your term, really just being incorrect) where the cracks began to show.
I treat ChatGPT like a 90th-percentile educator across many problem domains. Is it going to be wrong? Yes. Is it going to be very wrong? Yes. Is it capable of generating tangents of pure and unadulterated bullshit? Yes. But that was true of every teacher, professor, and mentor I’ve ever had.
Just because my 8th grade algebra teacher wasn’t as accurate as an encyclopedia or a published textbook didn’t prevent them from filling the role I rely on ChatGPT to fill now.
Edit: also, peer review isn’t a great example of a system that eliminates bs.
> Just because my 8th grade algebra teacher wasn’t as accurate as an encyclopedia or a published textbook didn’t prevent them from filling the role I rely on ChatGPT to fill now.
Sure - but I also cross-referenced my teacher when I was in math class. You know your teacher is wrong because you're also following the steps and showing your work. When your results diverge, you cross-reference the steps and determine where you went wrong.
Ultimately that's what I fear people won't do with ChatGPT. When they see an equation up on the digital "whiteboard" they only know how to copy it down, not how to double-check the work. This happens already with code boilerplate, but I see it in HN comments when people have ChatGPT write them a nonsense treatise on whatever the topic-du-jour is. You really cannot "learn" much from a system that has no barriers or alarm system when it's outright fabricating things with no basis in reality.
My view is that widespread access to and use of generalized LLMs as trusted tools for learning is problematic partly because of the serious lack of critical thinking and basic education in the US. Which itself has nothing to do with LLMs right now, but is certainly a big part of why we’re in the current situation we are in. gestures around wildly
LLMs aren’t going to do much to fix that in the short term given the slop in, slop out problems.
> Publishers know they cannot publish false information without spoiling their reputation.
How did that saying go? You sweet summer child...
Reputation doesn't matter. It hasn't mattered for a while. There's too much confusion, you can't get no relief, and there's definitely not enough time in a day to care.
Most non-fiction publishing either is, or is funded by, the advertising industry. I.e. pathological liars. You better believe most of the stuff those people publish is at the very least intentional bullshit (in the sense of not caring whether it's true or false -- see most content marketing), and a lot of it is plain lies.
ChatGPT gets confused and fabricates stuff as much as a person speaking whatever comes to their mind. But at the very least, it's not lying to you intentionally. Which is why it's, for now, useful as a bullshit filter for the rest of the Internet.
> ChatGPT gets confused and fabricates stuff as much as a person speaking whatever comes to their mind.
Which is useless for the same reason you wouldn't "learn" from a friend that says "I just watched a pig fly!"
> But at the very least, it's not lying to you intentionally.
It's in fact worse that way. If someone lies with intent then they can at least admit it when I challenge their rhetoric. If ChatGPT can't lie intentionally, how is it supposed to know when it's deliberately telling the truth?
"Word probability matrices" is definitely lacking something. A lot of human thinking is more along the lines of shape rotating and sounds and smells and other stuff.
> As per the current terms, when OpenAI creates AGI - defined as a "highly autonomous system that outperforms humans at most economically valuable work" - Microsoft's access to such a technology would be void.
That’s their definition of AGI? What does that have to do with actual intelligence?
It’s the so called “economic” definition. You’re absolutely right to call out that it’s not very philosophically valuable, but they were trying to be pragmatic — and it was kinda hard to imagine, anyway.
Of course it’s being thrown out the window now that money’s on the line… drives one to cynicism! No wonder Altman is a paranoid prepper.
How would you, personally, operationalize intelligence? Outperforming humans across a broad range of tasks does sound like general competence. That said, we do want to make sure the tasks are 1) not stupid, and 2) not cherry picked.
Requiring outputs to generate profit is a well-established proxy for human value, even if it's imperfect, and letting the economy pick tasks makes it hard for OpenAI to cherry pick.
I don't know about you, but a reliable ability to generate high value sounds like the kind of intelligence I care about.
On the meta-level, I think disagreements like this about "actual intelligence" and "really conscious" do a good job of showing how out of distribution we are on our human wetware classifiers. The terms really are fuzzy and ill-suited for generating understanding of current AI model behavior, IMHO.
Work that can be automated is worth exactly the price it's sold at, not what it costs. Besides, the marginal cost of AGI isn't zero - servers still cost electricity to run.
> “We’ve also said that our intention is to treat AGI as a mile marker along the way. We’ve left ourselves some flexibility because we don’t know what will happen,” added Altman
Musk accusing Altman of “deceit of Shakespearean proportions.”
It is true that some of Shakespeare's characters exhibit grotesque exaggerations of human frailties. They go mad with ambition, cruelty, and delusion, imposing their whims on victims around them. They typically show no sign of compunction until they have brought themselves to a miserable, self-inflicted end in tragedy.
It is ironic to read such recriminations from this particular character.
I wouldn't be comfortable saying we'll never see AGI in our lifetimes. I've seen incredible progress since I was a kiddo and computers were these huge machines that took up an entire desk, not to mention that only a decade prior to that, they were machines that took up entire warehouses. And now, I carry around a computer whose primary function is organizing my life and helping me kill time, with hundreds of times more power stuffed into a little rectangle that runs all day in my pocket. I wouldn't say it's impossible by any stretch.
That said, I think it strains the hell out of credulity to say a fancy chatbot is getting anywhere close to that, especially since we apparently had to give it every word mankind has written up to this point just to get it to a stage of "mostly good at grammar, still makes tons of shit up." I just don't think this is the route there.
While I agree with your general sentiment, let's not sell LLMs short. These things would have been firmly sci-fi not even 5 years ago, and they are significantly more powerful than "mostly good at grammar". They do still make shit up, I'll give you that.
We have AGI, in its original definition, now - it's just human-level rather than superhuman.
ChatGPT, without being specifically programmed to, can summarize a text, write a program, play chess, explain a joke, translate, explain or create a poem, and just handle arbitrary inputs in text. On some measures ChatGPT is better than human, on some measures, like ARC, it's worse - but even there it's not worse than the worst human.
ChatGPT is certainly artificial; general, in the sense that it handles a wide range of inputs; and intelligent, in the sense that it can solve problems and make predictions.
The intelligence of LLMs is likely to increase in the near-term future - better hardware, better training, distilling to smaller models, architecture improvements, better augmentations (like RAG), better use of test-time compute, etc.
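Since RAG came up, a toy stdlib-only sketch of the idea: retrieve the most relevant document, then stuff it into the prompt so the model answers from supplied context instead of from memory. The word-overlap scorer here stands in for embedding similarity, and the actual model call is omitted; both are simplifications for illustration:

```python
# Toy retrieval-augmented generation (RAG). Real systems replace the
# word-overlap score with vector-embedding similarity and send the
# final prompt to an actual LLM; both are stubbed out here.

def score(query: str, doc: str) -> int:
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str]) -> str:
    return max(docs, key=lambda d: score(query, d))

docs = [
    "Falcon 9 first launched in June 2010.",
    "RAG grounds a model's answer in documents retrieved at query time.",
]
query = "What does RAG do?"
prompt = (f"Answer using only this context:\n{retrieve(query, docs)}\n\n"
          f"Question: {query}")
print(prompt)  # this augmented prompt is what would be sent to the model
```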
He started the company - I’m no Elon fan myself, but that’s the case. Also, you’re disagreeing with him anyway; he’s saying that starting a company whose defining purpose was to avoid a for-profit race to AGI, and then immediately pivoting to ruthless for-profit competition as soon as you have a real shot at your goal (+ an advantage), is deceptive.
And yeah, I’ve gotta give it to him: tragic. Tho I guess that’s “Homeric” or “Hellenic” deceit?
> He started the company - I’m no Elon fan myself, but that’s the case.
Point out what he did that couldn't be done by Bill Gates, Larry Ellison, or Mark Zuckerberg, or in fact that was done by Jeff Bezos with his own company. And I don't mean some ephemeral "well he wanted to!" yeah big fucking deal. I want to convert a DeLorean to an EV powertrain to commute to work. I want to have a garage big enough for 5 cars and a shop. I want to have a sex bot that me and the wife can share. Anyone can fucking want something. What did Elon bring to the table besides wanting it, and money?
Being an investor is not a skill. Paying for things is not a job. Buying stuff or people is not a trade. And I'm sick to death of people buying his and other's PR saying that paying someone else to do cool shit is the same thing as doing cool shit.
> Point out what he did that couldn't be done by Bill Gates, Larry Ellison, or Mark Zuckerberg, or in fact that was done by Jeff Bezos with his own company.
"If you were going to start Facebook, you'd have started Facebook." - apocryphally credited to Zuckerberg.
You do know that Bezos started his rocket company before SpaceX, right? If he could beat SpaceX, why hasn't he?
As for sex bots, that'll happen soon enough, be patient. At this point only a few incremental advances in robotics and materials science stand in the way of implementing the bots from Garland's Ex Machina or Spielberg's AI. The software side is basically working now.
> "If you were going to start Facebook, you'd have started Facebook." - apocryphally credited to Zuckerberg.
No, I wouldn't, because I have a soul and I don't want to light the mental health of the entire world on fire no matter how much money it makes me.
> You do know that Bezos started his rocket company before SpaceX, right? If he could beat SpaceX, why hasn't he?
I don't know what "beat" means in this context nor am I particularly interested in billionare's dick measuring contests which are funded in part by the taxpayer.
My point isn't that someone did it. I'm sick to death of Elon's mythmaking and I'm going to shout into the void every time it comes up until the day I die that that man hasn't done shit besides write checks. Say what you will about Fuckerburg, and I've said plenty, but at least he actually wrote some of Facebook's code.
Because Elon lied to humanity about their mission, in order to secure his position. Bezos didn't.
Can't you find any other examples of this in history? Yeah, there are a lot of them, and it's probably why humanity is effectively still enslaved and stuck on this planet, years behind where we should be.
Time to wake up. Humanity can't blame anyone else anymore.
As an aside... the argument that space is less hospitable than Earth doesn't work anymore when "smart people" say it only to avoid doing the prep work they'd need for an Earth that can and will become imminently inhospitable under up to ten different scenarios.
But by all means, guys, keep ignoring reality because you think you know better.
How quickly humans forget, and how amazingly deep their ignorance is of what only recently transpired.
> Because Elon lied to humanity about their mission, in order to secure his position.
Setting aside feelings about Elon and whether he misled anyone about the mission of SpaceX, saying "Our mission is to put humans on Mars" didn't help secure any position, except some popularity with the Mars Society, sci-fi fans and nerds. It certainly didn't help SpaceX get financial support from big money hedge funds, governments, Wall Street analysts or NASA administrators.
If anything, setting a goal of what was then only a nerd fever dream, and which even today has no clear path to profitability, hurt SpaceX's early position substantially. The only thing that eventually turned around that "screwball billionaire blowing his money on a vanity project" perception was developing a launch system able to deliver orbital payloads at a dramatically lower cost per pound than anyone else.
As to whether Elon misled anyone about SpaceX's long-term Mars goal, anyone who listened to his in-depth interviews from early on heard him repeatedly say that getting there would first require making SpaceX a highly profitable commercial enterprise because Elon didn't have enough money himself, NASA payloads alone wouldn't be enough and government can't tax a meaningful portion of GDP only to spend it on a goal most people don't value much. Even if Elon hadn't said it, it was already obvious to anyone who understands how government, finance and funding multi-billion dollar projects work. There's no other realistic way it could happen. Starlink and payload delivery are on track to make SpaceX profitable, so how was anyone misled about SpaceX's objectives?
Personally, Elon's Mars talk turned me off too. While a permanent Mars colony is inspirational for sci-fi fans and Singularity types, there are much more productive things to focus on first if the goal is to accelerate humans becoming a space-faring species. The hardest part is lifting large amounts of mass out of this planet's gravity well economically. To then immediately go drop that astronomically expensive mass down another planet's gravity well seems kind of... unproductive?
> Elon hired the right people and let them get on with it, at the very least. For whatever reason, Jeff Bezos was not capable of that.
Yes. He wrote checks. That's it. That is not a skill.
> No. Elon is a guy who will himself sleep on the factory floor to get something done. He is the guy who famously said "when you're going through hell, keep going". He is the guy who bet his entire fortune on electric cars and rockets when no one thought that was a bright idea.
What does sleeping on the factory floor do apart from obstructing anyone on said floor who happens to be trying to work? This is literally the same nonsense parroted by grindset influencers, and it is just that, nonsense. If you overwork yourself, your output is notably reduced and of worse quality, you know, because YOU'RE TIRED and you should fucking sleep.
And bet his fortune? Fucking spare me. Every one of his companies could go under and that man would still be incredibly, absurdly rich. Elon Musk, sir, is as far from poverty as you and I are from his beloved Mars.
No. Elon is a guy who will himself sleep on the factory floor to get something done. He is the guy who famously said "when you're going through hell, keep going". He is the guy who bet his entire fortune on electric cars and rockets when no one thought that was a bright idea.
To be fair, he's also the guy who called a guy a pedophile on Twitter when that guy disagreed with him, then threatened to buy the company for $54.20 a share because LOL 420, then fought like a wildcat to get out of it, and then, when he couldn't get out of it, told his newly-acquired customers to fuck off.
Bezos did start a rocket company, and it's been in business longer than SpaceX, but he hasn't managed to get a rocket to orbit with it. Apparently it's not as trivial as you make out.
Meanwhile SpaceX launched 87% of the world's payload mass in 2023.
Or maybe he didn't even want those things, but just said them, to secure fame, power, and access to things like secrets.
He played us like a fiddle.
Remember Mars as a backup plan? The reason why we thought SpaceX was a humanitarian and scientific effort that we could rally behind? Yeah that was bullshit. We just played to your wish to be able to trust someone and to be led to safety so we could gather talent and funding. Oh, your planet is in trouble now, and that was your last hope, and now that time is wasted, because you rallied behind the wrong person? Oh well. At least I'm head of DOGE now. pretends to be neurodivergent in order to get away with betraying and abandoning children the world over
Yep, he's probably a predator, guys, and maybe you all asked for it, because you repeatedly wanted to turn over your responsibility to someone else (while consuming all the benefits of this planet in its last days) instead of leading your own species to safety, despite the repeated warnings over the past 5 decades of possible planet-scale collapse.
> Or maybe he didn't even want those things, but just said them, to secure fame, power, and access to things like secrets.
As much as I enjoy ripping him apart, that last bit ascribes to him a level of forethought and planning that I sincerely think is above him. I think the first one has it. The power is probably nice, especially if it gets him closer to being cool? But honestly when you dig into his history, he is just the cringiest edgelord imaginable who has spent his entire life wanting to be cool, and he just can't. He is an immense child of privilege who could have anything and everything he ever wanted, and the only thing he has ever wanted is for people to think he is cool, and it's the only thing he has never had, and it haunts him.
Truthfully I think this is why he has the following he does. It's just other people who grew up in the nerdy/edgelord spaces, who were bullied in school, who dreamt all day of being so rich that they could finally prove to the cool kids who was really cool. They see themselves in him, and his success is theirs, in a way. That's why there's just cavalcades of weird people online who will absolutely throw themselves on live subway rails to defend him.
It's honestly fucking sad, because he would sell every one of them for an ounce of attention from anyone who isn't them in a fucking heartbeat. They genuinely deserve better.
I'm no fan of Musk's political activities either but SpaceX is still working on going to Mars. Starship is the key ingredient for that and they have 25 launches approved for next year.
We’ll never see AGI in its original definition in our lifetimes. It doesn’t take being a rocket… company-buying billionaire to see that.
No, because whenever somebody says they've achieved it, someone else -- maybe you -- will say they haven't. The question will never be settled to everyone's satisfaction.
We don't know what intelligence is, or how to measure it objectively, much less how to define "general intelligence." So it's hard to say whether current approaches to AI will get there or not.
I see it as a tool to be used by people, one that may evolve into the most powerful tool we've ever built. I don't find the question of "intelligence" all that interesting.
> whenever somebody says they've achieved it, someone else -- maybe you -- will say they haven't
And as long as that discussion still makes sense, it's pretty obvious that we haven't achieved it. There's a very clear inflection point: when anything deserving that label actually exists, we'll have 50% economic growth a year, the 50 most popular professions will be done by machines, the cost of science will fall by orders of magnitude because everyone can spawn a hundred scientists with the click of a mouse, and you'll walk out the door and the world will look to you like the 1950s looked to someone from 1500. We aren't even going to be capable of haggling about arcane theoretical differences.
> OpenAI really does seem to embody the idea that all contracts are renegotiable
Why aren't they renegotiable? Of course you can attempt to renegotiate basically every contract; the question is whether this will be successful or whether this is a good idea. There is no need to appeal to Machiavellianism.
Of course everything is renegotiable. But when an organization has reneged on every serious promise it has ever made, I do think it's worthwhile pointing out that it is shamelessly Machiavellian. They have mistreated all of their constituencies pretty rapidly!