OpenAI Deal Lets Employees Sell Shares at $86B Valuation (bloomberg.com)
245 points by upmind on Feb 20, 2024 | 286 comments



Is this the kind of thing employees had in mind when they backed Altman?


Yes, at least if they joined in 2020 or later. I interviewed and got an offer around that time, post for-profit conversion, and the pitch was that the "shares" (actually, profit sharing rights) would be bought by investors every few years.

(FWIW, I ended up not taking the position, mainly because I would not have enjoyed the specific work they wanted to hire me for.)


It must be a big offer


I was going to say, question for GP, do you regret not taking the offer? Do you know how much the shares would have been worth today if you had accepted the offer?


Do you regret not plowing all of your money into DOGE in 2020? In the scheme of the things you could do to make money with hindsight, accepting a job offer barely ranks.


Unless you were weighing DOGE vs FTC or something though and went with the latter, it's different? It's not just "oh, if I knew then what I know now I could've made so much money"; it's "I almost made this decision, or could have, but explicitly decided against it."

I had a badly timed offer of something at Cloudflare pre-IPO (I don't really remember; it might just have been an interview, or contracting work, on the back of some OSS contribution), and I do occasionally regret/wonder what-if, because it's not just a random hypothetical: it was an actual opportunity I made a decision against. (Doesn't help that it's still a company on my interested-in list, much like the trap in investing of liking a stock but wishing you'd bought it lower when you first thought of it, and so continuing to not buy as it climbs. Or not even investing necessarily, but just being a consumer in an inflationary environment.)


Absolutely I do. And I could have sold when my aunt told me how the teachers had a Doge pool at work, and the guy at the electrical counter was looking at his Doge and telling me about it. There are smart ways to invest in the silliest of endeavors.


Do you also regret not playing the winning lottery numbers, now that you know what they are (or can easily look them up)?


Absolutely, now that you mention it. In addition, every day there are millions of call/put option plays that I will absolutely regret whenever I find out they did well.


I pity you as you are so full of regrets. Try to enjoy life without looking back so much


Now I regret having lived a life full of regrets.


Every time my girlfriend buys lottery tickets I tell her to buy the winning ones, and every time she gets the numbers wrong. Every. Friggin. Time.


Write out a list with all the possible numbers, and have her get one of each before buying more. That way you can have better odds. The real issue will be how long it takes to buy them all.


You should try telling her to pick the wrong numbers. She’ll either be successful or rich!


The chances of making some money from stock in a Silicon Valley darling are not really comparable to having plowed money into a meme coin that was literally created as a joke.


Not comparable at all... the latter knows and admits it's a joke/money-grab and is not trying to pretend it's actually doing something good for the world.


You cannot take as many Silicon Valley stock bets (as an employee) as shitcoin bets. Therefore, making money from the latter is easier.


I actually agree with you. I think it's easier to make money on shitcoins than on employer stock.


No. There are plenty of well paying jobs that actually match my skills. I probably would have made a bit more at OpenAI, but at the cost of not enjoying my work, and probably not excelling at it.


In the last round of over-hyped pump-and-dump companies, the usual thing was to list the company so the shares could be openly traded.

This seems very much like a "get out now" signal to everybody involved. Greater fool farming on an accelerated timeline.


> This seems very much like a "get out now" signal to everybody involved. Greater fool farming on an accelerated timeline.

If you were an early FB employee trying to diversify your wealth base in 2011[1] it turned out to have been a bad decision in retrospect but made sense then.

It's easy to say in retrospect "greater fool" vs "bad decision" but right now it is reasonable to diversify risk and doesn't imply any bad faith.

[1] https://www.theguardian.com/technology/2011/feb/10/facebook-...


Wrong. It was a good, principled decision with a surprising outcome.


Fair, although still doesn’t imply it was greater fools buying. They just had different risk profiles.


You can keep your principles; I prefer to have captured more upside.


A bad bet is still a bad bet if you win, though.


The point of it being a good decision was that it wasn’t a bet: they were locking in a known as opposed to it being unknown. I guess you can say choosing not to play is a bet of some kind but it is a different class of bet I think.


Sure. If you had better insight or wisdom than the market in general, yes, you can claim it was not a (bad) bet. Still, it's probably bad to go essentially all in on one stock, even if you are 90% sure?

Personally, I thought that Facebook had peaked around IPO and would slowly go irrelevant.


I’m still holding all of my Facebook stock from around IPO time when I worked there.


What constitutes a bad bet is largely in the eye of the beholder unless there are clear parameters to help define an EV. Also, different investors will have different tolerances for risk. I have an extremely high tolerance for risk and have been rewarded for that tolerance with outsized gains in the market over the years, gains that have dwarfed all of my jobs combined in income except for my most recent stint at NVIDIA.

A financial advisor would always tell you to unload all shares at each vest, since you are holding more shares in the form of future vestings. They would then tell you to diversify into ETFs in a mean-variance optimized portfolio tailored to your risk appetite, using the standard deviation of historical returns and the correlation of the underlying basket. This is a solid approach that produces predictable, safe returns.

I would rather let it ride, since I have insider information at most companies I work for and a better understanding of future returns on that stock than the average person on the street.

I say, take average advice, receive average returns.

:)


Relax, this is likely just their retention strategy to keep employees that may otherwise jump ship and go work for Meta/Apple/Google (who are publicly traded).


A tender offer is not a "get out now" signal. It's often a "the company is healthy" signal.


> It's often a "the company is healthy" signal.

The same was said about Stripe. Imagine buying Stripe shares at a $96BN valuation when it is now down more than 50% from the peak.

I would indeed sell some if I saw that extreme valuation of any private startup.


Actually it’s back to $70B on secondary markets: https://finance.yahoo.com/news/stripe-popping-off-secondary-...

Secondary markets always jump around because of lower liquidity. I’d wait until after IPO before making any conclusions there.


Don't do it! They are about to raise 7 trillion dollars to build new ai chips! Now with more flavour but same great crunch!


It depends on one's trust in public markets. If a large firm is held by only a few parties, then the public markets provide little benefit to the firm or its stakeholders, as any significant share movement would require a private deal anyway.


lol what you’re describing is literally the free market. We just don’t know who the greater fool is until OpenAI booms more or busts.


I mean, watching from the outside, it's hard to guess why OpenAI would be so valuable.

They were the first to market with their LLM chatbot, but zooming out that forced Google's hand in releasing their own, and there aren't any moats here other than "how much copyrighted data you're willing to train on."

Eventually, OpenAI/MS will be the Bing to Google's, well, Google. It's one thing for early adopters to go and seek out ChatGPT, but Google has so many products already, there's no reason to think that anybody would switch to ChatGPT just for a chatbot if they're already using a Google app for something else.


You're making it seem like OpenAI is a dumpster fire and not literally the only company that is MILES ahead of its competition. Not to mention they're supposedly at a billion in yearly revenue.

Not even Google can compete.

It could also be a signal that the company recognizes that they would not be where they currently are without their workers. Maybe they're giving a reason for their workers to stay (become multi-millionaires overnight) and not go to their competitors.


They are not leading their competition in terms of technology; they are ahead in public theater and as the first to market. The claims of "regulate us," "it's almost AGI and it's dangerous," and "let's put a moratorium on larger models" are all just hype, deceiving the public. They utilize the work of scientists and other companies worldwide without acknowledging their contributions. Certainly, they have done an impressive job, and the scientists and engineers working there deserve respect and recognition. However, they are not significantly ahead of current research. They are not even weeks ahead of research. The current landscape is a game of budgets, winning public opinion, and securing investments.


When you put it that way, it kind of reminds me of FTX; they too were all about leading the way on regulation. IIRC the impetus for the entire thing coming crashing down was that the other big exchange's owner thought it was extremely hypocritical how they talked about inviting regulation while operating just like most other crypto firms, completely outside of decent financial controls.

Of course not to imply that OpenAI is the same as FTX.


Sam is more trustworthy than Sam.


Yeah, I don't see what you're seeing. Ignoring research, OpenAI is bringing tools to market that nobody else is quite close to yet. Yes, they are getting closer, but not as you describe it.


Actually, I felt pk-protect-ai's post was spot on. That is, OpenAI does have, currently, the most impressive interfaces and models. But (as was famously commented on by a Google exec) they don't really have any unique moat. That is, while what OpenAI has built may be extraordinary, there is no "secret sauce" there that prevents others from copying them. And this isn't a knock on OpenAI at all. On the contrary, my understanding is that OpenAI was really the first to take the big risk in scaling their GPT models (i.e. spending the hundreds of millions to train their models before they knew what the outcome would be).


People keep saying this but after 1-2 years, nobody has gotten to their level yet.


This isn't true. Benchmarks do not define the usefulness of the models. Mixtral is much more useful for me right now than GPT-4. Look at LLaVA, or the new one, and the very impressive LWM (the text-only version is LLaMA2-based with a 512K context, which does not require a TPU to run inference). The fine-tuned LLaMA 34B is much faster than GPT-4 Turbo and less annoying, providing pretty impressive quality of results.


Who mentioned anything about benchmarks? YMMV, as it depends on exact use cases, but from my extensive testing on my cases, GPT-4 Turbo is still much easier to direct than any of the others you have mentioned.


> was famously commented on by a Google exec

It was just a rank and file IC.


I would not say that OpenAI is all that far ahead of the competition. I would not count Google out, and I would be willing to bet that OpenAI/Microsoft + Google will be the duopoly of the AI age.

At this point Gemini is as good as GPT4 (and anecdotally I think it's better at many coding assistance tasks, based on my experiments on the LMSys chatbot arena). Sora is getting a lot of press for text-to-video, but Google's Lumiere has already been out for a few weeks and produces pretty good results.

I have no doubt that OpenAI has been cooking things up. GPT-4 is an older model which others are only just catching up to now. They have an A+ team, a giant war chest, and a lot of momentum. But just because they have momentum does not mean that they have a moat.

I'd be willing to wager that the top-of-the-line foundational models are going to converge and become indistinguishable for almost all tasks. Even the open foundation models (e.g. Mixtral) are getting really good. Foundation models are not moats.

The players that have moats are Microsoft (with deep enterprise software & B2B expertise, allowing them to sell AI-powered software & ward off competition from upstarts) and Google (where their decade of investment into custom silicon allows them to train & run inference for cheaper than anyone else, by far).


I don't have a large amount of time to devote to this. But no, Google is not out. They just choke-slammed ChatGPT with the Gemini announcement.

10M context with that retrieval rate is such a monstrous leap. And to top it off, we got LargeWorldModel in the same week, capable of 1M token context with an insane retrieval rate in the open source space. So not only is the open source world currently technically ahead of ChatGPT, so is Google. Which is why OpenAI had to announce Sora: because Google's model is so far ahead of the competition. That's also why it will probably be ages before we get access to Sora. Now don't get me wrong, the average person can't afford 32 TPUs to run LWM, but we already have quants for it, which is a step towards enabling the average person (who somehow has 24-48GB of VRAM) to get a taste of that power.

What is also striking is the fact that the new models are all multimodal as a standard. We not only leapfrogged in context size, but also in modalities. The model seems to only benefit from having more modalities to work with.

I think the statement Bill Gates made claiming that "LLMs have reached a plateau" itself indicates they don't believe they can make more money from training better/larger models. Which indicates that they already did as well as they could with their existing people, and are now "years" behind Google. I never thought Google could catch up, especially after their infamous "We have no moat" situation. But it seems they actually doubled down and did something about it.

To a lot of people, last Thursday was a very nihilistic day for Local Models, as the goalposts shifted from 128-200k context to 10M tokens with near perfect retrieval. It's literally insanely scary. But luckily we got LWM, and that means we have only been 10xed.

Now the local people will work on figuring out how to bridge the gap before being leapfrogged again. What is really insane is that we have had LLaMA2 for over a year now, and nobody else figured out how to get this result from it, despite it being around so long.

I still believe there are modifications to the architecture of MoE that will unlock new powers that we haven't even dreamed of yet.

Sorry, this was supposed to be well thought out, but it turned more into stream of consciousness, and I honestly had no intention of disagreeing with you.


> But luckily we got LWM, and that means we have only been 10xed.

If I remember the paper correctly, it was something about a 4M context in there. So not 10x, but 2.5x.

> What is really insane is that we have had LLaMA2 for over a year now, and nobody else figured out how to get this result from it, despite it being around so long.

This isn't true. For now, the task of extending context to 10M tokens is brute-forced with money (increased HW requirements for training and inference, and increased training time, are also in the financial domain). And for now, there simply is no leapfrogging solution, for open source or commercial models, that would decrease the costs by orders of magnitude.


Calling it public theater is pretty laughable. The amount you get for $25 a month as a consumer is insane.

I literally have a 24/7 consultant with surface-level understanding of any topic in human history. This consultant also happens to be an amazing artist, for $25 a month, who gives me the rights to commercialize any art piece they make for me.

This is extremely hard to beat. This will be extremely hard to beat.

As a business they are succeeding.


You get a "consultant" whose every word has to be verified against an independent second source, and an """artist""" containing a digest of essentially everything actual human artists have ever uploaded to the web, who's somehow still only able to generate derivative slop utterly undifferentiated from what it's spitting out for everybody else.

LLMs are useful tools, but if they don't lead to something you can actually somewhat rely on to generate correct/consistent/factual results (see e.g. the recent Air Canada chatbot lawsuit) then the hype is a bust. TBD.


> You get a "consultant" whose every word has to be verified against an independent second source

So, no different from YouTube videos or [insert any website here]? That's still a ton of value to capture and repackage. Especially when it's packaged into an interface that humans find much more natural (i.e. "conversation") than the existing offerings.


Hence why I said "LLMs are useful tools". But the current hype pitches them as much, much more than ancillaries to traditional web search, hence why I said "[...] then the hype is a bust. TBD."


I just don't believe that LLMs have to reach the impossible bar you set of near-perfect correctness in order to be world-changing. Thus, I disagree that any outcome less than that should be considered a "bust".


I'm not sure what else you can call a scenario in which LLMs, currently hyped as (among other things) near-term replacements for human workers in all sorts of professions and positions, are not eventually able to satisfactorily (and cost-effectively) replace even the most low-skilled, largely script-driven customer service workers.

The bar I'm setting is far from "impossible"; even human children generally won't seamlessly confabulate when you ask them a question they don't know the answer to. Again citing the recent Air Canada case, these models can't even reliably answer simple questions that are definitively and objectively answered in documentation that is presumably made as freely available to them as is technically possible under the limitations of current technology.


>LLMs, currently hyped as (among other things) near-term replacements for human workers in all sorts of professions and positions

No well-informed sources are saying that. If you're saying that mainstream reporting and other non-tech folks are wrong about what the possibilities are then... obviously.


> No well-informed sources are saying that.

Yes they are, actually! I've talked to people who obsessively read practically every LLM paper that passes through the arxiv, with undeniably deep and broad knowledge of the current state of this tech, who seriously believe it's going to surpass humans within a year or two. That it may already have, in the deep dark top secret labs beneath OpenAI HQ.

However,

> If you're saying that mainstream reporting and other non-tech folks are wrong about what the possibilities are then... obviously.

If it was obvious then why are you still replying to my comments, which have very obviously been specifically addressing the mismatch between hype and reality? If the current approach doesn't scale, the hype will have been a bust! Objectively! That's what I've been talking about this whole time!

"Mainstream reporting and other non-tech folks" is a bit disingenuous, though. The primary drivers of the current unrealistic hype are software vendors and associated clingers-on looking to make a quick buck. They'll say anything, regardless of whether it's true, and as a result of those mostly-falsehoods our public lives will be flooded with awful AI tools that make everything shittier and more difficult. I can't wait!


No company in the history of companies has ever advertised themselves with the concept "we're going to destroy the world", primarily because it doesn't actually work. OpenAI are true believers, they say that sort of thing because that is what they genuinely think.


No single neural network currently has the capability to destroy the world, and it will not occur until the architecture for NNs is altered to become self-driving rather than reactive. Even then, it will not be sentient. The deception occurs when the attention shifts from the people operating new powerful technology (tools) to this technology itself.


Meanwhile, every AI company ever: "We are very excited about agents. We are working hard on agents. We want to roll out agents as soon as possible. Also, persistent memory and online learning would be nice to get."

Like, are you following the things that OpenAI and Deepmind are saying at all? The things that make current LLMs not a threat, they aim to tear down as soon as they can arrange.

OpenAI just released a video network, and one of their core touted benefits was that you could use it as an action controller!

And, um. Do you really think, when the AIs can take a simple prompt and turn it into a ten thousand step plan that requires dynamic skill acquisition, resourcing and persistence, that generating the prompt will be the one single task that stumps them? When we are at that point - and to be clear, every leading AI organisation is sprinting to reach that point earliest - then the difference between doom and safety will be one sentence: "When you are done with that, generate a new prompt." This is not how a world with a long expected lifespan looks.

edit: To be clear, I'm still not accusing OpenAI of making up the doom stuff. Even though when I phrase it like that it sounds like they're directly working on things that obviously end the world, which seems contradictory, I don't think they see it like that. To be honest, I can't explain why any doomer works at OpenAI, except in the way that people sometimes move towards explosions and gunfire. I think it's just a bug in the human brain. We want to have the danger in sight.


The agents and action controllers are the reactive components in the toolchain, aren't they?


"Self-driving" is just "reactive" with an open recursive structure. In principle, a network that processes a prompt, generates a plan, recurses a finite number of times, judges how well it did, generates a training plan to improve, outputs a corresponding follow-up prompt, and then waits for you to press a button before it repeats the whole thing with the follow-up prompt, ad infinitum, is still "reactive" - but nobody would argue that whoever presses the button is performing irreplaceable cognitive labor.

So I just don't think this captures an important distinction at the limit. If a system can generate a good action plan, turning it into an agent is just plumbing.


Actually, we can't be certain that humans themselves are not reactive. It is just that their reactions are either built in (self-preservation, reproduction), or based on much broader input (sensory, biochemistry, etc.). The current level of reactivity of the LLMs is very limited by their architectures, though, and as long as these architectures stay in place, you can't expect them to be "self-driven".


I just don't think this is the case. I suspect reactivity in LLMs is mostly limited by training. Human text data is just not suited to the way an AI needs to output data to plan long action chains - justification, not reasoning.


This might be true as well along with what I said.


> literally the only company that is MILES ahead of its competition

It's worth noting that, at least when working in a truly novel and untested industry, whoever is leading the pack would likely be the first to know when the tech hits a dead end.

By no means am I saying that's actually the case, but there is still a real possibility that LLMs and the underlying architecture don't pan out with regards to the company's goal of developing anything resembling an AGI. If there is a core challenge with the architecture that doesn't scale, OpenAI would just run into the wall when everyone else was still MILES back.


> but there is still a real possibility that LLMs and the underlying architecture don't pan out with regards to the company's goal of developing anything resembling an AGI.

There is, but in one sense, who cares? The technology is already super useful, even if it ends up not being the be-all and end-all of AGI. That is, there are really only 2 options:

1. LLMs are an important stepping stone on the way to AGI, in which case OpenAI is in a great position as the company with the best LLM.

2. LLMs turn out to be a "local maximum" in the search for AGI. I think that even if that actually is the case, OpenAI is still in a great position with much of the infrastructure, data, expertise, etc. even if they require totally new model approaches.

Also, I wouldn't worry so much about #2 because if we do ever get AGI, our economy would surely collapse, or else it would look completely unrecognizable to our current economic systems. That is, I'm always struck that when people talk about AGI fears they talk about things like Skynet and misinformation, but nearly our entire society is organized around people selling their labor for a price. I don't know what our economy will look like when/if AGI becomes real, but I do know it will change drastically if it means that the vast majority of people won't be able to price their labor more than $0.


> LLMs are an important stepping stone on the way to AGI, in which case OpenAI is in a great position as the company with the best LLM

We don't actually know this though. Assuming an AGI hasn't yet been developed, we don't know whether LLMs will actually get us there. We know they seem to have more use than previous ML systems, but until we have an AGI we can't say what will get us there.

Further, are we really assuming that developing AGI is either a shared goal or a given regardless of what people would actually want to happen? It sounds like we agree on the fundamental impacts an AGI would have on our current societal structures, do we as a society not get a say in the change? Have we effectively blessed a handful of people working in the private sector to make that decision for everyone? And if so, when do we grapple with the moral questions, like whether an AGI has rights similar to humans, or if unplugging one is murder, etc?


I think you misunderstood my comment. I gave those two bullet points as mutually exclusive options, where one of them must, pretty much by definition, be true.

That is, you responded "We don't actually know this though. Assuming an AGI hasn't yet been developed, we don't know whether LLMs will actually get us there." Exactly, we don't know if this is true, in which case if it's false my second bullet point "LLMs turn out to be a 'local maximum' in the search for AGI." is the true statement.


I guess I'm not quite sure how those are the two options. For the second scenario, if LLMs are the local maximum and hit a wall, OpenAI would only be in a good spot if the challenges hit don't invalidate core differentiators of their company.

For example, if GPU-based systems are the limiting factor they wouldn't have the edge. If the problem turns out to be in the human skills and background needed to develop an AGI they similarly wouldn't have the advantage.

Wouldn't there have to be a third scenario, where they walked down a path that doesn't pan out at all and requires a fundamental rethink effectively going back to square one?


If human skills behind the model are a limiting factor, then OpenAI is in a pretty good position: they have a headstart, and software tends towards "winner takes all" because of the low marginal cost. Look at Google search for a great example of this in a related (perhaps even the same...) market.

Of course, a pretty good position doesn't guarantee winning, but GP didn't claim that.

But yeah, there are probably outcomes in which LLMs are a local maximum and ultimately dead end, and OpenAI will have a hard time holding on because the market turns out more competitive. And somebody might beat them to whatever the next important invention is. We'll see.


It doesn’t need to pan out as a way to create AGI though. It just needs to be useful and profitable.


At least from what I've seen and how I've seen others use LLMs, the general consensus seems to be that they're useful for the basics today but are more of a promising tech than something that's already landed.

If OpenAI features were to freeze at what we have today I would be surprised if the company stayed around without a major pivot.

Again I'm in no on way saying this is actually the case, only a hypothetical since the tech is still very new and we don't know what we don't know.


> If OpenAI features were to freeze at what we have today I would be surprised if the company stayed around without a major pivot.

This isn't really true. GPT4 is incredibly useful right now and being used by lots of business processes without any improvements being needed.

Every incremental improvement (eg Gemini 1.5 huge context window) opens up even more possibilities.

AGI isn't required at all.


Very much this. It's incredible what we can do with it: building louie.ai, our limiter has been more ourselves than OpenAI. Though of course a better LLM (smarter, faster, cheaper) does enable more.

The main problem with freezing is the moat disappears. Others are steadily catching up, and on specific benchmarks, even surpassing. Groq.com is insane wrt perf.


Groq's competition is Nvidia, not OpenAI. OpenAI can run their models on Groq machines as well.


Groq has been public about launching an API, so running Groq+Mixtral vs openai.com (vs others) is a real topic, esp. when ~half of OpenAI's revenue is their API.


But it's costly. It would be even more useful if it were 1000 times cheaper than it is now. And we don't yet know whether OpenAI earns more than it spends on renting cards or not.


Sure. Agree with that. But we can be very confident in the price of compute decreasing over time.

(Although we can make informed estimates on OpenAI's cost structure for serving models well enough to guess they are probably breaking even on it, or pretty close to it. E.g. listen to the Gradient Dissent interview with the Together AI founders; it's clear they are doing the same, and unless you think OpenAI is remarkably worse at serving technology, it implies they are in the breaking-even ballpark.)

And we still don’t need AGI for it to be a great business.


What do you mean by tech dead end? Has Oracle reached a tech dead end? Are they still a successful business? What about Salesforce, did they hit their tech brick wall yet? Are they still a successful business?

I mean we all work in an industry where we literally see inferior tech win out all the time, not just in usage but in money made.


OpenAI's stated goal is to develop an AGI. My point was just that until they get there we'll have no idea whether LLM tech will lead to an AGI.

The company could always pivot as they learn the limitations of LLMs and potentially find other options, though I would argue that their current valuation is extremely high and based largely on the promise of tech they really haven't developed yet. If roadblocks are hit they could be in a tight spot having to live up to such high expectations.

That said, as much as I'd like to see the business goal of developing an AGI crash and burn, I wouldn't bet against them. I just hope we somehow solve the seemingly unsolvable alignment problem first.


To be fair, they are not miles ahead of everyone completely - they're good at shipping products. Other companies (such as deepmind/Google) have capabilities approximately on par, or sometimes better than OpenAI - they just aren't very good at making products out of their research.


No one has released a GPT-4 competitor after a year.

I'd say that's quite a lead.

Maybe this has changed now with new Gemini model, but still that means one year ahead of everyone.


Again, that's a webapp product for the public - not state of the art research. These are very different things.


Research without the ability to create better models and products doesn't mean much. If the research is so good, we should see something coming out of it.


Or OpenAI is just better at marketing / hyping their products and creating the illusion that they are ahead.


Wow, people working for a company with the goal of profiting financially from it? That's really sad.


Well I mean they are supposed to be non profit. Not that I’m complaining, OpenAI does great stuff.


Staff (can/do) get paid at a not-for-profit.


Do you have the idea that non-profit means that everyone is a volunteer? Non-profit just means "no profit" not "no pay"


Non-profit doesn’t mean zero revenue.


The problem is when speculation is a (large) part of your compensation. You might not do what's right in your work.

The same is true for the real-estate market: when a corner of the market (e.g. apartments in the city) outperforms the rest, you might stop doing what's right for you and start speculating. (Being in the process of looking for a place to live, I can see several people who lost significant amounts after having bought in 06/07 and sold 10 years later; hopefully for them, they bought the right place to live and did not care about the losses.)


Nothin’ wrong with that. It’s the lying or “divergence from stated goals” that’s bothersome.


Don't you think it changes their goals? They now only care about the implications of their work as long as it is financial. Society? Fuck them, we've got big money to make!


OpenAI is nothing without its people


This is why they did it.


Or they trusted Sam's ability to run an AI company better than the 2 other e/acc advocates who wanted to pause AI development and brought in a temporary CEO who was on a podcast that same year calling for a multi-year pause on AI development.

Everyone points to money, but they are also working on the most exciting technology in recent history. Siding against Sam would mean crippling both that and likely their long-term job prospects, and they never got a straight story for why he was being taken out.

People love to over simplify and reduce things while glossing over all the nuance.


The only reason they didn't want the decels to do their shtick is because that would mean, indeed, losing heaps of potential money. But you can believe in the fable about the most exciting technology yada yada yada.


You mean EA, not e/acc?


First, it was BS panic by the board members, and everyone knows this; it in turn ruined everyone's bag, and for no good reason. Think about it: the stated reason was that they wanted to disband the entire company because the world was about to end.


Yep, now with the release of Sora we see that they're actually making the world a better and more creative place. People love to spout doom and gloom for no reason.


that's what I keep saying

nightmare fuel puppies playing in the snow is what will make the world a better place


People definitely like to spout doom and gloom, but that doesn't avoid the fact that OpenAI is actually trying to develop artificial intelligence.

Ignore the fear mongering or political/military concerns for a moment. There are very real questions that have to be answered, from logistical questions like how we'll recognize when we develop an intelligence or the moral questions like does an AI have rights similar to humans.

Diamond nanobots ripping us apart to repurpose our atoms isn't a terribly useful conversation. But whether anyone can own an AI, whether turning one off is murder, or whether AIs deserve all the same rights as humans absolutely are. IMO those should have been prerequisites before we even considered questions like how to align an AI or the risks of their existence; we shouldn't be going down that road at all if we're setting ourselves up to get caught with our pants down.


Phase 2 of Sam's playbook to take over a company.

See what he did at Reddit below.

He even admits to it in the thread.

https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...


Do you feel this is a bad strategy? Or what?

Liquidity for employees is great. Nobody is forcing them to sell.


It's not a bad strategy; it's about keeping an eye on Sam Altman's agenda. He is a political animal, something few people on HN seem to realize.

What makes him great at his job isn't the typical techie path of becoming great at technical work and building a great product. His ability is to influence and gain power through good and bad means.

Today's strategy may be good for employees; tomorrow maybe not. And no, he isn't thinking of what's best for employees. They just happen to be aligned at the moment. He is goal oriented regardless of the means. That's just my ill-informed opinion though.


If you had to speculate, what is the end goal? Power for power’s sake?


Purpose in life. Give yourself hundreds of millions of dollars. What now? Many people might say they'd go retire on a beach, and that's really fun for like a year or two. But even endless self indulgence of that sort becomes quickly pointless.

For all of us stuck in the hamster wheel, purpose is easy, because it's forced upon us. But for those who manage to step outside of the wheel, what now? Taken to extremes, you even see things like 90+ year old Warren Buffett deciding to step back into the wheel, continuing to show up to grind his days away at the office.

If you appreciate that lots of behaviors of people are them simply looking for a purpose in life - a lot of things start to make much more sense, running the gamut from political extremism, activism extremism, or people who already have practically infinite wealth just turning life into a game of 'make number go up.'


Cyberpunk utopia, by his words.

Cyberpunk dystopia, by those who understand what he’s saying.


Money, power just helps him get more.


I always thought it's the other way around.


There is a point at which marginal money is worthless. Usually it's nation states that reach that point...


I think people like power because it helps them get more money. You can easily get more power with money because you just buy people.


He wants to be in the tech billionaire club, that's validation that he didn't fall ass backwards into success.


I think it makes sense. As long as Microsoft is cool with it, they'll keep providing the compute power necessary to keep OpenAI alive.


Of course Microsoft is cool with it. They reinstalled Sam as CEO specifically so he could do this.


I don’t see the parallels at all. Reddit was floundering and struggling to find revenue and funding. OpenAI is thriving and people are desperate to invest in it (hence this offering at an incredible valuation).

Liquidity for startup employees is a good thing, not a conspiracy maneuver.


It will be interesting to see if this ends up reading as a preemptive play.

If OpenAI hits a wall and the LLM/GPT trail runs dry, or if regulation somehow stifles the industry, it would be a very smart move to offer liquidity now while the fortunes are there for the taking.


I think it's less about LLMs suddenly hitting a theoretical ceiling or politics getting a grip on the industry. But there is a very real threat from actual open AI research. Free models have advanced remarkably and if you choose to, you can run something between GPT3.5 and GPT4 all by yourself right now. With the spread of cloud GPU providers, this could become cheaper than anything OpenAI offers very soon. I fully expect a GPT-4 (or better) level open weights model before this year is over. Right now OpenAI is desperately trying to differentiate itself from other chat model providers by offering tons of side features on the backend that have little to do with the LLM itself. That might lock in some of the bigger customers for a while, but this strategy isn't sustainable for long in the current climate.


Also, open weights models, even if not open-source, allow you to get rid of censorship and alignment.

Most US models are so heavily censored that even Chinese (and supposedly censored) models like Qwen-VL feel open-minded by comparison.


Well, in light of groq.com (and more to come, I'm sure), and the fact that people absolutely take it for the same thing that ChatGPT is... they are indeed in for some downfall, and your comment may be spot on.


Not sure; it might be a way to judge which employees are loyal to the company.


I think holding stock at the place you work signals a few things, but loyalty isn't one of them.

Most companies aren't tradeable, and most employees don't have equity, so it's not like owning-where-you-work is a default position.

Most investment advisors caution against having a significant part of your assets in your employer, because you expose yourself to excessive risk. (If the company tanks, you lose your job and your savings at the same time.)

The question to ask is this; imagine you won a million $, would your first instinct be to invest it with your employer? Probably not. You would get an advisor, make a plan, build a balanced portfolio, diversify risk.

So to me, a signal from employees not selling some of their stake (assuming it's significant) is that they're poor at financial planning. Maybe that matters, maybe it doesn't.


The majority of the advisors that employees could afford just follow the crowd. They are successful because "number always go up". An octopus could probably do it as effectively as most advisors.

I remember one guy I talked to just after the 2022 escalation in the Russo-Ukrainian war, and his point of view was, "this is great, Russia will be defeated in no time and then there will be all the rebuilding economic activity". I don't believe the moron came up with that fantasy by himself; he was told.


I'm not thinking of financial advice here as "which stocks to pick", but more prosaic things like diversification, market sectors, growth versus dividend, high and low liquidity, short and long terms, and so on.

A "plan" considers your goals, your age, your income, your responsibilities, and so on.


While this is completely correct, startup founders inhabit a reality distortion field of their own where anything short of full and absolute devotion to the mission is seen as "not being serious". It's not enough that you like the job, and are happy that it pays well, you need to be a card carrying member of the club.

This makes sense for companies that have like under 50 employees, but far too many founders expect to hire zealots well into the 500+ employee range.

This same shit shows up in interviews too, where candidates are expected to profess just how much they would LOVE to work for that company, or how much they care about the mission. Yadda Yadda.


Yeah absolutely.

On the up side startups don't have equity you can trade, so you're unlikely to demonstrate disloyalty this way.

And to be fair, if you go work for a startup you know what you're getting into. (And hopefully you know that the equity they give you has about the same value as a lottery ticket.)


Nah, you couldn't blame early employees for at least partially cashing out. I'd probably sell 80%. Anyone who hangs on to all of it probably has a gambling addiction... Let it ride! At what point do you assign sufficient gambling problem or disloyalty?


who did reddit just make a deal with?

sama plays many games, it’s never about a single company.

why a single company trusts him is a mystery.


Wow. This is something else. No wonder I couldn't make it in the VC funded world.


“Take”? More like “buy for a very good price”, nothing wrong with that.


Was this conspiracy theory ever tested in court? Or by an investigative journalist?


[flagged]


I think there are 2-3 different things going on here:

(1) Yishan is a master troll in the style of some of the very best from Usenet and old Slashdot. Every few months you'll see him pop out another banger tweet with 2M+ views just for the glee of the engagement. (He wrote the 'best Paris baguette' counterfactual tweet, for example.)

(2) Yishan is also very good at using those troll-style skills to shift the Internet user media zeitgeist. Think like Paul G's "submarine PR", but for the median redditor opinion instead of the press. He famously shifted the opinion on /u/ekjp with just 2 or 3 well crafted posts. Hard to remember how much that crowd hated her, and how they've mellowed now.

(3) Yishan has also said, I think with total sincerity and outside any kayfabe, that his wife has long covid and that he's very sensitive to the serious long term health issues she developed. See e.g. https://twitter.com/yishan/status/1571393616928714752

I don't doubt the sincerity of his beliefs here.


(1) You can't really go based off "views" on Twitter. They don't really mean anything, just that someone scrolled past the tweet. I found that baguette tweet you were talking about, and it only had a couple thousand likes. Nothing about him seems like a master troll, that's a cop out in the style of "joke's on them I was only pretending".

(2) I think that was more down to him being the former CEO with access to inside information rather than a master manipulator...

(3) His wife having long COVID doesn't give him a free pass to promote literal conspiracy theories, where all the world's governments and press are working together to hide the truth from everyone.


It is very generous of OpenAI workers to help everyone else get rich. How nice it is to see when a mass of employees divest before what will surely be a hockey stick graph of unlimited growth


i think the stick has already been swung my friend


I think he is being sarcastic


As an employee, you definitely want to cash in on your first few million, then you can still be around when the hockey stick graph either goes unlimited or crashes to 0.

Either way your future is secure.


I know people don't love OAI, but good on these people for the well-deserved payday.


Also known as unbalanced concentration of wealth, which is what tech has been enabling at a faster and faster pace for the last 25 years.


This is probably a deconcentration event.

Rich people with a lot of money giving said money to smart working class nerds that happened to be hired by OpenAI.


How anyone can assume that the employment demographic of OpenAI is made up of "smart working class nerds" is beyond me.

They hire top talent with regards to AI and AI research. I'd wager a majority of them are less working class and more smart, trained, academic nerds, with an unreal number of PhDs in the mix.


very likely the majority of them have to work for a living, hence "working class"

many are probably immigrants as well

edit: according to Wikipedia, some people use "middle class" for white-collar workers


We would need to know more about the demographic of said people.

These shares probably end up in pension funds. So in the end, it is probably going to be primary school teachers from Minnesota that pay Silicon Valley-based software developers.

That is, unless the shares continue to create value for the pension funds. For that, we will have to wait another 15 years to know.


Top 5% of earners in the US are making $340k USD[0]. Is it unreasonable for people working at OpenAI to get 2-3x that?

If you want to be angry at the unbalanced concentration of wealth, take a look at Bezos[1].

[0] https://www.unbiased.com/discover/banking/how-much-income-pu...

[1] https://mkorostoff.github.io/1-pixel-wealth/


Why do you feel they deserve that? For making the Internet even more of a wasteland of automated junk?


you really can't see any benefits to the technology they are creating?


No, not really. And certainly not to the degree that the above commenter was assigning. It's just enabling the world to become even more complex.


There's some benefits, but also massive, massive society destroying downsides. In retrospect, the downsides of social media like addiction, social isolation, and political misinformation, radicalization were hand waved away. We were told "THINK OF THE UPSIDE OF SOCIAL MEDIA, THINK OF THE CONNECTIONS" and now think of how isolated and depressed and addicted everyone is now. The exact same thing will happen with AI and VR.


It’s possible to be angry at both!


So now on hackernews the mere act of earning more money than someone else is a sin, especially when it's done by workers creating an entirely new market of products and services that we couldn't have imagined a mere 5 years ago.

I would love to unsubscribe from this particular strain of brainrot that has infested midwit discourse, but it seems to be everywhere.


(Take with a pinch of salt, I don't know economics or social science.)

It seems that even though economic growth raises all ships, just some more than others, inequality causes social unrest even if people are in a better position than before (e.g. having access to new innovations and a higher standard of living).


> the mere act of earning more money than someone else is a sin

It is pretty universal among the animal kingdom https://m.youtube.com/watch?v=-KSryJXDpZo


employees more commonly having equity in the company they work for is quite the opposite of this


People who make things other people want make more money, news at 11.

Sorry to break it to you but a solid 80% of the workforce exists to support the 20% actually creating wealth. Nobody is getting rich driving a truck for Amazon, or building said truck for them. It’s the guy who ordered the truck who’s getting rich.


Pretty insane for a "non-profit".

Yes, I know the non-profit owns the for-profit, but it's all clearly a cleverly (?) designed Trojan horse and smoke and mirrors to pump and dump like every other startup.


Anyone at private companies that has been able to participate in these share sales -- how does it work? What platform is used? Do you sell back to the company and then just get an extra large check that month? Can you trade it for shares of something else?


In my case (Cruise) I got an offer once a quarter to liquidate any number of my vested RSUs at a price set by some third-party assessor (roughly equal to the last round's valuation). If you sell, you get cash in your brokerage, minus the amount needed to cover taxes. So it's basically like regular RSUs, except the price is static.

More info (although I personally didn't use Carta for this): https://carta.com/blog/tender-offer-faq/#


I've worked places that have used Carta for this. Sometimes the company buys the shares back, but I think the more common case is that it's new investors. You do get a check, but it's separate from your regular one. Sales like this are long term capital gains, so they're taxed differently than your regular income. No you cannot trade it for shares of something else [0].

[0] - There are investment vehicles (exchange funds) that do this for publicly traded stocks. They're a good way of diversifying your portfolio if you're heavily weighted in one stock, but they typically keep that capital locked up for seven years which is not great if you value liquidity.


In my case, I signed a part of my shares off and received an XL check (after paying XL taxes). That was a "take it or leave it" kind of deal, no platform was used.


Long ago, in some deals, you the junior person had to get the money to pay the tax on the transfer before you had a right to sell. Therefore, there was an intermediate step where the junior person might have to borrow money to complete the deal; failing that borrowing, the junior person might not be able to take the compensation.


That's still common in less sophisticated startups and markets (they think they are sophisticated, though; it just depends on which VCs gave them which legal counsel),

but there are a lot of lenders for cashless exercise


So it was a buyback?


To the best of my understanding, it wasn’t a “buyback” as the money came from the new investors and not from the company itself. The new owner of the options (or shares) that were previously mine is the new investor — and again, this is what I understand, it might or might not be the actual case. None of it is my business or of any interest to me, though.


Technical term is “tender offer”, not necessarily buyback - other investors can participate too


My previous employer had a tender offer as part of a new funding round.

I was an ex-employee already and had exercised my options within the 90 day window of resigning. If I remember correctly, we used Carta for both exercising the options and the tender offer.

For the tender offer, the company set a deadline and a price per share. Current employees were limited to selling a certain % of their options/shares. The website enforced the same restriction for me, which seemed like a bug, but I didn’t plan on selling most of my shares anyway. I chose a number of shares in a web form, clicked a button, and got cash direct-deposited. I had a larger than usual tax bill but I paid it while handling my usual annual tax return. No penalty on the taxes due to the “safe harbor” rule.


Assuming an outside investor vs a buyback, tender offers are usually done through a platform such as Nasdaq Private Market. Fees are usually much more reasonable vs through marketplaces. Funds are wired at closing. There’s usually a virtual data room available with financials you can share with an attorney and/or financial advisor.

https://www.nasdaqprivatemarket.com/


> Fees are usually much more reasonable vs through marketplaces

Fees on tenders are much higher than in open-market trades; they're just baked into the buy side. The handshake deal management does with tenders is underpricing the offering so managers can add fees on top. Where open-market transactions happen alongside tenders, at least on the institutional side, it's not uncommon to see 20%+ premiums.


What are your thoughts when transactions at open markets aren’t clearing above the tender offer price?


> What are your thoughts when transactions at open markets aren’t clearing above the tender offer price?

It signals a dysfunction. If before, information asymmetry. This, fortunately, corrected quickly. If during or after, company prospects may have changed since the tender was announced, or the company is restricting transfers to cronies. The latter is quite common, and a major profit centre for growth capital VCs.


Appreciate it.


Accredited investors sign up to private market brokers such as EquityZen, Forge or FlexTrade to match sellers of company stock (typically employees) with buyers (mostly outside investors).

Depending on market conditions, there can be private fundraising through those marketplaces, through which employees are able to directly sell their existing shares if they want to. This also depends on the company allowing it.


Not FlexTrade, they are a trading systems ISV, not a private market broker


Does anyone know if OpenAI shares are listed on EquityZen, or will be?


This is an uneducated opinion, but I'd be absolutely shocked if it's not a major investor that is buying effectively privately.

Random people won't be able to get shares.


It’s up to the individual employee.


You don't sell back to the company. Nasdaq and other exchanges have platforms for facilitating secondary market sales. You can't trade it for something else; it's a sale for cash. And no, it doesn't come into your paycheck; you nominate a bank account using the platform.


Where's the moat for OpenAI? How does the company not end up as just a feature stitched into Azure somewhere?


I'm surprised to see this comment. I think OpenAI has a huge moat. Companies like Google who are pouring insane resources into beating OpenAI are struggling to. I think they have the potential to be worth $1T. I think the moat will continue to grow.

I haven't tried Gemini 1.5 yet, but every single model I have tried hasn't come close to GPT-4. I don't care what benchmarks say; they really don't come close.

If Gemini 1.5 is competitive, that was a full year to catch up. I'm sure OpenAI has other things cooking.


I've read several reports of Gemini 1.5 being excellent, and I have no reason to doubt those reports, because we can slag Google off all we like, but there is nothing magic about what OpenAI is doing, and it's only a short matter of time till others can do it too.

Again, you might not like Google, but I mean, they have a pretty kick-ass AI team and all the money and hardware in the world to catch up, for lord's sake.

People talk and there is zero chance anything OpenAI does will be kept secret forever.


I hope the tone of my comment did not convey I thought poorly of anyone here. Google has done incredible work in AI.

Which I think makes the fact that it took a year to catch up a damn convincing argument that OpenAI has something special going on.

And what do you mean there's nothing magic about what OpenAI is doing? LLMs are about as damn near close to magic as we have. It's incredible that they work at all. They are so damn cool.

And Sora is crazy impressive too.

3 years ago, I wouldn't have guessed we would be anywhere near where we already are, and I was in the thick of it, doing ML research and engineering at that time.

Eventually sure, but things are moving so fast!


It's all impressive, but in 2 years you'll be bored with it and new things will be happening, new companies will exist, new ideas will emerge, and on it goes.

Once you've generated your 50th short movie with Sora 5 you'll be sick of that too.


I'm not so sure. I'm not bored with my OS or IDE - core parts of my day-to-day life. I'm not bored with YouTube.

If they are tools that provide value, it's not about entertainment. If I can learn better or do things better with their aid, I'll keep using them.

I'm much more interested in generating a 3blue1brown quality educational video than season two of firefly.


I mean, you don't use those things and sit there in awe of them everyday.


I mean.. is there anything magic about what Nvidia's doing?


No, and what's your point?


Training & individual user preference data - specifically the data being generated by people interacting with ChatGPT and DALL-E. Maybe Google can out-data them with all the data they currently hold, but they'd better hurry before the network effects get too strong in favour of OpenAI.


In an exponential game where the prize is ASI, being just a couple of steps ahead may turn out to be enough.


Try training and running really really big models.


The old industry question is whether it's complex and protected enough (or the separation wanted by the consumer, but that's an old guy's dream) that the owners of the production equipment can't do it on their own.

Azure, AWS, Google are easy candidates, Nvidia might even be proof.


You mean now. Commodity soon.


It's going the other way. The number of levers to pull around architecture, data amount/type/quality, hardware optimizations, decoding optimizations, evals, system architecture (distinct from model architecture), alignment tricks and various other things means the complexity is just getting started.


Because of scaling advantages, it likely never will be. Even if you could run a GPT-4 quality model on your laptop (you can't), by the time you can, OpenAI will be running a model that uses orders of magnitude more compute.


This could easily end up like the PC revolution though. ChatGPT being the mainframe of AI, and when the hardware catches up and performance is “good enough” we’re all running our own models all over the place.


I don't think compute capacity will be the determining factor for the moat. It will most likely be access to various types of data, and I believe on that front Meta and Google are way ahead.


Stupid question, but why aren't they doing it already? What's their bottleneck right now?


Try shoving a couple trillion (estimated) parameters in memory. Or shuttling them back and forth from disk as required to generate a single token, and then shuttling that into the chip. Assuming they are using FP16, it's 4 TB give or take. They aren't kidding when they call them "Large".
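For a rough sense of scale (the ~2T parameter figure is rumor, used purely for illustration):

    # Back-of-the-envelope memory footprint for a very large model.
    # The ~2T parameter count is speculation, not a confirmed figure.
    params = 2e12          # ~2 trillion parameters (assumed)
    bytes_per_param = 2    # FP16 = 2 bytes per parameter
    weights = params * bytes_per_param
    print(weights / 1e12)  # 4.0 TB of weights alone

    # One 80 GB accelerator holds 80e9 bytes, so the weights alone need
    # on the order of 50 devices, before any KV-cache overhead.
    print(weights / 80e9)  # 50.0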


I don't understand AI valuations right now.

NVDA has a market cap of $1.8T and all they're doing is the middle part of the value chain.

OpenAI is doing all the actual productization and they get a valuation 20x less?


The catch is that there are >10 startups and big-tech teams trying to do what OpenAI is doing, and at least a few of them appear to be competitive. One or two of them may emerge as a winner. But there's only one Nvidia.

With that said, I do believe Nvidia is overvalued - if it triples profit its PE ratio would still be 30 (i.e. 30 years to return on investment), while there's a fairly good chance someone would catch up to them in 10-20 years.


I’m pretty sure you’re looking at trailing PE. If you look at forward PE it’s only about 35. If you tripled profit, we’d be looking at about 12.

Using trailing P/E paints an inaccurate picture for a high-growth company, so it makes more sense to just take the last quarter and project forward.
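A toy example of why the two multiples diverge (quarterly numbers below are made up to show the mechanics, not Nvidia's actual figures):

    # Trailing P/E vs. annualized-latest-quarter P/E for a fast grower.
    # All earnings figures are illustrative, not Nvidia's actuals.
    market_cap = 1.8e12
    quarterly_earnings = [2e9, 3e9, 6e9, 12e9]  # rapid growth over the year

    trailing_pe = market_cap / sum(quarterly_earnings)       # stale quarters
    run_rate_pe = market_cap / (quarterly_earnings[-1] * 4)  # latest * 4

    print(round(trailing_pe))      # 78 - looks very expensive
    print(round(run_rate_pe))      # 38 - close to the ~35 cited above
    print(round(run_rate_pe / 3))  # 12 - if profit tripled from here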


Thanks, I stand corrected. I guess at 35 times latest earnings it's more reasonably valued. Nvidia's growth in just the past year was stronger than I expected. At this level we need more nuanced arguments.

The next thing to explain the discrepancy between Nvidia's valuation and OpenAI's would be that Nvidia's monopoly position effectively eats into the profits of the AI startups for the foreseeable future. Had OpenAI already been profitable, its valuation would have exceeded $86B.


> I guess at 35 times latest earnings it's more reasonably valued

If you are willing to just annualize quarterly earnings, which I think is reasonable for NVidia, it is valued at around 40 times the current estimated earnings for this quarter, which isn't too overvalued.

The bigger thing I worry about with NVidia is not current earnings but the possibility that the earnings won't last, when either the AI wave fades or competitors enter the market, leading to a loss in margin.


> But there's only one Nvidia.

I disagree. Google have their tensor chips – whether they work well enough for those outside of Google is somewhat irrelevant; they clearly work for Google, who are going to be one of the major players in serving AI for the foreseeable future (I'm biased, but it seems clear to me). Microsoft have their own chips on the way, rumoured to arrive this year. Amazon have their own chips on the way, rumoured to have been in development for a number of years.

All 3 of these companies sell Nvidia hardware on their cloud offerings because it's what buyers want, but with Nvidia's pricing and resource constraints there is a huge pressure on the cloud providers to push customers into their own offerings. I don't expect any to stop offering Nvidia chips directly, but I'd bet that in the next few years all of their hosted value-add services will be on alternatives (i.e. hosted AI, inference, training, etc where customers don't need to know the hardware).

Nvidia have more of a moat than OpenAI for sure, but I think Nvidia's best days are 2023/4, and that things will look very different soon.


With the pace of innovation at Google this is absolutely not guaranteed. Speaking from a historical perspective, it is much more likely that Alphabet acquires another startup or a small ASIC-based company that got the interconnect part right. And still, it is not guaranteed. This Gemini thing has tried to launch like three times already, but can't even register as a blip in comparison with OAI. Besides, OAI is more or less MS.


I'm talking specifically about chips here. Google has been developing their tensor stuff for a while now, and it's been publicly documented as running much of their training and inference stacks. The fact that they are getting value out of it in models competitive with others suggests that the chips are basically a success, does it not?


Well, there are chips and then there are chips, and at this moment we know little about the number of Google TPUs in use and the performance they deliver.

Similarly, when IBM manages to start producing volumes of their neuromorphic chip, Google's TPUs, Nvidia, and even Groq LPUs may seem obsolete.


That's fair, we don't know. I guess my point is that with all 3 of the main cloud providers with their own chip programs, and with Google having a proven track record of training/serving competitive LLMs on theirs, I'm not that bullish about Nvidia at anywhere near the current prices.

I think this is most likely a temporary blip. GPUs were a bit of a commodity 5 years ago, with Nvidia, AMD, and Intel all producing reasonable stuff. Large AI accelerator chips weren't much of a market ~5 years ago, Nvidia were first to take the market, but in a few years time they'll also be back to commodity status.

Nvidia have a small moat with CUDA, but their eye-watering prices are a huge incentive for users to try alternatives, and ultimately the current price is built on them being the only provider of GPUs with 40/80GB of memory. That's the fundamental enabling technology, and that's not particularly tricky for competitors to replicate.

Nvidia may be the "best" AI accelerator chips on the market for years to come, but being 20% better and 20% more expensive than AMD, and all the cloud providers using their own in-house chips where they can, is not a $1.8Tn company as far as I can tell, it's much more like what Nvidia were ~5 years ago.


Well, you make a fair point. Let me also raise the question: why were Intel, AMD, and the like so slow to develop custom accelerators for ML tasks? It seems incredibly short-sighted. I mean, the ML area has been developing steadily for at least 20 years, with vast amounts of new stuff coming since 2013 perhaps. That makes 10 years, and I believe the dev cycle for new chips is potentially around a decade.

So the question is: where are these guys' ML chips? Sorry, but AVX-512 is not something that provides enough juice, and apparently some smart-head at Intel decided to lock end-users out of it?

Because, honestly, it was not NVidia who pushed GPUs forward, but the brave CUDA devs who actually created valuable software to run on top of them - first for crypto mining, then for LLMs and NNs in general.

Honestly, I'm starting to really despise this company, even though there's a 3090 Ti in my home box. And with the most recent talk given by the CEO - fingers crossed someone comes and eats their lunch; they so much deserve it.


Current valuations of companies don't necessarily need to reflect future earnings as a smooth curve. For example, if I believe there's a 50% chance that someone will catch up to Nvidia in 20 years, the stock should still be priced well above 20x earnings, because they might not. A lot of investors might feel that way, and it might be another decade until the stock price drops significantly.


> fairly good chance someone would catch up to them in 10-20 years.

I think that's the bet. Maybe someone could catch up, but what are the chances incumbents buy new entrants or otherwise defeat them? What's the chance regulators will get involved? If you rate those favorably for NVIDIA, you'll price it higher.


Notably, software is the moat.

There’s more hardware than nvidia but it takes some time to unwrap the proverbial matmul from CUDA. Much more hardware is in the pipeline, it just (ahem) has to be much better than nvidia to make migration from CUDA worth it. groq is one such limited but extremely impressive example IMHO.


NVDA has effectively established monopoly status in the ML training hardware business, and you cannot get in there without putting up billions in upfront costs. OpenAI is far from that position, and we're seeing lots of competitors rapidly rising from both big tech and the startup scene.


OpenAI is supplying the software underlying the productization. I would consider it far more likely that Meta or Google releases free software that becomes the new standard than that AMD/ARM/Intel manage to steal a significant amount of NVidia's business. And even if AMD came out with something just as good tomorrow, it's not like 2x the production capabilities would significantly hurt the price point for AI hardware.


When there's a gold rush, do you become a prospector or do you start selling shovels, pickaxes, and hotel rooms?


Came for this. Shovel sellers have a near-guaranteed ROI which is lower than the speculative high but far higher than what the vast majority of gold-seekers ever find.

NVIDIA makes bank on every single failed AI venture, as long as they buy its chips.


20x is roughly the revenue multiple between Nvidia and OpenAI, and they are businesses with similar margins.


I think the logic is roughly: _some_ people that want to deploy LLMs for things will pay OpenAI. _everyone_ that wants to deploy LLMs for things will pay NVDA.


This is what has made the big players seriously pissed since the first convnet revolution. NVIDIA's customers are dying to see worthwhile competition.


Nvidia has a monopoly via CUDA. If they decided to sell cards for 10x what they charge now, no one would have a choice but to pay up for the next year.


Forget the multiplier compared to OpenAI; Nvidia's valuation compared to their own financials is completely insane. I'm not sure where we lost our way, but valuations are completely disconnected from profits and total revenue.


NVIDIA can print money as long as TSMC can make their chips


The chips are a commodity at that point, why shouldn't investors follow a fairly standard valuation method based on revenue or profits?


They're not commodities at all, by definition. No one but NVIDIA can make their GPUs, no one but TSMC can make the silicon, and no one can compete with CUDA on the software side. It's one of the most differentiated products on the market.


That depends on your definition of a commodity; some definitions effectively limit it to raw materials, while others define it as any economic good [1].

If Nvidia can truly print money as long as TSMC can make chips, as claimed above, I'd argue that's at least in a gray area for commodities. The chip is effectively a raw material as far as Nvidia is concerned, and it acts much like gold or oil in that scenario as the argument is that there is an unlimited market willing to buy any GPU that Nvidia can create.

Having a monopoly doesn't mean the product itself isn't a commodity. De Beers has a monopoly on diamonds, but I'd expect diamonds to fit into most people's definition of a commodity.

[1] https://en.m.wikipedia.org/wiki/Commodity


It's not "any" economic good, it's any "fungible" economic good. The first line of your linked wikipedia article: In economics, a commodity is an economic good, usually a resource, that specifically has full or substantial _fungibility_: that is, the market treats instances of the good as equivalent or nearly so with _no regard to who produced them_. (Emphasis mine)

That's the definition everyone in economics uses and NVIDIA has never fit that definition. NVIDIA GPUs are not fungible. Intel and ATI GPUs are not replacements as far as AI is concerned. There are zero substitutions. They are not commodities, they are differentiated goods that no one else can produce.

> Having a monopoly doesn't mean the product itself isn't a commodity. De Beers has a monopoly on diamonds, but I'd expect diamonds to fit into most people's definition of a commodity.

Yes because the definition of a commodity is fungibility not the number of vendors. You can go online to Alibaba now and buy a bag of synthetic diamonds that are perfect replacement for mined diamonds. De Beers never really had a monopoly, they just controlled the market for consumer diamonds until the 80s.

NVIDIA has a monopoly because its product is not a commodity.


This is such a boring, semantic argument

NVIDIA’s products can colloquially be described as commodities in some contexts, as GPUs from NVIDIA can be interchangeable with GPUs from other manufacturers like AMD. For specific tasks like gaming or basic computing, the brand may not matter as much as the specifications, making them somewhat commodity-like in those scenarios

Whilst this isn’t the traditional economic definition of commodity, speaking loosely, I think it’s fair enough to describe GPUs as a form of commodity. The important thing is what’s being communicated, not the semantic definition. The above comment’s point was pretty clear IMO


> The important thing is what’s being communicated, not the semantic definition. The above comment’s point was pretty clear IMO

But the argument is that Nvidia should follow a standard valuation model, since their product is a commodity.

Both claims are incorrect. This rests either on a misunderstanding of what a commodity is, or on a misunderstanding of Nvidia's position in the AI segment.

As it stands at the moment, their products are not a commodity in this field; they cannot be replaced, and thus you cannot apply a more standard valuation model.


You are stuck on the word commodity without understanding why it is important in valuation. What the parent is saying is that some companies have no pricing power, since their products can be replaced with close to zero difference.

Nvidia is not valued as a commodity because, now and in the near future, people expect them to keep the superiority that allows them to charge $30,000+ for one card.


> as GPUs from NVIDIA can be interchangeable with GPUs from other manufacturers like AMD

For AI workloads I'm pretty sure they aren't, which is why everyone is trying to buy NVIDIA GPUs.

If they were a commodity there would not be such intense competition for NVIDIA GPU allocations, because they would be easily interchangeable with GPUs from another source.


OpenAI P/E is 415

Nvidia P/E is 95.

OpenAI's valuation is 4 times more insane.


I wasn't meaning to say OpenAI's was better, only that we can remove the complexity of comparing the two companies, and Nvidia's valuation is still insane.


OpenAI Global LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. Nvidia made $45B in revenue in 2023 and much more this year.

OpenAI P/E is 415

Nvidia P/E is 95.

OpenAI is valued over 4 times Nvidia based on revenue.


OpenAI doesn't seem to have a moat as big as Nvidia's (I believe AI tech company valuations are in the hype phase).


How long do hype phases usually last on average?


Until ML stops producing cool new things. I'm pretty bullish on ML; to me even the hype feels justified (as opposed to crypto and NFTs, which I always thought would fail eventually).


NVDA can make things that no one else can. It's not clear that OpenAI is doing things that no one else can (seems to be more directly related to your ability to wield compute).


> NVDA can make things that no one else can

What about AMD (MI300X) or Google (TPU - actually designed by Broadcom)?

It's really TSMC (who actually make NVIDIA's chips, as well as those for AMD, Google, Apple ...) who are close to making things that no-one else can, although afaik Samsung are close.


The problem with OpenAI is that their products have no actual intelligence. They are radically over-valued for the statistical products they make that are entirely unreliable in terms of correspondence with reality. Nvidia is making chips and selling them and they work.


> The problem with OpenAI is that their products have no actual intelligence.

While what they sell isn’t intelligent, it has utility and that’s worth something. Since their product seems to be at the forefront and they’re really the trendsetter with others playing catchup, they can command a premium.


Who is more replaceable?

The eventual winner will almost surely be worth more than Nvidia when the rubble clears, but they, and all the losers and midfielders won't get to that day without forking over billions of their dollars to....


There's going to be an insane rug-pull at the end of this. Nvidia funds CoreWeave, who buys GPUs and sells them to Microsoft. Microsoft invested in OpenAI, who rents the GPUs from Microsoft.


Remember Wintel and how the combinations of software and hardware lock-in created incredible shareholder returns? Well Nvidia is like if Wintel was a single company.


NVDA controls the production and distribution of a limited resource.

OpenAI has no definitive control over the service they provide; anyone that can provide more value, or provide it cheaper, on Nvidia hardware is a direct competitor that may be able to pay more for hardware, limiting OpenAI's access to its critical resource.

Hence Nvidia is the elephant that structurally takes the money, the one that makes the safest big returns over the long term.


I think everyone just disbelieves the stated goals of the AI startups as a matter of convention.

OpenAI wants to create machines that are more capable at almost all economically valuable labor than humans. Such technology would be incredibly valuable.

If it seems likely they’ll achieve it and actually return a significant part of the created value to investors, valuations that are crazy by old metrics will follow.


NVDA has a wee bit more of a moat around their business than OpenAI does. It’s not even comparable.


NVIDIA doesn't have competition


At the moment.

I would be curious what an educated-on-the-matter person's best guess is for when NVIDIA's competition will catch up.


AMD already released their MI300X which is objectively better than NVIDIA's offerings. Just a matter of time for adoption and integration.


By 2030 Nvidia will have mainstream competitors in the cloud space. Amazon especially hates writing checks to anyone that isn't also Amazon.


Probably ~3-4 years as Apple, AMD, and Google/AWS ramp up their GPUs/custom accelerators.


And groq


I'm going to guess that groq gets bought soon.

oh, and don't forget Graphcore (also rumored to be bought soon): https://www.tomshardware.com/tech-industry/artificial-intell...


I wouldn't guess, I am definitely certain that both Groq and Graphcore will be bought out soon.


My cloud notebook provider just notified me that they'll stop supporting graphcore's IPUs. I was wondering if that is because nobody is using them, or for other reasons.


For the last five years or so, OpenAI's thesis has been that scaling really, really big models pays off - that is, that throwing massive amounts of hardware at the problem pays off. Chips are the whole thing that enables scaling.


OpenAI is not public, so there is less pump. You could say both are overvalued. Take the ride or wait for the dump. It's curious that even with the current rates we still have a lot of pump in the market.


NVidia does a whole lot more than make GPUs for AI.


OpenAI needs OpenAI to succeed. Nvidia just needs AI in general to remain in the spotlight, no matter what company.


Reminder that a specialized AI chip designed just to maximize the transformer architecture would only require decades-old tech that's well spread out at this point and far simpler to manufacture - basically as many matrix multiply-accumulate units as you can squeeze onto a chip.

NVDA may yet see some competition as we drop our complexity standards and embrace a one-size-fits-all general MLLM architecture.
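To make the "mostly matmuls" point concrete, here's a minimal numpy sketch of single-head attention - nearly every FLOP is a matrix multiply, which is exactly the operation such a chip would hard-wire:

    import numpy as np

    # Minimal single-head self-attention: the compute is almost all matmuls.
    def attention(x, Wq, Wk, Wv):
        q, k, v = x @ Wq, x @ Wk, x @ Wv          # three projection matmuls
        scores = q @ k.T / np.sqrt(k.shape[-1])   # attention-score matmul
        w = np.exp(scores - scores.max(-1, keepdims=True))
        w /= w.sum(-1, keepdims=True)             # softmax: the rare non-matmul
        return w @ v                              # weighted-sum matmul

    rng = np.random.default_rng(0)
    x = rng.standard_normal((128, 64))            # (seq_len, d_model)
    Wq, Wk, Wv = (rng.standard_normal((64, 64)) for _ in range(3))
    print(attention(x, Wq, Wk, Wv).shape)         # (128, 64)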


They are the prime example of selling shovels in a gold rush.


Anyone have any idea what order of magnitude this windfall is for the employees? We talking hundreds of thousands? Millions? Depending on when you got hired, this has gotta be one of the largest $increase/time to exercise options in history, no?


Don't they have a 10x value cap or something? I asked ChatGPT to research this on the web:

> The PPUs function similarly to Profit Interest Units (PIUs), granting employees a share in the company's profits rather than traditional equity. These units vest over four years, providing a long-term incentive for employees to contribute to OpenAI's success. A distinctive feature of OpenAI's PPUs is the growth cap; specifically, the PPUs are capped at a 10x growth of their original value. This means if an employee is given PPUs valued at $2 million, the maximum they can sell them for would be $20 million, ensuring that the rewards are substantial yet aligned with OpenAI's broader mission of equitable benefit .

This valuation isn't exactly crazy if you ask me. OpenAI was valued at $30B about a year ago; now it's 3x that. If you had bought Nvidia stock this time last year, it would also have roughly tripled.
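If the 10x cap works the way that summary describes (I can't verify it firsthand), the payout math would look something like this:

    # Hypothetical PPU payout under a 10x growth cap, per the summary above.
    def ppu_sale_value(grant_value, valuation_multiple, cap=10.0):
        # Proceeds track the valuation but are capped at 10x the grant value.
        return grant_value * min(valuation_multiple, cap)

    grant = 2_000_000
    print(ppu_sale_value(grant, 3.0))   # ~3x step-up (30B -> 86B): 6,000,000
    print(ppu_sale_value(grant, 15.0))  # hypothetical 15x: capped at 20,000,000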


If you are an L4 (basically starting at OAI), the median PPU package is $315k [0] and that's valued from the last tender offer round. So that means an exit of ~$945k.

[0]: https://www.levels.fyi/companies/openai/salaries/software-en...


That's assuming employees are allowed to sell 100% of shares. Often tender offers have a cap - e.g. 10%. Not saying that's the case here, just mentioning it might be much less than you think.

That being said, it's hard for me to believe many employees don't have 10x the amount you mentioned, and will be able to get ~1m worth for 10% of their shares.

I could be 100% wrong on both fronts - but it's my 2 cents
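Putting those two unknowns together, a rough back-of-the-envelope (the 10% sell cap is purely hypothetical here; the step-up uses the ~$30B -> $86B valuations mentioned upthread):

    # Rough windfall estimate: package value at the prior round, scaled by
    # the valuation step-up, then limited by a hypothetical tender sell cap.
    package_at_last_round = 315_000  # median L4 PPU figure cited above
    step_up = 86 / 30                # ~2.9x valuation increase
    sell_cap = 0.10                  # hypothetical 10% tender limit

    total = package_at_last_round * step_up
    print(f"total ~${total:,.0f}, sellable now ~${total * sell_cap:,.0f}")
    # total ~$903,000, sellable now ~$90,300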


I just learned that there is a 2-year lock-in, but otherwise I think it is unrestricted.


Good payout but ~40% going to taxes. A nice down payment for a house in the Bay Area.


Low millions


Were the Profit Participation Units (PPUs) part of this sale?

I've heard that's what they are offering new hires but it's an unusual form of compensation as compared to traditional stock/options. Curious how it worked out for those folks.

(apologies if this is explained in the article, I don't have access)


Can someone correct me on why PPUs are not a scam, when the company could simply reinvest all would-be profit into R&D and mark it down as opex, so there is never any profit to share?


Isn't stock also a scam since they don't have to issue dividends?


You don't have to buy them.


I believe that PPUs are all that is up for sale, because MS has a 49% stake.


Wow, we’re really doing the singularity utopia or global nuclear warfare strategy.


I'm not sure when investors are going to get that tech bros are fleecing them for billions while not making anything that actually helps people or the world. It's like there are no adults in the room - just bobblehead monopoly men hoping that if they give away enough cash, someone's problem will finally get solved. But I am a little jealous that I'm not also getting rich on flash-in-the-pan tech startups.


Investors are the ones fleecing everybody. They have turned startups into pump and dump schemes.


Google Research is throwing plenty of R&D at the use of ML in chemistry, biology, and medicine. Those projects are long-term, and HN does not pay enough attention to them.

But even Google management was scared that OpenAI would corner the consumer market for bullshit generation. They did pull some big guns away from doing real work to fight back.


> not making anything that actually helps people or the world

Didn't you see Sora?


Is this related to OpenAI? OpenAI is obviously valuable to the consumers that use their products.


Investors who can’t spot a scam run out of money. Natural selection. They aren’t dumb.


Tim Draper was an early investor in Theranos, continued to defend them even after reports showed their product was bogus, and he's still worth over a billion dollars.


What exactly do the purchasers get? It seems pretty muddy to me.


Why can’t they sell at $7 trillion valuation?


If OpenAI succeeds in its mission, this will turn out to be a terrible decision for the sellers, apart from the diversification necessary to feel financially secure.


I hope they were all willing to do it



