> But I don't think any are operating inference at a loss, I think their margins are actually rather large.
Citation needed. I haven't seen any of them claim to have even positive gross margins to shareholders/investors, which surely they would do if they did.
> “if you consider each model to be a company, the model that was trained in 2023 was profitable. You paid $100 million, and then it made $200 million of revenue. There’s some cost to inference with the model, but let’s just assume, in this cartoonish cartoon example, that even if you add those two up, you’re kind of in a good state. So, if every model was a company, the model, in this example, profitable,” he added.
“What’s going on is that while you’re reaping the benefits from one company, you’re founding another company that’s much more expensive and requires much more upfront R&D investment. The way this is going to shake out is that it’s going to keep going up until the numbers get very large, and the models can’t get larger, and then there will be a large, very profitable business. Or at some point, the models will stop getting better, and there will perhaps be some overhang — we spent some money, and we didn’t get anything for it — and then the business returns to whatever scale it’s at,” he said.
This take from Amodei is hilarious but explains so much.