
None of that matters to this point, though I'd dispute some of it if I thought it did.

"Can they keep charging money for it?", that's the question that matters here.

It matters to the comparison being made between the dot-com boom and the AI boom: they have completely different fundamentals outside of the hype train.

There were not nearly as many consumers buying online during the dot-com boom.

If anything, more is currently being spent on AI than was spent on anything during the dot-com boom.

Nor did companies run their businesses in the cloud, because there was no real broadband.

There’s no doubt there’s a hype train, but there is also an adoption and disruption train, and it is running at the same time.

I could go on, but I’m comfortable with seeing how well this comment ages.


I don't pay anyone for an AI image generator, because I can run an adequate one locally on my own personal computer.
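For concreteness, "running an image generator locally" can be a handful of lines of Python. This is a sketch, assuming Hugging Face's diffusers library and the freely downloadable Stable Diffusion 1.5 checkpoint as one example of an "adequate" model, not a recommendation of that model in particular:

    # Minimal local text-to-image sketch using Hugging Face diffusers.
    # Assumes a GPU with enough VRAM; Stable Diffusion 1.5 is just one
    # example of a freely downloadable model.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # fetched once, then runs offline
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")  # or "mps" on Apple Silicon

    image = pipe("a watercolor lighthouse at dawn").images[0]
    image.save("lighthouse.png")  # no API key, no per-image fee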

My computer doesn't have enough RAM to run the state of the art in free LLMs, but such computers can be bought and are affordable for any business and plenty of hobbyists.
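To put rough numbers on that: the weights dominate the memory footprint, so a quick back-of-the-envelope shows why the biggest free models outgrow an ordinary machine. The 70B parameter count below is an illustrative assumption, not a reference to any specific model:

    # Back-of-the-envelope memory needed just to hold an LLM's weights.
    # KV cache and activations add more on top of this.
    def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
        return params_billions * 1e9 * bits_per_weight / 8 / 1e9

    for bits in (16, 8, 4):  # full precision down to 4-bit quantization
        print(f"{bits}-bit: ~{weight_memory_gb(70, bits):.0f} GB")
    # 16-bit: ~140 GB, 8-bit: ~70 GB, 4-bit: ~35 GB -- beyond a typical
    # consumer desktop at full precision, borderline once quantized.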

Given this, the only way for model providers to stay ahead is to spend a lot on training ever-better models to beat the free ones that are being given away. And by "spend a lot" I mean they are making a loss.

This means that the similarity with the dot-com bubble can be expressed with the phrase "losing money on every sale and making up for it in volume".

Hardware efficiency is also still improving; just as I can already run that image model locally on my phone, an LLM equivalent to today's SOTA should run on a high-end smartphone by 2030.

Not much room to charge people for what runs on-device.

So, they are in a Red Queen's race, running as hard as they can just to stay where they are. And where they are today is losing money.


You don't need system RAM to run LLMs; what matters is your graphics card's VRAM.

The best performance per dollar and per watt of electricity for running LLMs locally is currently Apple gear.

I thought the same as you, but I'm still able to run better and better models on a 3-4 year old Mac.

At the rate things are improving, even with the big models, people optimize their prompts so they use tokens efficiently, and when they do... guess what can run locally.
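For anyone curious what that looks like in practice, here is a minimal sketch of local inference through llama-cpp-python, the Python bindings for llama.cpp, which is the usual way to run quantized GGUF models on Apple Silicon. The model path is a placeholder for whatever checkpoint you have downloaded:

    # Minimal local LLM inference via llama-cpp-python (llama.cpp bindings).
    # On a Mac, offloaded layers run through Metal.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/some-model.gguf",  # placeholder: any GGUF file
        n_ctx=4096,       # context window
        n_gpu_layers=-1,  # offload every layer to the GPU
        verbose=False,
    )

    out = llm("Q: Why do quantized models fit on laptops? A:", max_tokens=128)
    print(out["choices"][0]["text"])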

The dot-com bubble didn't have comparable online sales. There were barely any users online, lol, and very few ecommerce websites.

Let alone ones with credit card processing.

Internet users by year: https://www.visualcapitalist.com/visualized-the-growth-of-gl...

The ecommerce stats by year will interest you.
