Hacker News | qoez's comments

AI is gonna be even worse. At least there's some competition from Scandinavia's, Germany's, and France's tech scenes. For AI there's basically none.


Yeah, but unless AGI is right around the corner, it's starting to look more and more like there's no real moat in the trillions being invested. Since switching between AI providers is easy (especially compared to the stranglehold, say, Microsoft has on organizations), catching up may be relatively easy and cheap for latecomers.


It's definitely written by an AI. The closing description of The Hitchhiker's Guide to the Galaxy is "[...]the meaning of life. Which turns out to be an integer." No one would bother writing that.


Makes no sense, since they should have checkpoints from earlier in the run that they could restart from, and regular checks that track whether a model has exploded, etc.
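For what it's worth, the safeguard being described can be sketched in a few lines. This is a toy illustration under my own assumptions; every name here is hypothetical, not any lab's actual training pipeline:

```python
import math

def train_with_checkpoints(state, steps, step_fn,
                           checkpoint_every=100, spike_factor=10.0):
    """Toy training loop: periodically snapshot the state, and roll back to
    the last known-good checkpoint if the loss goes NaN or spikes (explodes)."""
    checkpoints = {0: dict(state)}          # step -> saved copy of the state
    best_loss = float("inf")
    for step in range(1, steps + 1):
        state, loss = step_fn(state, step)
        # Health check: a NaN or a sudden spike relative to the best loss
        # so far counts as the run having "exploded".
        exploded = math.isnan(loss) or (
            best_loss < float("inf") and loss > spike_factor * best_loss)
        if exploded:
            last_good = max(checkpoints)    # restart point for a retry
            return dict(checkpoints[last_good]), last_good, "rolled_back"
        best_loss = min(best_loss, loss)
        if step % checkpoint_every == 0:
            checkpoints[step] = dict(state)
    return state, steps, "completed"
```

The point is only that detecting an explosion and falling back to an earlier checkpoint is cheap relative to the run itself, which is why a "failed run" probably means something other than lost data.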


I didn't read "major failed training run" as in "the process crashed and we lost all data" but more like "After spending N weeks on training, we still didn't achieve our target(s)", which could be considered "failing" as well.


They could have done what Lightricks did with LTX-1 - build almost embarrassingly small models in the open and iteratively improve from learning.

LTX's first model felt two years behind SOTA when it launched, but they viewed it as a success and kept going.

The investment initially is low and can scale with confidence.

BFL goes radio silent and then drops stuff. Now they're dropping stuff that is clearly middle of the pack.


Going from launching SOTA models to launching "embarrassingly small models" isn't something investors are generally into, especially when you're deciding which training runs to launch and with what parameters. And since BFL has investors, they have to make choices that maximize ROI for investors rather than for the community at large, so this is hardly surprising.


There's always a possibility that something implicit to the early model structure causes it to explode later, even if it's a well known, otherwise stable architecture, and you do everything right. A cosmic bit flip at the start of a training run can cascade into subtle instability and eventual total failure, and part of the hard decision making they have to do includes knowing when to start over.

I'd take it with a grain of salt; these people are chainsaw jugglers and know what they're doing, so any sort of major hiccup was probably planned for. They'd have plan b and c, at a minimum, and be ready to switch - the work isn't deterministic, so you have to be ready for failures. (If you sense an imminent failure, don't grab the spinny part of the chainsaw, let it fall and move on.)


To be fair that's true for everything including the jewels


From a core OpenAI insider who has likely trained very large Markov models and large transformers: https://x.com/unixpickle/status/1935011817777942952

Untwittered: A Markov model and a transformer can both achieve the same loss on the training set. But only the transformer is smart enough to be useful for other tasks. This invalidates the claim that "all transformers are doing is memorizing their training data".
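A toy illustration of the distinction (my own sketch, not the experiment the tweet refers to): a character-level bigram Markov model can drive its training loss very low by memorizing transition counts, but that "knowledge" is useless on text it hasn't seen, because unseen transitions get essentially zero probability.

```python
import math
from collections import Counter, defaultdict

def train_bigram(text):
    """Fit a character-level Markov (bigram) model: P(next char | current char)."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def avg_nll(model, text):
    """Average negative log-likelihood of text under the model (its 'loss')."""
    probs = [model.get(a, {}).get(b, 1e-9) for a, b in zip(text, text[1:])]
    return -sum(math.log(p) for p in probs) / len(probs)

train_text = "the cat sat on the mat. the cat sat on the mat. "
model = train_bigram(train_text)
print(avg_nll(model, train_text))       # low: the transitions are memorized
print(avg_nll(model, "a dog ran up"))   # far higher: unseen transitions
```

Both model families can fit the training set; the interesting question is what else the fitted model can do, which is exactly the tweet's point.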


Best case is hardly a bubble. I definitely think this is a new paradigm that'll lead to something, even if the current iteration won't be the final version and we've probably overinvested a bit.


The author thinks that the bubble is a given (and doesn’t have to spell doom), and the best case is that there isn’t anything worse in addition.


Same as the dot-com bubble. Fundamentals were wildly off for some businesses, but you can also find almost every business that failed then running successfully today. Personally I don't think sticking AI in every software is where the real value is, it's improving understanding of huge sets of data already out there. Maybe OpenAI challenges Google for search, maybe they fail, I'm still pretty sure the infrastructure is going to get used because the amount of data we collect and try to extract value from isn't going anywhere.


Something notable like pets.com is literally Chewy, just 20 years earlier.


I probably wouldn't have been drawn to coding if I were young today, based on the same motivations that led me to venture into it as a teen.


It could be fun in a Factorio sense. Maybe the whole game becomes delegating to a bunch of smart robots and handling the organization, etc.


Must be nice writing stock-narrative stories. There's new content every day to make up stories about why stocks went this way or that.


If you don’t think we’re due a massive correction after the hype of AI then I don’t know what to tell you. Every sign is right there.


I am going to show you a chart: https://i.imgur.com/q7l3lJt.png

This is a weekly chart of Nvidia from 2023 to 2024. During that period, the stock dropped from $95 to $75 in just two weeks. How would you defend the idea that a major correction wouldn’t have happened back in 2023–2024? Would you have expected a correction at that time? After all, given a long enough timescale, corrections are inevitable.


I don’t know how to start a reply to you. Because Nvidia stock dipped for two weeks in the past, there’s no chance we’re due a massive correction? Makes no sense whatsoever.

Nvidia’s stock price is not the start and end of AI investments. OpenAI is losing over $11bn a quarter, more than they were losing in 2023, and debt accumulates over time. Reality will set in eventually when investors realize their promised future isn’t coming any time soon. Nvidia’s valuation is in large part due to the money OpenAI and others are giving it right now. What do you think will happen when that money goes away?


For context, $11bn in revenue is about 3% of Google's annual revenue. ChatGPT has something like 800 million users. It's completely plausible that they'll fizzle. It's also completely plausible they eat Google or Facebook and $11bn becomes nothing to them.


Friend, you're seeing signs in tea leaves here.


How about all the signs are _so_ right there that they have been priced in by now?


It will be written with AI, of course. :)

I am also getting annoyed at AI. In the last few days, more and more total-garbage AI videos have been swarming YouTube. This is a waste of my time, because what I see is no longer real but a fabrication. It never happened.


Just wait until you hear about sports reporting! Or the weather.


We can predict the weather, with extreme reliability, hours in advance!


Just eat lots of beans and lentils. No need for an app or 'fiber gummies'


anything leafy as well, though those won't have the protein boost of legumes


I like to pair those with rice so their amino acids complement each other.


Lettuce because I am lazy

