
This is missing a key part of the picture - Nvidia just announced that partners will need to source RAM themselves.

OpenAI is basically ensuring that they can actually get the chips they need for the DCs they are building.

I can’t guess which move came first (the Nvidia policy change or these DRAM deals), but I would bet this is as large a factor here as “block my competitors,” if not a larger one.


A for loop has a conditional in it.

Unless by conditionals we mean “no if/else” and not “no branch instructions”.


The conditional here only makes it stop when it reaches 100. The solution can be adapted to use a while loop if you’re okay with it running indefinitely.


A loop either never halts or has a conditional. I guess a compiler could turn a “while True:” into a branch-less jump instruction.

One hack would be to use recursion and let stack exhaustion stop you.
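
A minimal sketch in Python (the recursion limit of 105 is an assumption chosen so the crash lands just past 100; the real cutoff depends on how many frames are already on the stack):

    import sys

    # No stop condition anywhere: the interpreter's stack limit halts us.
    sys.setrecursionlimit(105)  # assumed value; tune for your interpreter

    def step(i):
        # boolean-to-string multiplication and the `or` fallback are
        # arguably "disguised conditionals", as debated elsewhere here
        print("fizz" * (i % 3 == 0) + "buzz" * (i % 5 == 0) or i)
        step(i + 1)  # unconditional recursion; RecursionError ends the run

    step(1)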


Count down i from 100 to 0 and do 1/i at the end of the loop :)
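
Something like this sketch (output comes out in reverse order with the countdown framing):

    i = 100
    while True:
        print("fizz" * (i % 3 == 0) + "buzz" * (i % 5 == 0) or i)
        i -= 1
        1 / i  # ZeroDivisionError once i hits 0 ends the program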


Other would be to use goto (though Python doesn't have it) & introduce something that will panic/throw exception, like creating variable with value 1/(max-i).


Recursion would work for that, you don't need goto.

The division by zero trick is pretty good too!


You could allocate 100 bytes and get a segfault on 101


A for loop has an implicit conditional in its stop condition check.


I could see that both ways. Python’s for loops are different than, say, C’s, in that they always consume an iterator. The implementation is that it calls next(iter) until it raises a StopIteration exception, but you could argue that’s just an implementation detail and not cheating.

If you wanted to be more general, you could use map() to apply the function to every member of the iterator, and implementation details aside, that feels solidly in the spirit of the challenge.
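
For example (the boolean-to-string multiplication and the `or` fallback are arguably "disguised booleans", the same caveat raised for the PowerShell version below):

    def fizzbuzz(i):
        return "fizz" * (i % 3 == 0) + "buzz" * (i % 5 == 0) or str(i)

    print("\n".join(map(fizzbuzz, range(1, 101))))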


The dirty solution I wrote in PowerShell does something similar:

1..100 | % {"$_ $(('fizz','')[$_%3])$(('buzz','')[$_%5])"}

I am not sure that using [$_%3] to index into a two-value array doesn't count as a "disguised boolean", though.


This only applies to large employers. Smaller ones are just presented a limited list of plans to pick from, and the plans change every year. Most of the time, as a startup, you can’t buy a Mag7-equivalent health plan for any amount of money off the marketplace.


It depends. If your employer is part of a self-funded group of other employers, then there is a group of trustees from all the employers that can approve.

If it's a 'fully insured' group plan then the insurance company is technically in charge, but your company can do an Employer-paid exception (aka carve-out reimbursement) to cover something that's getting rejected. They also have the option to purchase add-on policies to add coverage for upper-class stuff like fertility treatments, weight loss drugs, or gender-affirming care.


Yeah, I work for a smaller company. I'm not sure which options they omitted, but I don't think we have the same bargaining power as a BigCo.


Mag7 surely is self-insured. They have an amazing risk pool of young people; probably their biggest cost is babies. So in this way employer-sponsored health insurance screws the rest of the market, as it "hoards" the best risks. The insurance companies then wail about the cost of the riskier pool of those of us stuck in the smaller plans...


There should only be one risk pool, which is the whole country. Unfortunately, the Republicans want to go the other way and push sick people into high-risk pools, which will be unaffordable for a lot of people.


Should the app builder’s ability to “trust” that the hardware will protect them from the user supersede the user’s ability to be able to trust that the hardware will protect them from the app?

In other words, should the device be responsible for enforcing DRM (and more) against its owner?


The kind of people in these small teams are not ones to think "work is just work".


So much negativity.

I’m just excited that our industry is led by optimists and our culture enables our corporations to invest huge sums into taking us forward technologically.

Meta could have just done a stock buyback but instead they made a computer that can talk, see, solve problems and paint virtual things into the real world in front of your eyes!

I commend them on attempting a live demo.


> I’m just excited that our industry is led by optimists and our culture enables our corporations to invest huge sums into taking us forward technologically.

I am always baffled that people can be that naive.


It's a weird way to put it too: "our industry" and "our culture" enable "our corporations". They're not "our" corporations as a society; why should we be excited about their investments?

There's a cognitive dissonance between talking about capitalist entities that supposedly drive social and technological progress, and the repeated use of the collective "our" and "us". Corporations are not altruistic optimists aiming to better our lives.


He's the CEO of a multi-billion dollar corporation, promising technology that puts the livelihoods of millions of people at risk. He deserves every bit of scrutiny he gets.


> that puts the livelihoods of millions of people at risk.

what livelihood are these glasses putting at risk?


I was referring to AI/LLMs in general.


Do you think his algorithms have kind of, you know, sown the seeds of hatred and rage through our society?

Do you think all the lies and misinformation his products help spread kind of... get people elected who take away the aid which millions of women and children rely on?

Not blaming him for it all, we all play our part, but the guy has definitely contributed negatively to society overall. And if he is smart enough to know this, he still cannot turn off the profit-making machine he created, so we all suffer for that.

The parent alluded to the dangers of AI; well, the algorithms that are making us hate each other and become paranoid are that AI.


Yes, the mocking, gleeful negativity really does make me concerned that this place is becoming Reddit. The fact that the highest upvoted post on this thread is just a link to Reddit isn't doing much to help me feel better. And I've been here for at least a decade, so I don't think this is the noob illusion.


But, I mean… it’s just not good. There is no real way to spin this as anything better than embarrassing.


But why even have a conversation at all? Who cares if Zuckerberg has a demo that goes awry? Does that satisfy your intellectual curiosity somehow? It certainly doesn't satisfy mine.


Most of the discussion here is about (i) what might have gone wrong, technically, and (ii) what this says about the ROI that Facebook and other US tech giants are getting on their AI projects.

I agree that one demo gone awry does not mean much in itself, but the comments here do rise above the level of Nelson Muntz.


I mean, I think it is notable that a massive tech company is blowing tens to hundreds of billions on this complete nonsense.


A topic that's notable but has little to discuss can get a lot of upvotes. A lot of the best stuff on this site has exactly that: lots of upvotes, little commentary, because the thing itself is notable. That's definitely not what happens on Meta threads. They usually get lots of votes and then lots of repetitive, spam-like comments about how ethically bankrupt Zuck or Meta or social media or algorithms or whatever are. A new comment on the behavior would be interesting, but most of the comments are basically just spam. I could probably get an LLM to generate most of them with ease. Perhaps the worst part is, even if there is novel analysis, it's buried under an avalanche of "Zuck will grind you to dust" or whatever that gets repeated over and over again.

For a while that was okay; this kind of stuff was just contained in those threads. But it's started leaking out everywhere: spam-like comments, tangentially related to the topic, that just bash a big company. That's the lowering of SNR that I find grating.


Oh, no, is someone being mean to the big company? :(

This is absolutely notable, and everyone should be concerned about it. Not so much the potential fakery, but the extreme deficiency of the actual product, which has had the GDP of a small country squandered on it. Like, there is a problem here, and it will have real-world fallout once the wheels fall off.


I've been here for over a decade. It has become very reddit like in the past few years.

I want to get into YC just to use and browse Bookface instead.


The signal to noise ratio certainly became worse.

You'll see the same folks spamming their hatred towards tesla/microsoft/meta/google over and over with zero substance other than sentimental blabbering.


I"ve not seen that honestly. I think you are looking for it satisfy your internal narrative you've created.


You haven't seen people, on this site, hating on Tesla?


I haven't seen 'spamming'.


It's a different world than it was ten years ago. Among the ways it's different: people are far more skeptical of billionaires, Big Tech, and capitalism generally. They're willing to cut them much less slack. This is one of the few ways that the world of today is better than the world of ten years ago.


[flagged]


The idea that anti-AI posts on HN are PR hit jobs (paid for by..?) strikes me as a conspiracy theory.

The simple reality is that hype generates an anti-hype reaction. Excessive hype leads to excessive anti-hype. And right now AI is being hyped on a scale I’m not sure I’ve seen in all the years I’ve been working in tech.


That explanation doesn't make any sense because it explains none of the facts.


When people get something shoved in their face day in day out some of them react negatively to it. Is the concept really so outlandish?


No, but have you seen what people are like when they react negatively? Their behavior is angry and aggressive and unpredictable.

What needs to be explained is a group of people predictably all acting exactly the same way.


You're seeing conspiracies where there are none. A group of people all acting the same way is not suspicious when their actions are the thing that groups them together.


Your theory that there's an invisible hand that makes everyone spontaneously act the same is nonsensical. It hasn't been observed in humans or in the wider animal kingdom.


> the people funding the anti-AI movement

You can't be serious.


Well the movement exists and they have funding so somebody is funding it.

Unless you think they run entirely on the barter system.


What movement? What funding?


I posted some links in this thread: https://news.ycombinator.com/threads?id=ants_everywhere&next.... Not sure how to link to the specific comment id.

Some of the sites have funding and programming.

Politico also reports that "billionaires" (they name a couple) are funding "AI doomsayers" and have created registered lobbying groups: https://www.politico.com/news/2024/02/23/ai-safety-washingto...

For the record I'm in favor of AI safety and regulating the use of AI. I don't know anything about the particular bills or groups in the Politico article though. But it's clear evidence that people with money are funding speech that pushes back against AI.

The funding and use of campaigns to amplify divisive issues is well known, but I'm not claiming this is a source of anti-AI funding. You may perhaps believe that AI does not count as a divisive issue and so there are no anti-AI campaigns through this funding model. I would find that surprising, but I don't know of a source yet that has positively identified such a campaign and its sourcing. There were similar campaigns against American technological domination, such as the anti-nuclear movement, which received a lot of funding from the pro-nuclear Russian military during the Cold War, and the anti-war movement, which received a lot of funding from the pro-war Russian military during the Vietnam War. Similarly, the US has funded "grass roots" movements abroad.

To be clear I'm not saying the anti-AI movement is similarly funded or organized. But it is clearly a movement (and its adherents acknowledge that) and it clearly has funding (including some from very wealthy donors). And they do all use similar stock phrases and examples and often [0] have very new accounts. Everything in the current paragraph is verifiable.

[0] By often, I mean higher than the base rate for this topic on HN. I don't mean more than 50% of the time or any other huge probability.


Quick question: if there is a paid anti-AI movement, where do I send my invoice? May as well not leave money on the table.


I love this


I don't love your silly theory. It sounds like you're in denial, trying to cope with the fact that not everyone thinks LLMs are the greatest thing since flush toilets.


Bots bots bots... tearing down our stars is good business for a variety of vested interests. Don't let the bastards grind you down.


We are not bots, we just loathe historically bad-faith actors and especially with the current climate, we will take the opportunity of harmless schadenfreude where we can get it.


Oh please. This isn't like the old iPhone days where new features and amazing tech were revealed during live demos. Failure was acceptable then because what we were being shown was new and pushing the envelope.

Meta and friends have been selling us AI for a couple years now, shoving it everywhere they can and promising us it's going to revolutionize the workforce and world, replace jobs, etc. But it fails to put together a steak sauce recipe. The disconnect is why so many people are mocking this. It's not comparable.


All you're doing here is associating your hopes and dreams with grifters and charlatans.

They should be mocked and called out, it might leave room for actual innovators who aren't glossy incompetents and bullshitters.


But that’s the thing… it’s not a live demo…


I see no evidence of that. It seems like they tried to put the AI “on rails” with predefined steps and things went wrong.


So there was no AI. I know there’s a lot of confusion regarding the exact definition of AI these days, but I’m pretty sure that, this one time, we can all agree that an “on rails” scenario ain’t it. Therefore, whatever it is that they were doing out there, they weren’t demoing their AI product. You could even say it wasn’t a live demo of the product.


You can put the AI on rails by just prompting it. The latest models are very steerable.

System prompt: “stick to steps 1-n. Step 1 is…”

I can say this confidently because our company does this, and we have F500 customers in production.
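
Roughly like this (an illustrative sketch, not our actual setup; the model name and steps are placeholders):

    from openai import OpenAI

    client = OpenAI()

    # The system prompt pins the model to a fixed script so it can't wander.
    SYSTEM = (
        "You run a cooking demo. Follow steps 1-3 in order and never improvise.\n"
        "Step 1: Combine the base ingredients.\n"
        "Step 2: Whisk in the sauce.\n"
        "Step 3: Confirm the dish is done.\n"
        "Only describe the current step; wait for the user before advancing."
    )

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": "What do I do first?"},
        ],
    )
    print(resp.choices[0].message.content)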


And a Fortune 500 company has never done anything stupid.


This is due to RoPE scaling.

> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required. It is also recommended to modify the factor as needed. For example, if the typical context length for your application is 524,288 tokens, it would be better to set factor as 2.0.

https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking
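
For illustration, enabling YaRN at load time with transformers looks something like this (the keys follow the convention in the Qwen model cards; factor 2.0 times the 262,144-token native window gives the ~524k figure quoted above, and the exact values are assumptions to check against the card):

    from transformers import AutoModelForCausalLM

    # Extra kwargs like rope_scaling override the corresponding config entry.
    model = AutoModelForCausalLM.from_pretrained(
        "Qwen/Qwen3-Next-80B-A3B-Thinking",
        rope_scaling={
            "rope_type": "yarn",
            "factor": 2.0,
            "original_max_position_embeddings": 262144,
        },
    )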


> Importantly, we never intentionally degrade model quality as a result of demand or other factors, and the issues mentioned above stem from unrelated bugs.

Things they could do that would not technically contradict that:

- Quantize KV cache (see the sketch after this list)

- Data aware model quantization where their own evals will show "equivalent perf" but the overall model quality suffers.
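
A sketch of the first item (illustrative only, not any provider's actual code): per-tensor int8 quantization of the KV cache halves memory per token relative to fp16, and standard evals may not catch the quality cost.

    import torch

    def quantize_kv(kv):
        # cheap, lossy per-tensor int8 quantization
        scale = kv.abs().amax() / 127.0
        q = torch.clamp((kv / scale).round(), -127, 127).to(torch.int8)
        return q, scale

    def dequantize_kv(q, scale):
        return q.to(torch.float16) * scale

    kv = torch.randn(2, 8, 4096, 128, dtype=torch.float16)  # batch, heads, seq, head_dim
    q, scale = quantize_kv(kv)
    print((dequantize_kv(q, scale) - kv).abs().mean())  # mean reconstruction error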

The simple fact is that physical compute takes a long time to deploy, yet somehow they are able to serve more and more inference from a slowly growing pool of hardware. Something has to give...


> Something has to give...

Is training compute interchangeable with inference compute or does training vs. inference have significantly different hardware requirements?

If training and inference hardware is pooled together, I could imagine a model where training simply fills in any unused compute at any given time (?)


Hardware can be the same but scheduling is a whole different beast.

Also, if you pull too many resources from training your next model to make inference revenue today, you’ll fall behind in the larger race.


Source?

