Fwiw, in this application you would never need to divide by an arbitrary integer each time; you'd pick the divisor once, plumb it into libdivide, and get something significantly cheaper than 8-30 cycles.
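i.e. something along these lines (a sketch using libdivide's C API; the function is just an example):

    // Sketch: precompute the divisor once, then reuse it for every division.
    // Uses libdivide's C API (libdivide.com); sum_of_quotients is illustrative.
    #include <stddef.h>
    #include <stdint.h>
    #include "libdivide.h"

    uint64_t sum_of_quotients(const uint64_t *vals, size_t n, uint64_t divisor) {
        // One-time (relatively expensive) setup for this particular divisor.
        struct libdivide_u64_t d = libdivide_u64_gen(divisor);
        uint64_t sum = 0;
        for (size_t i = 0; i < n; i++) {
            // Each division is now a multiply plus shifts, no hardware div.
            sum += libdivide_u64_do(vals[i], &d);
        }
        return sum;
    }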
You realize that AI is driving huge advertising growth at Meta, right?
> Meta, the parent company of Facebook and Instagram, reported strong second-quarter 2025 earnings, driven primarily by robust advertising revenue growth. Total revenue reached US$47.52 billion, up 22% from last year, with advertising accounting for $46.56 billion, an increase of 21%, surpassing Wall Street expectations. The growth was fuelled by an 11% rise in ad impressions across Meta’s Family of Apps and a 9% increase in the average ad price. Net income climbed 36% to $18.34 billion, marking ten consecutive quarters of profit outperformance. The Family of Apps segment generated $47.15 billion in revenue and $24.97 billion in operating income, while Reality Labs posted a $4.53 billion operating loss.
> Much of this growth is credited to Meta’s AI advancements in its advertising offerings, such as smarter ad recommendations and campaign automation. Currently, over 4 million advertisers use the AI-powered Advantage+ campaigns, achieving a 22% improvement in returns. Building on this success, Meta plans to enable brands to fully create and target ads using AI by the end of 2026.
Why would you want an MPMC queue primitive, instead of just using MPSC per consumer? I don't really see a discussion of the cache contention issues. (There is some mention of contention, but it seems the author is using the word in a different way.)
It looks like both enqueue and dequeue are RMWing the "epochs" variable? This can pretty easily lead to something like O(N^2) runtime if N processors are banging this cache line at the same time.
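For concreteness, the layout I have in mind is roughly this (hypothetical sketch, not the article's code; the lanes here are simple intrusive MPSC lists, but any single-consumer ring would do):

    // "MPSC per consumer": every consumer owns its own multi-producer,
    // single-consumer list, so dequeues never touch a cache line shared
    // with other consumers.
    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>

    #define NCONSUMERS 8

    struct node {
        struct node *next;
        void *payload;
    };

    struct lane {
        _Alignas(64) _Atomic(struct node *) head;  // one cache line per consumer
    };

    struct fanout {
        struct lane lanes[NCONSUMERS];
    };

    // Any producer may push to any lane; a CAS loop links the node in.
    static void produce(struct fanout *f, uint64_t key, struct node *n) {
        struct lane *l = &f->lanes[key % NCONSUMERS];
        struct node *old = atomic_load(&l->head);
        do {
            n->next = old;
        } while (!atomic_compare_exchange_weak(&l->head, &old, n));
    }

    // Only consumer `my_lane` ever calls this on its lane: it takes the whole
    // batch in one exchange, so consumers never contend with each other.
    static struct node *consume_batch(struct fanout *f, unsigned my_lane) {
        return atomic_exchange(&f->lanes[my_lane].head, NULL);  // LIFO order
    }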
For me, I’ve got use cases where it’s valuable to keep event data interleaved because it will also get used in flight. It works well enough that I also use it for things where it’s not strictly necessary, like in-memory debug rings (which require a bit of additional work).
The epoch isn’t CAS’d; it’s FAA’d. The epoch is then used to determine whether contention is due to the tail meeting the head, or to a wrap-around caused by slow writes.
There’s also a back-off scheme to ease contention for a full queue.
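To give a flavor of the FAA-plus-epoch-check style, here's a heavily simplified generic sketch; it is not the code in the repo, and the real ring's full/empty detection and back-off are more involved:

    // Generic sketch of an FAA-ticket ring with per-slot epochs and crude
    // back-off. Illustration only; not the repo's actual algorithm.
    #include <stdatomic.h>
    #include <stdint.h>
    #include <sched.h>

    #define CAP 128u   /* power of two */

    typedef struct {
        _Atomic uint64_t epoch;   /* 2*lap = free for that lap, 2*lap+1 = full */
        void *item;
    } slot_t;

    typedef struct {
        _Atomic uint64_t tail;    /* producers FAA this to claim a ticket */
        _Atomic uint64_t head;    /* consumers FAA this to claim a ticket */
        slot_t slots[CAP];
    } ring_t;

    static void enqueue(ring_t *r, void *item) {
        uint64_t t = atomic_fetch_add(&r->tail, 1);        /* FAA, not CAS */
        slot_t *s = &r->slots[t % CAP];
        uint64_t want = (t / CAP) * 2;                      /* epoch when slot is free */
        for (unsigned tries = 0; atomic_load(&s->epoch) != want; tries++) {
            /* slot still holds last lap's item, i.e. the ring is full here:
               back off instead of hammering the line */
            if (tries > 64) { sched_yield(); tries = 0; }
        }
        s->item = item;
        atomic_store(&s->epoch, want + 1);                  /* publish */
    }

    static void *dequeue(ring_t *r) {
        uint64_t h = atomic_fetch_add(&r->head, 1);
        slot_t *s = &r->slots[h % CAP];
        uint64_t want = (h / CAP) * 2 + 1;                  /* epoch once a writer publishes */
        for (unsigned tries = 0; atomic_load(&s->epoch) != want; tries++) {
            /* mismatch here means the writer for this ticket is slow (or the
               ring is empty): again, back off */
            if (tries > 64) { sched_yield(); tries = 0; }
        }
        void *item = s->item;
        atomic_store(&s->epoch, want + 1);                  /* free slot for next lap */
        return item;
    }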
Though, I did originally have a variant that adds a fairly straightforward ‘help’ mechanism that makes the algorithm wait-free and reduces the computational complexity.
However, the extra overhead didn’t seem worth it, so I took it out pretty quickly. Iirc, the only place where the ring in this queue wouldn’t outperform it is tiny queues with a huge write imbalance.
If you run the tests in the repo associated with the article, you’ll probably see that a ring with only 16 entries tends to start being non-performant at about a 4:1 writer-to-reader ratio. But iirc that effect goes away before the ring reaches 128 slots.
There, the ring still fits in a page, and even with a big imbalance I can’t remember seeing less than 1M ops per second on my Mac laptop.
Real-world observational data beats worst-case analysis, and I’ve never seen an issue for scenarios I consider reasonable.
But if what I consider unrealistic is realistic for you, let me know and I can break out the wait-free version for you.
fetch_add requires taking a cache line exclusive. If producers and consumers are both doing this on the same cache line, it is very expensive and does not scale.
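A toy benchmark makes the gap obvious (hypothetical sketch, compile with -pthread; the exact numbers will vary by machine):

    // N threads doing atomic_fetch_add on one shared counter serialize on
    // exclusive ownership of that cache line; the same RMW on padded
    // per-thread counters keeps each line core-local.
    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define NTHREADS 8
    #define ITERS    10000000L

    static _Atomic uint64_t shared_counter;                  /* one hot line */

    struct padded { _Alignas(64) _Atomic uint64_t v; };
    static struct padded per_thread[NTHREADS];               /* one line each */

    static void *bump_shared(void *arg) {
        (void)arg;
        for (long i = 0; i < ITERS; i++)
            atomic_fetch_add(&shared_counter, 1);            /* contended RMW */
        return NULL;
    }

    static void *bump_private(void *arg) {
        struct padded *mine = arg;
        for (long i = 0; i < ITERS; i++)
            atomic_fetch_add(&mine->v, 1);                   /* uncontended RMW */
        return NULL;
    }

    static double run(void *(*fn)(void *)) {
        pthread_t tid[NTHREADS];
        struct timespec a, b;
        clock_gettime(CLOCK_MONOTONIC, &a);
        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, fn, &per_thread[i]);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        clock_gettime(CLOCK_MONOTONIC, &b);
        return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    }

    int main(void) {
        printf("shared counter:      %.2fs\n", run(bump_shared));
        printf("per-thread counters: %.2fs\n", run(bump_private));
        return 0;
    }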
(withinboredom and gpderetta have raised the same or similar concerns.)
The Monroe Doctrine was about preventing colonial powers from enacting NEW efforts to reach into the Americas, not about getting rid of previous control.
"The occasion has been judged proper for asserting, as a principle in which the rights and interests of the United States are involved, that the American continents, by the free and independent condition which they have assumed and maintain, are henceforth not to be considered as subjects FOR FUTURE COLONIZATION by any European powers." (emphasis mine)
Yeah, you can visit the EU by… sailing a ways Northeast(ish) from Maine, until you’re just south of (a part of) Canada. And by going to the Caribbean. And South America.
My understanding is that going hybrid actually allowed Toyota to significantly simplify their transmissions relative to ICE vehicles, even without going full EV.
The planetary gear "eCVT" systems that Toyota and Ford use in many models are mechanically a lot simpler than a traditional automatic or sequential manual transmission. Few moving parts and no clutches at all. I don't know what the long-term reliability of those drivetrains is, but I wouldn't be surprised if it's measurably better than a traditional transmission + engine. There's a long educational video from Weber State University that gives a good walkthrough of what's going on in those things.
As someone who was raised by aggressive alcoholics and has struggled with weed addiction myself (and seen it in others), I find it really difficult to compare the substances. Yes, weed dependence is bad and people need to be aware of it, but alcohol (and I'd even say nicotine, though that's a different subject) is far more insidious than weed.
Could you elaborate on the nicotine thing? I would say alcohol is far, far, far more insidious than both nicotine and cannabis; I would also say nicotine is less insidious than cannabis.
I guess I was considering the effects it has on other people. Sure, nicotine is far more addictive, but I've never heard of a parent being abusive because they had 1 too many cigarettes.
I guess what I mean is, nicotine is more self-contained than the others.
Yeah, nicotine is a mild stimulant; it's really not a big deal, which is why it is mostly tolerated. The bad interpersonal effects actually come from stopping nicotine, which makes people grumpy, but it doesn't last very long.
The problem is that it's a slow burn because it's consumed by smoking, and this is really the most pleasant way to consume it. People don't like the externalities associated with it, and that's pretty much it.
What's the difference? It takes years of ramping up to become an alcoholic. That's why you don't have many teenagers who are alcoholics. Weed? All it takes is one puff and you are hooked for life.
Sort of similarly, I'd like to see more use of sandboxing in memory-safe language programs. But I don't see a ton of people using these OS primitives in, e.g., Rust or Go.
There's a need for some portable and composable way to do sandboxing.
Library authors can't configure seccomp themselves, because the allowlist must be coordinated with everything else in the whole process, and there's no established convention for negotiating that.
Seccomp has its own pain points, like being sensitive to libc implementation details and kernel versions/architectures (it's hard to know which syscalls you really need). It can't filter on arguments that live behind pointers; most notably, it can't look at file paths, which is very limiting and requires even more out-of-process setup.
This makes seccomp sandboxing something you add yourself to your application, for your specific deployment environment, not something that's a language built-in or an ecosystem-wide feature.
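Concretely, the per-deployment filter ends up looking something like this (sketch using libseccomp; the allowlist is illustrative only and in practice depends on your libc, kernel, and architecture):

    // Deployment-specific seccomp allowlist via libseccomp.
    #include <seccomp.h>
    #include <stdlib.h>

    static void install_filter(void) {
        // Default action: kill the process on any syscall not explicitly allowed.
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL_PROCESS);
        if (!ctx) abort();

        // Illustrative list; a real one is longer and libc-dependent.
        int calls[] = {
            SCMP_SYS(read), SCMP_SYS(write), SCMP_SYS(close),
            SCMP_SYS(mmap), SCMP_SYS(munmap), SCMP_SYS(brk),
            SCMP_SYS(futex), SCMP_SYS(clock_gettime), SCMP_SYS(exit_group),
        };
        for (size_t i = 0; i < sizeof(calls) / sizeof(calls[0]); i++)
            if (seccomp_rule_add(ctx, SCMP_ACT_ALLOW, calls[i], 0) < 0) abort();

        // Note: no rule can say "openat, but only under /our/dir"; the path
        // lives behind a pointer, which the BPF filter cannot dereference.
        if (seccomp_load(ctx) < 0) abort();
        seccomp_release(ctx);
    }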
I think Rust is great for sandboxing because it has basically no runtime. This is one of the nice things about Rust!
Go has the same problems I’m describing in my post. Maybe those folks haven’t done the work to make the Go runtime safe for sandboxing, like what I did for Fil-C.
Sure, but even just setuiding to a restrictive uid or chrooting would go a long way, even in a managed runtime language where syscall restrictions are more challenging.
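e.g. something like this, run early (sketch; it needs root or CAP_SYS_CHROOT, and the uid/gid and empty directory are placeholders):

    // Coarse but portable-ish sandbox: chroot into an empty directory and
    // drop to an unprivileged uid/gid. Do this after acquiring any files or
    // sockets the program still needs, but before touching untrusted input.
    #include <unistd.h>
    #include <stdlib.h>

    static void drop_privileges(void) {
        const uid_t unpriv_uid = 65534;          /* e.g. "nobody"; placeholder */
        const gid_t unpriv_gid = 65534;

        if (chroot("/var/empty") != 0) abort();  /* empty dir; placeholder path */
        if (chdir("/") != 0) abort();            /* don't keep a cwd outside the jail */

        if (setgid(unpriv_gid) != 0) abort();    /* drop group before user */
        if (setuid(unpriv_uid) != 0) abort();    /* after this, no way back */
    }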