Yes, except that modern infra, e.g. WiFi 6, uses 1024-QAM, which is to say there are 1024 states per symbol, so you can transfer up to 10 bits per symbol.
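The general relation: an n-state QAM constellation carries log2(n) bits per symbol. A quick sanity check in plain Python (nothing WiFi-specific, just the arithmetic):

```python
import math

# Bits per symbol for a QAM constellation = log2(number of states).
for states in (16, 64, 256, 1024, 4096):  # 4096-QAM is the WiFi 7 step up
    print(f"{states}-QAM: {int(math.log2(states))} bits/symbol")
```

The 1024-QAM line comes out to 10 bits/symbol, matching the figure above.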
This would help a lot. Many European startups are strongly local (also in their talent search), because while moving is simple, share distribution and ownership structures are anything but, and investors usually don't want to bother with local regulations they don't even know.
I agree. It's funny that this is one of the cited reasons for the (relative) value suppression of TSMC, but the same factors should apply to Nvidia too.
It's sad that this is not the default behaviour. Hopefully the Stop Killing Games movement will eventually put something similar into law, with potentially further-reaching side effects. Because frankly, sunsetting products like this should be common sense, not the exception it currently is.
The base material needs to be of a minimum quality for that experience to be enjoyable, I presume. I totally see the value of that, which is one of the reasons I sometimes reduce playback speed below my usual default. Tolkien's writing is maybe among the best suited for this, but I'm not 100% convinced it would work for what I'm reading currently, so I might just try it anyway: the Foundation series by Asimov.
Well, this is where being pedantic bites me in the a* again. Our codebase has been mostly pyright-focused, with many very specific `pyright: ignore[...]` pragmas. Now it would be great if ty could also honor those lines (pyrefly has an option for this!). There aren't _that_ many of them, but... it's a pain.
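For context, the pragmas in question look like this (the rule name here is just an illustrative pyright diagnostic; whether ty will honor the `pyright:` prefix is exactly the open question):

```python
from typing import Any

def parse_port(cfg: Any) -> int:
    # pyright suppresses only the named diagnostic on this one line;
    # a migration to ty would need it to either honor or ignore this comment
    return int(cfg["port"])  # pyright: ignore[reportUnknownArgumentType]
```

The rule-scoped form is stricter than a bare `# type: ignore`, which is what makes mechanical migration awkward.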
Yeah, it's only correct in 99.7% of all cases, but what if it's also 10,000 times faster? There are a bunch of scenarios where that combination provides a lot of value.
Ridiculous counterfactual. The LLM started failing 100% of the time 60 (!) orders of magnitude sooner than the point at which we would have checked literally every number.
This is not even to mention that asking a GPU to think about the problem will always be less efficient than asking that same GPU to directly compute the result, for closed algorithms like this.
Correctness in software is the first rung of the ladder; optimizing before you have correct output is in almost all cases a complete waste of time. Yes, there are some scenarios where having a ballpark figure quickly can be useful, provided you can also produce the actual result, and provided that the other times you don't output complete nonsense but something that approaches the final value. There are a lot of algorithms that do this (for instance, Newton's method for finding square roots).
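A minimal sketch of that pattern, Newton's method for square roots: every intermediate guess is a usable ballpark, and each iteration only gets closer, so stopping early never yields nonsense:

```python
def newton_sqrt(x: float, tol: float = 1e-12) -> float:
    # Assumes x >= 0. Each Newton iterate roughly doubles the number of
    # correct digits, so an early stop gives an approximation, not garbage.
    guess = x if x > 1 else 1.0
    while abs(guess * guess - x) > tol * max(x, 1.0):
        guess = 0.5 * (guess + x / guess)
    return guess

print(newton_sqrt(2.0))  # ~1.41421356...
```

This is the opposite failure profile of the 99.7%-correct model: it's approximately right all of the time rather than exactly right most of the time.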
99.7% of the time good and 0.3% of the time noise is not very useful, especially if there is no confidence signal indicating which answers are probably incorrect.
Abstract—WhatsApp, with 3.5 billion active accounts as of early 2025, is the world's largest instant messaging platform. Given its massive user base, WhatsApp plays a critical role in global communication.

To initiate conversations, users must first discover whether their contacts are registered on the platform. This is achieved by querying WhatsApp's servers with mobile phone numbers extracted from the user's address book (if they allowed access). This architecture inherently enables phone number enumeration, as the service must allow legitimate users to query contact availability. While rate limiting is a standard defense against abuse, we revisit the problem and show that WhatsApp remains highly vulnerable to enumeration at scale. In our study, we were able to probe over a hundred million phone numbers hourly