> Is a piece of software really only of value to the open source community if any kind of unscrupulous use of it is allowed?
It's not even open source in the first place if any kind of unscrupulous use of it is disallowed, as that would be discriminating on use case. It ultimately doesn't matter much to the open source community, since it effectively can't be used in otherwise open source projects: the result wouldn't be open source, and it is going to be license-incompatible with many projects anyway.
That said, I find it preposterous to accept this notion even ignoring that point. You shouldn't have to take it on faith that what you're doing is allowed by the copyright license—the whole point of the license is to make that clear. Everybody always shrugs off the risk of a malicious owner until Oracle acquires their dependencies.
I understand that it's not open source. I just see it as like, a spot where a company that would normally make a closed source product wanted to make it more open and hackable and did actually put the code up and take contributions, which should be a kind of good thing, but it's automatically assumed to be the worst, a rugpull, etc. What if I operated in an ethical gray area right around this pretty reasonably worded term?
I was trying to make the point that "unfree" software is not really useful at all to the "open source" community, and not because of terminology nitpicking but because of the consequences that has.
But anyway, my problem with a license like this is indeed the existence of gray areas. Open source licenses are in some ways clever attempts to make a social contract into a legal obligation. It isn't perfect, but the side effect is that you don't have to take it on faith that people will follow it: people can be sued for violating it, and depending on how that Vizio case goes, it's not just the copyright holders who are eligible.
But that's a two way street. In return, I shouldn't have to take it on faith that my use case is legal according to the copyright license: it should be clear as day with no room for interpretation. If it's not, then my best hope is to simply never get sued. That is not good. Hope is not a strategy here, not for individuals and not for corporate users.
Business/"fair" licenses seem to offer a good compromise, but it's a mirage: the software still has to be treated a bit like toxic waste in Linux packaging, won't be compatible with strong copyleft licenses, and ultimately, presents an uneven playing field for contributors.
There isn't much to be excited about from a hacking PoV.
With projects like these, you're probably already going to be submitting your code under an unconditional CLA, which essentially forfeits your rights as a contributor; then, if it's this license, you are also giving the original copyright owner more rights to use your contribution than you even have.
I don't think this is a good or healthy status quo at all.
The only upside of this is that it protects someone's business model from competition. Well good for them.
But making the license look like MIT is just a bit of cosplay, yet another attempt to try to push something as being open source when it's not. This cognitive dissonance can't go unnoticed; it really does trick people if they don't fully think through the consequences. You're better off going with a license that makes no attempt to pass itself off as open source.
> And pointless here, since everything runs under the same uid. You need to authenticate this is the same browser that stored this secret, not that this is the same uid (useless), or the same pid, or any other concept that unix domain socket authentication understands.
I disagree. With UNIX domain sockets it is absolutely possible to determine the PID of the process that you are talking to and use pidfd to validate where it is coming from. Would be entirely possible to use this for policy.
> In fact, if you do not need a shared secrets service, and your applications are containerized... why do you need a secrets IPC at all? Just let each program store its secrets in some of its supposedly private storage...
And how exactly does the app container service store something encrypted securely on disk? That's literally the point of a secrets service on a modern desktop. It usually gets key material in the form of a user password carried to it from PAM, in order to allow on-disk encryption without needing separate keyring passwords. (And yeah, sure, this could use a TPM or something else to avoid the passwords, but the point is no different: it shouldn't be each application's job to individually manage its own way of getting secure storage; that's a recipe for data loss and confusion.)
> Much better to have a million non-extendable protocols competing with each other. To this day there are two protocols (at least) for exposing the address of the DbusMenu service of a surface, one for gnome-shell and one for kwin. So much for the uglyness of X atoms. And this has nothing really to do with the design of the IPC mechanism itself...
That's a problem that occurs because the protocols have multiple distinct implementations. Most of the dbus services don't have to deal with that problem at all. (And the ones that do, tend to have problems like this. There are plenty of weird incompatibilities with different XDG desktop portal implementations.)
I'm pretty sure the point of bringing up xdg-shell is because the new bus is inspired by the Wayland protocol. For all of the incessant bitching about Wayland online, Wayland protocols are only about 1000x nicer to use than dbus. You can actually do decent code generation for it without having to have like 5 competing ways to extend the XML to add basic things like struct member annotations (and then have things like Qt's own DBus code generator unable to actually handle any real DBus service definitions. Try throwing the systemd one at it, doesn't fucking work. The code doesn't even compile.)
> determine the PID of the process that you are talking to and use pidfd to validate where it is coming from.
The pidfd_open() man page doesn't list many things that can be done with a pidfd. What sort of validation do you have in mind?
I would love to have a reasonably snoop-proof secret storage service whose security model works with normal programs (as opposed to requiring Flatpaks or the like).
My reasoning behind the pidfd thing was just to try to avoid race conditions, though on second thought maybe it's not needed. I think you can take your pick on how exactly to validate the executable. My thought was to check (via /proc/.../exe) that the file is root-owned (and in a root-owned directory structure) and then use its absolute path as a key. Seems like it would be a decent start that would get you somewhere on any OS.
I think it would also be feasible to add code signatures if we wanted to, though this would add additional challenges. As I noted elsewhere any scheme that wants to provide a true security boundary here would need to deal with potential bypasses like passing LD_PRELOAD. Still, I think that it has to be taken one step at a time.
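For illustration, here's a rough sketch of what that could look like on the receiving end of a UNIX domain socket. The function name and the root-owned policy are just my reading of the above, and per the LD_PRELOAD caveat this alone is not a complete security boundary:

```c
/* Sketch: identify a UNIX-domain-socket peer and apply a root-owned-binary
 * policy. Assumes Linux >= 5.3 for pidfd_open; error handling abbreviated. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Returns 0 and fills exe_path (usable as a policy key) on success. */
static int validate_peer(int conn, char *exe_path, size_t len) {
    struct ucred cred;
    socklen_t cred_len = sizeof(cred);
    if (getsockopt(conn, SOL_SOCKET, SO_PEERCRED, &cred, &cred_len) < 0)
        return -1;

    /* Pin the peer process so we can detect PID reuse afterwards. */
    int pidfd = (int)syscall(SYS_pidfd_open, cred.pid, 0);
    if (pidfd < 0)
        return -1;

    /* Resolve the peer's executable via /proc/<pid>/exe. */
    char link[64];
    snprintf(link, sizeof(link), "/proc/%d/exe", (int)cred.pid);
    ssize_t n = readlink(link, exe_path, len - 1);
    if (n < 0)
        goto fail;
    exe_path[n] = '\0';

    /* Example policy: require a root-owned binary. (A fuller version would
     * also walk the parent directories, as suggested above.) */
    struct stat st;
    if (stat(exe_path, &st) < 0 || st.st_uid != 0)
        goto fail;

    /* If the original process died, the PID (and /proc entry) we just read
     * may have been recycled; signal 0 checks the pinned process is alive. */
    if (syscall(SYS_pidfd_send_signal, pidfd, 0, NULL, 0) < 0)
        goto fail;

    close(pidfd);
    return 0;
fail:
    close(pidfd);
    return -1;
}
```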
> With UNIX domain sockets it is absolutely possible to determine the PID of the process that you are talking to and use pidfd to validate where it is coming from.
Validate what? You're just moving the responsibility to whatever answer you give here. If you say "validate the exec name is firefox-bin" then the next person who comes in will say "I hate $your_new_fangled_ipc, you can make it dump all your secrets by renaming your exec to firefox-bin". (This is just an example).
> And how exactly does the app container service store something encrypted securely on disk? That's literally the point of a secrets service on a modern desktop.
The more I think of it, the less sense this makes. If you already have a system where applications cannot read each other's data, what is the point of secret service? What is the security advantage?
If you want to encrypt with TPM, fingerprint, or anything else, that's encryption, which is separate from storage (you can encrypt the password with say a PCR but the application gets to store the encrypted password in any way they want).
Password encryption in the desktop keyrings is for the situation where every application can read each other's data files easily (again, as on the desktop). In that case, it may make sense to use encryption so that such data is not (trivially) accessible from any other application (otherwise https://developer.pidgin.im/wiki/PlainTextPasswords applies).
If your applications are already running sandboxed, a keyring sounds to me like useless complexity? Just make each application store its data into its sandbox. What's the threat vector here, that super-user-that-can-escape-sandbox can read into the sandboxes and extract the password?
> You can actually do decent code generation for it without having to have like 5 competing ways to extend the XML to add basic things like struct member annotations (and then have things like Qt's own DBus code generator unable to actually handle any real DBus service definitions. Try throwing the systemd one at it, doesn't fucking work. The code doesn't even compile.)
Yes sure, another problem resulting from the lack of standardization. But my point was -- standardize (write a spec), instead of adding to the problem by creating yet another competing standard, which will obviously NOT solve the problem of lack of standardization.
> Validate what? You're just moving the responsibility to whatever answer you give here. If you say "validate the exec name is firefox-bin" then the next person who comes in will say "I hate $your_new_fangled_ipc, you can make it dump all your secrets by renaming your exec to firefox-bin". (This is just an example).
I'm genuinely kind of surprised people are tripping up on this. Obviously, what you validate is up to you, but you can. Why stick to just the base name? Why not the absolute path? Bonus points for ensuring it's a root owned file in root owned paths. You could special case Flatpak, or specific mount points, or go crazy and add signatures to binaries if you want. The policy would obviously vary strongly depending on the system, but if you were dealing with a secure booted system with dm-verity, or something similar, well then this mechanism should be fairly watertight. It's not really the end of the world if there are systems with different security characteristics here.
You can really get creative.
(It is worth noting, though, that this could be trivially bypassed in various ways, like with LD_PRELOAD, so to be a true security boundary it would need more thought. Still, it could definitely be improved in numerous ways.)
> The more I think of it, the less sense this makes. If you already have a system where applications cannot read each other's data, what is the point of secret service? What is the security advantage?
Well, the obvious initial benefit is the same thing that DPAPI has had for ages, which is that it's encrypted on-disk. Of course that's good because it minimizes the number of components that will see the raw secret and ensures that even other privileged processes can't just read user secrets. Defense in depth suggests that it is a feature, not a problem, if multiple security mechanisms overlap. Bonus points if each would be sufficient to prevent attacks on its own.
An additional case worth considering is when the home folder is stored elsewhere over a network filesystem, as in some more enterprise use cases.
> If you want to encrypt with TPM, fingerprint, or anything else, that's encryption, which is separate from storage (you can encrypt the password with say a PCR but the application gets to store the encrypted password in any way they want).
It would be ill-advised to have each application deal with how to encrypt user data. They can store key material in the keyring instead of the data itself if they want to handle storage themselves. (I'm pretty sure this is actually being done in some use cases.)
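To make that pattern concrete, here's a minimal sketch using libsecret (the standard client library for the Secret Service API); the schema name and "profile" attribute are hypothetical:

```c
/* Sketch: store and retrieve key material via the Secret Service API using
 * libsecret. Build: gcc sketch.c $(pkg-config --cflags --libs libsecret-1) */
#include <libsecret/secret.h>
#include <stdio.h>

/* Hypothetical schema identifying this app's entries in the keyring. */
static const SecretSchema example_schema = {
    "org.example.DataKey", SECRET_SCHEMA_NONE,
    {
        { "profile", SECRET_SCHEMA_ATTRIBUTE_STRING },
        { NULL, 0 },
    }
};

int main(void) {
    GError *error = NULL;

    /* Store a (hypothetical) data-encryption key; the actual user data
     * stays in the app's own storage, encrypted with this key. */
    secret_password_store_sync(&example_schema, SECRET_COLLECTION_DEFAULT,
                               "Example data key", "hunter2-key-material",
                               NULL, &error,
                               "profile", "default",
                               NULL);
    if (error) {
        fprintf(stderr, "store failed: %s\n", error->message);
        g_clear_error(&error);
        return 1;
    }

    /* Later: fetch it back to unlock the locally stored data. */
    gchar *key = secret_password_lookup_sync(&example_schema, NULL, &error,
                                             "profile", "default",
                                             NULL);
    if (key) {
        /* ... unwrap and decrypt the app's own data here ... */
        secret_password_free(key);
    }
    return 0;
}
```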
> Password encryption in the desktop keyrings is for the situation where every application can read each other's data files easily (again, as on the desktop). In that case, it may make sense to use encryption so that such data is not (trivially) accessible from any other application (otherwise https://developer.pidgin.im/wiki/PlainTextPasswords applies).
That page exists to explain why they don't bother, but part of that is that there just isn't an option. If there actually were an option, well, it would be different.
> If your applications are already running sandboxed, a keyring sounds to me like useless complexity? Just make each application store its data into its sandbox. What's the threat vector here, that super-user-that-can-escape-sandbox can read into the sandboxes and extract the password?
The threat vector is whatever you want it to be; there are plenty of things this could be useful for. The reality is that Linux desktops do not run all programs under a sandbox and we're not really headed in a direction where we will do that, either. This is probably in part because on Linux most of the programs you run are inherently somewhat vetted by your distribution and considered "trusted" (even if they are subject to MAC like SELinux or AppArmor, as in SUSE), so adding a sandbox feels somewhat superfluous and may be inconvenient (file access in Bottles is a good example). But even in a world where all desktop apps run in bubblewrap, it's still nice to have extra layers of defense that complement each other. And even if something or someone does manage to access your decrypted home folder data, it's nice if the most sensitive bits are protected.
> Yes sure, another problem resulting from the lack of standardization. But my point was -- standardize (write a spec), instead of adding to the problem by creating yet another competing standard, which will obviously NOT solve the problem of lack of standardization.
The reason why people don't bother doing this (in my estimation) is because DBus is demoralizing to work on. DBus isn't a mess because of one or a couple of issues, it is a mess because from the ground up, it was and is riddled with many, many shortcomings.
And therein lies the rub: if you would like to have influence in how these problems get solved, you are more than welcome to go try to improve the DBus situation yourself. You don't have to, of course, but if you're not interested in contributing to solving this problem, I don't see why anyone should be all that concerned about your opinion on how it should be fixed.
> I'm genuinely kind of surprised people are tripping up on this. Obviously, what you validate is up to you, but you can. Why stick to just the base name? Why not the absolute path? Bonus points for ensuring it's a root owned file in root owned paths.
Because you do not get it: this is not Android. There are no fixed UIDs. There are no fixed absolute paths. The binaries are not always root-owned. There is no central signing authority (thank god!). You really do not get it: _anything_ you could validate from a PID would be absolutely pointless on desktop Linux.
> You could special case Flatpak, or specific mount points, or go crazy and add signatures to binaries if you want.
Or, if you are assuming Flatpak, you could simply not allow access to the session bus, and instead allow access only to a filtered bus that only allows talking to whichever services Flatpak provides. Which is how Flatpak does it, and it sidelines the entire problem of having to authenticate clients on the bus, which is a nightmare. The entire process tree descending from the original Flatpak session gets access to this bus and only to this bus.
> that even other privileged processes can't just read user secrets. Defense in depth suggests that it is a feature, not a problem, if multiple security mechanisms overlap. Bonus points if each would be sufficient to prevent attacks on its own.
I really do not see the point of this. Of course I want privileged processes to be able to see my passwords; this is _my_ desktop.
I do not see why you'd have your "sandboxed apps" store their private data but then have another storage that is "more secure" for whatever your definition of secure is.
You'd just put the data in the "more secure" storage to begin with.
What you're describing is not another layer of security, it is just pointless complication. As I said, the more I think of it, the less reason I see for a secret service which does not really share secrets.
You reach stupid conclusions like having to design a key-value DB server that only returns values to the process that inserted them in the first place, like what TFA is doing. Why? Just why??? Have multiple totally separate, private instances! And you already have one storage for that: the app's private storage. Why do you even need IPC for this?
> It would be ill-advised to have each application deal with how to encrypt user data.
Why? You don't give a reason why not. Every application does this _today_, and no IPC has ever been needed for it (e.g. OpenSSL is a library, not a service).
> The reality is that Linux desktops do not run all programs under a sandbox and we're not really headed in a direction where we will do that, either.
In which case, my entire remark does not apply and there is some (minor) benefit to a keyring.
> DBus isn't a mess because of one or a couple of issues, it is a mess because from the ground up, it was and is riddled with many, many shortcomings.
This is a circular argument. D-Bus is a mess because it is a mess. Even if I would agree, it is a pointless argument.
> If you would like to have influence in how these problems get solved, you are more than welcome to go try to improve the DBus situation yourself. You don't have to, of course, but if you're not interested in contributing to solving this problem, I don't see why anyone should be all that concerned about your opinion on how it should be fixed.
I am answering a guy who says that D-Bus sucks, then proceeds to create an alternative instead of fixing it. I have not only contributed to D-Bus over the decades, I am also part of the reason it is used in some commercial deployments outside traditional desktop Linux (or was a decade ago). My opinion is still as important as his, or yours, which is: nothing at all.
Remember when GPT-3 came out and everybody collectively freaked the hell out? That's how I've felt watching the reaction to any of the new model releases lately that make any progress.
I'm honestly not complaining about the model releases, though. Despite their shortcomings, they are extremely useful. I've found Gemini 3 to be an extremely useful learning aid, as long as I don't blindly trust its output; and if you're trying to learn, you really ought not do that anyway. (Despite what people and benchmarks say, I've already caught some random hallucinations; it still feels like you're likely to run into them on a regular basis. Not a huge problem, but, you know.)
IMO this is the best approach, but it is worth noting that musl libc is not without its caveats. I'd say for most people it is best to tread carefully and make sure that differences between musl libc and glibc don't cause additional problems for the libraries you are linking to.
There is a decent list of known functional differences on the musl libc wiki:
Overall, though, the vast majority of software works perfectly or near perfectly on musl libc, and that makes this a very compelling option indeed, especially since statically linking glibc is not supported and basically does not work. (And obviously, if you're already using library packages that are packaged for Alpine Linux in the first place, they will likely already have been tested on musl libc, and possibly even patched for better compatibility.)
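As a quick sanity check of the approach, a trivial program statically linked against musl really is self-contained. This assumes a glibc-based distro with the musl-gcc wrapper installed; on Alpine, plain gcc already targets musl:

```c
/* hello.c
 * Build:  musl-gcc -static -o hello hello.c
 * Check:  ldd ./hello   ->  "not a dynamic executable"
 */
#include <stdio.h>

int main(void) {
    printf("hello from a fully static binary\n");
    return 0;
}
```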
Sometimes I am baffled at what gets onto the frontpage at HN, reminding us all that the people who vote stories and the people who comment on them are less of an overlapping group than you might think. I can understand the desire to have names that are more descriptive, but to claim we have "lost the plot" while holding up names like "awk" is contradictory at best. It sounds more like this person just had a personal vendetta against cute sounding names, not against the names being uselessly non-descriptive. In my opinion, the way this post is framed at the outset is misleading.
— This comment brought to you via Firefox, which obviously from its name, is a web browser.
> but to claim we have "lost the plot" while holding up names like "awk" is contradictory at best
My argument is that even a name like awk was more relevant to the people who used this software back then; of course it was not the best way to name it, but at least it held some meaning. Unlike modern software, awk and others were not written with a wide user-base in mind. Regarding whether we "lost the plot" or not, I believe that we did, because as mentioned, in the 80s there was a current of people who named software conventionally, and up to the 2010s the names still used to hold some rationale even when word-played or combined with cutesy names.
> It sounds more like this person just had a personal vendetta against cute sounding names, not against the names being uselessly non-descriptive.
Not at all, I find it quite fun, just unprofessional.
--
Sent by replying to an automated RSS email, via msmtp (a light SMTP client which, unlike Firefox, is not a consumer product, and whose name has to do with its function).
> My argument is that even a name like awk was more relevant to the people who used this software back then; of course it was not the best way to name it, but at least it held some meaning. Unlike modern software, awk and others were not written with a wide user-base in mind. Regarding whether we "lost the plot" or not, I believe that we did, because as mentioned, in the 80s there was a current of people who named software conventionally, and up to the 2010s the names still used to hold some rationale even when word-played or combined with cutesy names.
I don't personally get it. I can see the argument for names that are descriptive, because a descriptive name might be useful. Meanwhile though, a name like awk is only useful if you already happen to know what it stands for, which to me seems a little silly. Relevant? Maybe... But to what end?
> Not at all, I find it quite fun, just unprofessional.
Why do you consider it "unprofessional"? This seems like a cultural thing. For example, in Japan, it seems like it is not unusual to see cute illustrations in otherwise professional contexts. I am not sure there is a standard for professionalism that is actually universal.
Disregarding that, okay, fine: let's say that naming software after irrelevant things is unprofessional. Why should we care?
Software developers have spent at least the past couple decades bucking trends. We went to work at white collar offices wearing khakis and t-shirts, with laptops decked out in stickers. Now I'm not saying this is all software developers, but it is certainly enough that it is a considerably recognizable part of the culture.
Professionalism, in my eyes, is descriptive, not prescriptive. If professional software engineers normally name things with cute nonsense names, then that is professional for our industry.
I can see the usefulness in descriptive names because they serve a purpose, but names that are merely somehow relevant, while otherwise not telling you anything useful, seem just as useless as nonsense names, and justifying the distinction with "professionalism" feels odd.
> Sent by replying to an automated RSS email, via msmtp (a light SMTP client which, unlike Firefox, is not a consumer product, and whose name has to do with its function).
Note how this also neatly works as a strong argument against descriptive names. RSS? msmtp? We're now drowning in acronyms and initialisms. I don't particularly have anything against these names (I mean, I use msmtp and the name certainly doesn't bother me) but the utility of the name RSS is quite limited and the vast majority of people probably don't really know what it stands for (to my memory it is Really Simple Syndication, but it may as well be just about anything else, since that doesn't help me understand what it is truly useful for.)
But you do hit on an interesting point that probably helps explain to some degree what's going on here: even for small CLI utilities, more often than not programmers doing open source are actually bothering to work on the marketing and deployment of their software. When I was younger a lot of open source was still more decentralized, with many programmers just dropping tarballs periodically and Linux distros (and others) taking care of delivering the software to users in a usable form. Part of trying to deliver a holistic product is having a memorable name.
msmtp may not be developed as a product, but in practice, almost all software is like a product. Someone is a "consumer" of it. (Even if that person is also a producer of it.) People get "sold" on it. (Even if it's free.) How it's marketed definitely depends on the sensibilities of the developers and the target audience but I'd argue almost all software is "marketed" in some form even if it is non-commercial and not packaged like a product. (Even something like GNU's landing pages for things like Coreutils is arguably a very, very light bit of marketing)
The actual truth is that software with more care put into its marketing is arguably more professional. The professionalism of having a "relevant" name is rather superficial in my eyes, but having concise "marketing" that "sells" your software well to its intended audience and provides good resources for users is professionalism that makes a difference. Likewise, plenty of things delivered more like products do have relevant names! For example, KeePass and Syncthing come to mind immediately.
So whether the next great email server suite is "SMTPMailSuite" or "Stalwart" is mostly immaterial, but I'm not surprised when marketing-conscious developers choose memorable names. (Obviously in the case of Stalwart there's a company behind it, so having good marketing is absolutely in their best interest.)
Another downside of a descriptive name is that software evolves over time and a name that is too descriptive could stop being relevant eventually. Off the top of my head it's hard to think of a specific example, but you can see software that evolves this way all the time. (One example on the Linux desktop is how KWin went from being an X11 Window manager to a full-blown display server compositor during the Wayland transition; but that name obviously still works just fine.)
Fine, but then my critique moves over: the article should do a better job of conveying what the argument is and why it matters.
It opens with rms complaining about the names in the emacs ecosystem not being descriptive enough. OK. But the author argues (in these comments) that their argument isn't against names that aren't descriptive, it's just that the name ought to be relevant, and the reason why is because that is more professional.
Now I am paraphrasing, so maybe I am not understanding the argument correctly, but I don't think that strengthens the case for this at all. If anything, it raises the question: why? (And I'm not sure rms would particularly buy this argument either, given that he hails from hacker culture and seems perfectly happy to break social conventions. rms does not strike me as someone who is highly 'professional' in a traditional sense. This is not an indictment.)
I don't believe your comment is just a direct dump out of an LLM's output, mainly because of the minor typo of "acquihires", but as much as I'd love to ignore superficial things and focus on the substance of a post, the LLM smells in this comment are genuinely too hard to ignore. And I don't just mean because there are em-dashes, I do that too. Specifically, these patterns stink very strongly of LLM fluff:
> leadership credibility isn’t a soft factor—it’s a structural risk.
> The Timeline/The Big Players/The "Pre-Product" Unicorns/The Downstream Impact
If you really just write like this entirely naturally then I feel bad, but unfortunately I think this writing style is just tainted.
This raises a question I've genuinely pondered before: why don't we just strap a battery to a kettle and end this silly debate? If it takes 5 minutes to boil a cup of water in a 1000 watt kettle, that's somewhere around 80Wh... I guess it would be kind of expensive, but couldn't you make a pretty fast kettle with some number of high-discharge battery cells?
(Well honestly, I guess the real answer is outside of Internet debates most people probably just don't consider 5 minutes to boil a cup of water to be a problem.)
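For what it's worth, the back-of-the-envelope math checks out, taking the 5-minute, 1000 W figures above at face value:

$$E = 1000\,\mathrm{W} \times 300\,\mathrm{s} = 300\,\mathrm{kJ} \approx 83\,\mathrm{Wh}$$

Assuming typical high-drain 18650 cells at roughly 9 Wh each (my assumption), that's on the order of 9-10 cells per boil, and at 1000 W a series string of them would be pushing roughly 30 A, right around the edge of what such cells are rated for.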
It would turn an inert device that costs a couple bucks to manufacture and has effectively no usage limit into a bomb that costs a couple hundred bucks (due to lack of economy of scale) and is limited by the battery's rated number of cycles. The battery's proximity to the heat source wouldn't help.
If people are willing to rewire their homes for kettles, I guess a couple hundred bucks isn't that bad.
> limited by the battery's rated number of cycles
Obviously the battery should be replaceable. (It should be in most electronics, really...)
> The battery's proximity to the heat source wouldn't help.
That doesn't seem like a particularly tricky problem to me. The standard kettle already tries as hard as possible to insulate the heat. If you were really worried, it'd probably be possible to put the battery in a separate power brick instead.
...
And I guess I could've solved my own problem by googling it. There are tons of battery kettles on the market, including a 1500W one by Cuisinart and a 2200W (apparently?) unit by Makita. The latter is predictably expensive but the Cuisinart is available for around $100 where I live, which is definitely pricey but seems plausible.
The only one I found that was truly battery-powered was the Makita [0]. The $99 Cuisinart I found seems to be a standard electric kettle. Lots of kettles describe themselves as cordless but that does not mean battery-powered; it just means the kettle itself can be removed from a corded base.
I also found a ton of AI-generated link spam pages purporting to be about battery-powered kettles that are all clearly not battery-powered (e.g. [1]). Some of these are 12v powered, but they still contain no batteries. Apparently the adjective cordless confuses AI just like it does people.
Side note: Boiling water takes a lot of energy. You need a big battery; not just a couple of AAs. Any truly battery-powered kettle is going to require a battery at least as big as one for a contractor-grade power tool, and that battery is going to deplete after roughly one boiled pot.
> Obviously the battery should be replaceable. (It should be in most electronics, really...)
This is super wasteful when we can just hook up a heating element to an insulated tank and keep it hot like Quooker [0] does. Assuming the 3L tank, that would mean probably 20 minutes to heat the tank from empty on a US circuit, but that's how long it would take to boil that water with an electric kettle _anyway_. If you want 5L of water for cooking, you can use your 3L tank and fill it up with the "slightly lukewarm water that keeps coming through the tap", and then put it on the hob _anyway_. In the best case you're boiling 2L of water instead of 5 anyway.
> That doesn't seem like a particularly tricky problem to me. The standard kettle already tries as hard as possible to insulate the heat. If you were really worried, it'd probably be possible to put the battery in a separate power brick instead.
Dunno what kettle you're using but no kettle I've ever used has been insulated. They're either plastic, or stainless steel. They do usually have a lid, which helps.
It doesn't have to be insulated like an insulated water bottle or anything, plastic is good enough for this. I have a cheap 120V kettle, nothing special, probably mostly plastic but with some superficial bits of stainless steel. After bringing a cup of water to a boil you can safely touch the base and anywhere on the kettle itself; there's not even an obvious sign of warmth anywhere except for the lid. If you don't believe me, I do have a thermal camera, but I assume this can be reproduced with most kettles, since it's not like mine is anything special.
Also: a hot water tank is just another type of battery. If it's really well insulated, it might work pretty well, but the self-discharge rate is probably still a lot higher than a lithium-ion battery's. If you aren't using boiling water every day, this seems like it would be very wasteful.
I don't see anything terribly wasteful about the concept of putting batteries in a few more things. They're very recyclable, and already extremely abundant. It's not necessary, but neither is pushing several kW through a kettle just to get water to boil a bit faster. So really, that might be worth interrogating first...
> It doesn't have to be insulated like an insulated water bottle or anything, plastic is good enough for this.
Yeah I agree, but I was responding to the point of:
> The standard kettle already tries as hard as possible to insulate the heat
Which isn't true at all. They make a token effort.
> Also: a hot water tank is just another type of battery.
You're technically correct, the worst kind of correct.
> I don't see anything terribly wasteful about the concept of putting batteries in a few more things. They're very recyclable, and already extremely abundant. It's not necessary, but neither is pushing several kW through a kettle just to get water to boil a bit faster. So really, that might be worth interrogating first...
It takes ~320 kJ of energy to bring a litre of water from room temp to boiling, no matter what way you spin it. The difference between pushing 1500w or 3kW into the hot plate is "how quickly do you get to boiling", and has basically no bearing on the total amount of energy used to boil the water. Running a 1500w kettle for twice as long will use the same amount of energy, from the same source.
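Concretely, with that ~320 kJ figure, the wattage only changes the wait:

$$t = \frac{E}{P}: \qquad \frac{320\,\mathrm{kJ}}{1500\,\mathrm{W}} \approx 213\,\mathrm{s} \approx 3.6\,\mathrm{min}, \qquad \frac{320\,\mathrm{kJ}}{3000\,\mathrm{W}} \approx 107\,\mathrm{s} \approx 1.8\,\mathrm{min}$$

Same energy drawn either way; only the time differs.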
Using consumable li-ion/alkaline batteries to supplement that energy is _terribly_ wasteful - we've been through the "reduce reuse recycle" loop already with waste, let's not do the same thing with rare earth metals to avoid running a single cable to household appliances.
> Which isn't true at all. They make a token effort.
Look, the point was whether or not it would be okay to put batteries on it, not whether it would keep a drink warm for 12 hours. If the base is cool to the touch, I think it will be completely fine for batteries to be near it. If anything, making sure they're safe from shorting is probably a bigger concern.
> You're technically correct, the worst kind of correct.
The point wasn't to be technically correct, it's to point out that you can compare the properties of the two types of batteries like-for-like and realize that for many people interested in a faster kettle the boiling water tank idea might not be great. In America most homes have a water heater and it has to contend with the same sort of problem, only we use hot water multiple times a day every day (and at least in the Midwest, use LNG for heating it a lot of the time, which makes it economical if not particularly environmentally friendly.)
> It takes ~320 kJ of energy to bring a litre of water from room temp to boiling, no matter what way you spin it. The difference between pushing 1500w or 3kW into the hot plate is "how quickly do you get to boiling", and has basically no bearing on the total amount of energy used to boil the water. Running a 1500w kettle for twice as long will use the same amount of energy, from the same source.
Well duh. My very first post in this thread is estimating how much energy is required for a typical kettle to bring a U.S. cup of water to a boil. (Though obviously in reality you have to account for losses.)
My point here is that (a relatively small niche of) people are already doing crazy things like rewiring their houses (in America) to push pretty absurd power into kettles just to boil water slightly faster, a time save that literally only even matters if you sit there and wait idly while the water heats up. The problem I have isn't that higher wattage kettles are somehow bad, it's that all of this time, effort and money for a time save measured in minutes is crazy. And it's the same for strapping batteries to a kettle or for keeping a water tank of boiling water too. I wouldn't bother with any of them, and don't. (But, as I opened this thread with, seeing how crazy people get over this, I do remain surprised at the relatively few battery kettles on the market.)
> Using consumable li-ion/alkaline batteries to supplement that energy is _terribly_ wasteful - we've been through the "reduce reuse recycle" loop already with waste, let's not do the same thing with rare earth metals to avoid running a single cable to household appliances.
I just counted, and the room I'm currently standing in has 8 separate high-capacity lithium-ion batteries. We put batteries in our power tools, laptops, vacuum cleaners, toothbrushes, game controllers, wireless computer peripherals, air compressors, UPS units, the phone someone is currently reading this comment on, air dusters, garden lighting and certainly much more. Almost everything with electronics in it has batteries for something (if you include smaller ones like clock batteries), and more often than ever, high-capacity ones no less.
A battery-operated kettle will forever be an expensive niche product, and it wouldn't even use that much battery in the first place. The environmental impact of all of those batteries would struggle to get to the level of 100 electric vehicles, and yet we are selling over 10 million of those per year.
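Rough numbers, assuming a ~75 kWh EV pack and a generous ~0.1 kWh pack per kettle (both figures my assumptions):

$$\frac{100 \times 75\,\mathrm{kWh}}{0.1\,\mathrm{kWh\ per\ kettle}} = 75{,}000\ \text{kettle-sized packs}$$

So the batteries for an entire niche market of such kettles really would be a rounding error next to EV production.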
Of all of the contrived and silly arguments, this is by far the most contrived and silliest of all of them.
I'm in the midst of a kitchen remodel (in 120V land).
I decided to pull an extra 240V line to the countertop explicitly for a tea kettle, which I have not purchased yet but which seems to be available from Amazon UK for ~2x the price of an ordinary US-market kettle.
The most disappointing thing so far is the short list of kettle options that ship from the UK to the US.
Also not sure if I should get a UK receptacle (this would probably offend the bldg inspector, so I might swap post-inspection), or just rewire the kettle itself with a standard US (240V) plug.
FWIW, the extra wire + breaker cost was about $100. I expect to pay another $30 or so for the receptacle or appliance wire, and a bit over $100 for the kettle (and its replacements every few years). Not the least expensive option, but not too bad.
Personally I would just wire some NEMA 240V outlet and then have a separate adapter with a pigtail of that receptacle type and a workbox with the UK receptacle. It's a little unwieldy, but it puts the questionable hackery outside the realm of the building inspection at least.
Whether it's actually safe, though, that I am curious about. Obviously the kettle can get the 240V potential it expects, but the neutral is center-tapped out of the split-phase transformer, right? Not sure how people wire this. (Doesn't the neutral wind up having to be one of the hots instead?)
Hmm, yeah! I hadn't thought much about the differences between UK and US 240VAC service.
In the US, it's 240V 60Hz, split-phase with center-tapped neutral, and an independent ground wire.
In the UK, it's 240V 50Hz, single-phase with independent neutral and ground.
Frequency difference should be within design tolerance, and if my EE memory serves, the phase difference should be acceptable -- just measured from a different zero reference point. The neutral from the wall would be unused, and the ground would be wired as usual.
I'll think this through thoroughly though, I was definitely glossing over those details, so thank you!
Basically my concern is, ordinarily the potential from neutral to ground would be roughly 0V with some slack. In this case, though, the potential from neutral to ground would necessarily be 120V. I have no idea what the implications of that may be, but it seems important.
I think this is right, but I'm not 100%. The kettle should get what it needs, but I'm less certain whether a GFCI or ArcFCI breaker would have opinions that must be accounted for. I'll check with someone more qualified than myself to be sure!
Yes I understand. But what I'm saying is, normally neutral and ground would have roughly 0V potential, but in this case the UK neutral and UK ground will have 120V potential between them, because the US 120V second phase will have 120V potential to ground. (It bears noting that I am just a random guy and not any kind of expert. No formal education or credentials relating to electricity whatsoever.)
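To put numbers on it (standard split-phase assumptions): both US hot legs are 120 V referenced to the grounded center-tap neutral, 180 degrees apart:

$$V_{L1} = 120\,\mathrm{V}\angle 0^\circ, \quad V_{L2} = 120\,\mathrm{V}\angle 180^\circ, \quad V_{L1} - V_{L2} = 240\,\mathrm{V}$$

So if the kettle's "neutral" pin lands on L2, that pin sits at 120 V relative to ground, which is exactly the situation anything assuming neutral is near ground might have opinions about.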
I think you're thinking about it on the kettle side, and I was thinking on the breaker side.
I think the kettle side would not care. It may be a ground fault in UK wires, but the kettle has no reason to detect it, and nothing sensitive enough inside to care. If I'm wrong, I'd expect to know shortly after starting the very first use. :)
> Most UK kettles are not 3000W, and most of the ones that are, are junk.
They may not be 3 kW, but even the most basic of them are 2200W [0], and 3000W ones are readily available and not much more expensive [1]. They're also not really junk - they're a lump of plastic, a hot plate and a thermistor - the difference between an £8 one and an £80 one is almost all aesthetics.
I watched the video already before this HN thread, being a Technology Connections subscriber, but I genuinely forgot or missed that it discussed that aspect. I'm not surprised, though.
No need to rewire anything - just get a universal plug adapter for NEMA 6-15P (or whatever your kitchen outlet is going to be) from Amazon, plug it onto the UK plug of your kettle, and Bob’s your uncle. (The building inspector doesn’t need to even see your kettle and plug.)
The molded, sealed plug of a UK kettle would fare much better in a wet kitchen environment than an aftermarket plug you'd manually install (moisture can get inside and corrode the terminals and connections).
I agree. If I replace the wire, I'd get an assembly with the correct US molded plug (NEMA 14-30?), and perform the wire replacement inside the kettle itself. Your reason is good, but I'd do it that way for the aesthetics alone. :)
It's probably just the price of batteries. You can definitely do this and you'd need like 8 18650 batteries, which today you can get on amazon for $30 USD. A decade ago it might have cost $200-$300.
Given that premium kettles already sell for about $100, there's definitely room for an ultra premium kettle that boils water laughably fast for $150.
I believe their master plan foresees a future where batteries are more integrated with a house for decentralized grid storage. But the additional consumer advantage is better hardware, i.e. cooking time.
That seems a terrible waste of batteries to me. A boiling-water tap seems like a better idea: an electric heater with a pressurised insulated vessel that just dispenses from your tap.
Personally, I am glad to see it. I definitely got vaccinated as soon as I could, but I was also still nervous as there did seem to be some level of reasonable doubt. I would be happy to see more studies confirm what many consider to be obvious.
> before approving the vaccine, it has to pass a few trials to prove it's effective and safe
In case this comment has you temporarily hallucinating like it did me, I just looked and was able to confirm what I remembered: the vaccines did undergo trials for efficacy and safety before being approved.
I think the part that people doubt is the highly compressed timeline for approval. Hard to anticipate long term effects when something has only been tested for a short period of time. Also during this time the pitch degraded from “you won’t get sick or spread the disease” to “well I still got sick, but it probably would have been worse without the vaccine”. It is actually crazy to think about in retrospect.
> during this time the pitch degraded from “you won’t get sick or spread the disease” to “well I still got sick, but it probably would have been worse without the vaccine”
This line of thinking is so odd to me. Would you have preferred communications to use inaccurate, outdated points for the sake of consistency?
When honest interlocutors learn more about something, they communicate details more accurately. What would you have suggested they do instead? Keep in mind that Covid-19 was as new to them as it was to the rest of the world, and they were also learning about it in real time.
> Hard to anticipate long term effects when something has only been tested for a short period of time
This also applies to Covid infections in immunologically naive people! The two choices were unvaccinated Covid exposure or vaccinated Covid exposure. It's folly to pretend an imagined third option of zero Covid exposure. Comparing to that fake third option does not make any sense.
I'd like accurate communication from the beginning.
>> “you won’t get sick or spread the disease”
I read that many times. It was a totally unrealistic promise, because not even all the other vaccines do that, even after years of research and improvements. (In particular, there is a big trade-off between the injectable and oral vaccines for polio.)
Who is the highest ranking person that said it? I guess it was not one of the researchers. Perhaps it was a politician that is probably a lawyer and not a medical doctor, or perhaps a tv show host, or perhaps a random internet commenter. Who hallucinated that?
>> “well I still got sick, but it probably would have been worse without the vaccine”
Actually, that was what the trials showed before the vaccines were approved. I think they had like 50k persons each. The number of deaths was too small to give a statistically significant result for the death toll, but it was enough to show a statistically significant reduction in hospitalizations, ranging from about a 60% reduction for old-style inactivated-virus vaccines to 95% for the new-style mRNA vaccines. And remember that hospitalization + ventilator is really bad.
> I'd like accurate communication from the beginning.
So you want magic. Got it.
In situations like the one five years ago, perfect understanding of how a new vaccine will interact with a relatively new virus is not going to be available.
Even more, perfect understanding of how good our information is at any given point in time is not always going to be available.
There were definitely some failures to communicate well with the public during that time, but demanding that only definite information be communicated, and then never be contradicted, is asking the impossible.
It also really doesn't help that there were so many people who were (and are) just so scared of everything during that time that any information that wasn't 100% unquestionably positive about a new measure to try to improve things would cause them to shun it forever as too dangerous to try.
> In situations like the one five years ago, perfect understanding of how a new vaccine will interact with a relatively new virus is not going to be available.
Even five years ago, everyone with a minimal knowledge of vaccines understood it was an unrealistic claim, because many of the vaccines don't provide that level of immunity. If you have some free time to go down the rabbit hole, you can try to count them in https://en.wikipedia.org/wiki/Vaccination_policy_of_the_Unit...
I think instead of "magic" what we should have more of is honesty about uncertainty. The public discourse would be much less toxic if people honestly said that they're not sure about something and that the policy they advocate might fail to deliver. However, such rhetoric is immediately exploited as weakness and strongly selected against.
Comparing accurate communication with magic is nonsense.
Both in Europe and the US, the government screwed up badly both mask strategic stockpiles and procurement. Therefore, the official message was that “masks don’t work”.
After they were finally able to procure masks, they magically started working.
That is the real magic, not demanding competence from people whose job was literally to not fuck this up.
Meanwhile China and South Korea were producing and using masks as was normal.
The second magical part is the gaslighting about the performance of institutions tasked with pandemic preparation and about the exaggerated and incompetent government measures like fining people for going outside, forbidding people from going to work without being vaccinated or mandatorily tested each day, etc.
Vaccine safety issues were consistently downplayed by the media and in internet forums like this one. In the end, the EU-CDC published clear information on the safety of the AstraZeneca vaccine and it was much worse than for mRNA vaccines. One mRNA vaccine was worse than the other.
> and modern multiplayer games with anti-cheat simply do not work through a translation layer, something Valve hopes will change in the future.
Although this is true for most games it is worth noting that it isn't universally true. Usermode anti-cheat does sometimes work verbatim in Wine, and some anti-cheat software has Proton support, though not all developers elect to enable it.
It works in the sense that it allows you to run the game, but it does not prevent cheating. Obviously, Windows' kernel anti-cheat is also only partially effective anyway, but the point of open source is to give you control, which includes cheating if you want to.
Linux's profiling is just too good: full, well-documented sources for all libraries and the kernel; even the graphics run through easier-to-understand translation layers rather than signed blobs.
These things do not prevent cheating at all. They are merely a remote-control system that can be sent instructions to look for known cheats. Cheating still exists and will always exist in online games.
You can be clever and build a random memory allocator. You can get clever and watch for frozen struct members after a known set operation. What you can't do is prevent all cheating. There's the device layer, the driver layer, MITM, emulation, and now even AI mouse control.
The only thing you can do is watch for it and send the ban hammer. Valve has a wonderful write up about client-side prediction recording so as to verify killcam shots were indeed, kill shots, and not aim bots (but this method is great for seeing those in action as well!)
That's easy to say. But they do prevent some cheating. Don't believe me? Consider the simplest case: no anti-cheat whatsoever. You can just hook into the rendering engine and draw walls at 50% transparency. That's the worst case.

Now we add minimal anti-cheat that convolutes the binary with lots of extra jumps and loops at runtime. Now someone needs to spend time figuring out the pattern. That effort isn't free. Now people have to pay for cheats. Guess what? Visa doesn't want to handle payment processing for your hacks & cheats business. So now you're using sketchy payment processors based out of a third-world country. Guess what else? People will create fake hacks & cheats websites that use those same payment processors and will just take people's money and never deliver the cheats. You get to try to differentiate yourself from literal scammers; how are you going to do that? You can't put the Visa logo on your website, because you're legit and you don't want to get sued.

Then the anti-cheat adds heuristic detection for cheat processes. The anti-cheat company BUYS the cheats, reverse-engineers them, and improves the heuristics. Then the game company makes everyone sign up with a phone number, and permabans that phone number when they're caught cheating. Now some gamers don't want to risk getting banned.

Saying that these factors simply don't exist or are insignificant is certainly one of the opinions of all time.
100% agree. This is exactly the kind of big-picture thinking that so many people often seem to miss. I did too, when I was young and thought the world was just filled with black-and-white, good-vs-evil dichotomies.
That is not always possible for genres with fast gameplay like most shooters. It's quite common for player movement to be able to put an enemy in view before the light could've round-tripped from the server.
This is generally the anti-cheat problem. Certain genres have gameplay that cannot be implemented without trusting the client at least some of the time.
This is correct; the correct amount of over-sharing by the server is non-zero, because otherwise you give a HUGE advantage to slight ping differences.
It's even worse: the lowest theoretical latency possible based on the speed of light alone is not low enough for the speed of movement in many shooters, if the server hid all immediately invisible information.
What do you do with footsteps and other positional audio? On multiplayer shooter games that's very vital information to let you know an enemy is somewhere behind a wall but cheaters can use it to draw visual markers to pinpoint the enemy player.
I feel like this is the same as saying "seatbelts don't prevent car accident deaths at all", just because people still die in car accidents while wearing seat belts.
Just because something isn't 100% effective doesn't mean it doesn't provide value. There is a LOT less cheating in games with good anti-cheat, and it is much more pleasant to play those games because of it. There is a benefit to making it harder to cheat, even if it doesn't make it impossible.
I don't think that analogy holds because the environment isn't actively in an arms race against seatbelts.
The qualifier "good" for "good anti-cheat" is doing a lot of heavy lifting. What was once good enough is now laughably inadequate. We have followed that thread to its logical conclusion with the introduction of kernel-level anti-cheat. That has proven to be insufficient, unsurprisingly, and, given enough time, the act of bypassing kernel-level anti-cheat will become commoditized just like every other anti-cheat prior.
No. The same way piracy has been diminished in the mainstream by years of lawsuits and jail time against the loudest, most available sources, the strongest anti-cheats have suppressed the easiest and cheapest paths to cheating in AAA games. Piracy hasn't gone away, but the number of people doing it peaked last decade.
Anti-cheat makers don't need to eliminate cheating completely; they just need to capture enough cheating (and ban unpredictably) that average people are mostly discouraged. As long as cheat creators have to scurry around in secrecy and guard their implementations until each one is caught, the "good" cheats will never be a commodity on mainstream, well-funded games with good anti-cheat.
Cheat creators have to do the hard hacking and put their livelihoods on the line; they make kids pay up for that.
> the environment isn't actively in an arms race against seatbelts.
I would beg to differ. In the US at least, there does seem to be a hidden arms race between safety features and the environment (in the form of car size growth)
I don't know why you brought up VAC as an example. It is a horrible AC, so bad that an entire service (FACEIT) was built to capitalize on it.
VAC is still a laughingstock in CS2, literally unplayable when you reach 15k+. Riot Vanguard is extremely invasive, but it's leaps and bounds ahead of VAC.
And Valve's ban waves long after the fact don't improve the player experience at all. CS2 is F2P, alts are easy to get, cheating happens in almost every single high-ranked game, and the player experience is shit.
That sounds like it does prevent cheating? But maybe doesn’t prevent ALL cheats. Or do you mean they work so poorly that it doesn’t make any difference at all?
It makes cheating harder and the timeline to a cheat product gets longer than the iteration speed of anticheat. Kind of like fancy locks don't prevent break ins, just take longer to pick and require more specialised tools.
They are wrong, though. Locks also stop people who would happily commit an opportunistic theft but who lack the necessary tools or skills, people who would trespass if they could retain some plausible deniability ("oops, I didn't see the signs" vs. "oops, I didn't realise I wasn't supposed to cut that padlock"), and so on.
The honest people are a larger group than the dishonest people.
And being real, the zero-day cheats are closely guarded, trickled out, and sold for high prices as other cheats get found out, so for AAA games the good cheats are priced out of most people's comfort zone, and anyone who attempts the lazy/cheap cheats is banned pretty quickly. A significant portion of the dishonest become honest through laziness or self-preservation. Only a select few are committed enough to dishonesty to put money and their accounts on the line.
Same way there are fewer murderers and thieves than there are non-murderers and non-thieves (at least in western countries).
I mean, it works by someone saying "look for DotaCheat4.exe" and it searches for it. That's basically it. Also, if your engine has the ability to be hooked into (ahem, GTA), it will detect that a process has been attached. It may do some memory scanning if the devs implemented the allocator from the SDK. What I'm saying is, it's a crapshoot out there whether the devs did or not. Executives use it as a blanket so as not to get sued: "We have anti-cheat." They can claim it was "circumvented" or whatever. They are all garbage: BattlEye, EasyAntiCheat, Vanguard. If you don't know, here's a rundown.
Cheating still exists and will always exist in online games.
Sure, but you still have to make a serious attempt or the experience will be terrible for any non-cheaters. Or you just make your game bad enough that no one cares. That's an option too.
Other options exist, but they're not viable for real-time games like FPSes. I get it.
If you don't need real-time packets and can deal with the old-school architecture of pulses, there are things you can do on the network to ensure security.
You do this on real-time UDP too; it's just a bit trickier. Prediction and pattern-analysis discovery are really the only options thus far.
But I could be blowing smoke and know nothing about the layers of kernel integration these malware have developed.
> But I could be blowing smoke and know nothing about the layers of kernel integration these malware have developed.
Kernel level? The SOTA cheats use custom hardware that uses DMA to spy on the game state. There are now also purely external cheating devices that use video capture and mouse emulation to fully simulate a human.
Ok, they prevent known cheats that the company has found online behind some subscription site run in the basement in Jersey. True. They do raise the bar, but they aren’t the barrier.
"Anti-cheat" is a misnomer; it's much more about detecting cheats than preventing them. For people who are familiar with how modern anti-cheat systems work, actually cheating is the easy part; remaining undetected is the challenge.
Because of that, usermode anti-cheat is definitely far from useless in Wine; it can still function insofar as it monitors the process space of the game itself. It can't do a ton to ensure the integrity of Wine directly, but usermode anti-cheat running on Windows can't do much to ensure the integrity of Windows directly either, without going the route of requiring attestation. In fact, the last anti-cheat software I attempted to mess with (circa 2016, to be fair) could still be worked around by detouring the Windows API calls themselves, to the extent that you can. (If you're somewhat clever it can be pretty useful, and it has the bonus of being much harder to detect.)
The limitation is obviously that inside Wine you can't see most Linux resources directly using the same APIs, so you can't go and try to find cheat software directly. But let's be honest, that approach isn't really terribly relevant anymore since it is a horribly fragile and limited way to detect cheats.
For more invasive anti-cheat software, well, we'll see. But the fact that Windows is closed source hasn't stopped people from patching Windows itself or writing their own kernel drivers. If that really were a significant barrier, Secure Boot and TPM-based attestation wouldn't be on the radar for anti-cheat vendors. Valve, however, doesn't seem keen to support this approach at all on its hardware, and if that forces anti-cheat vendors to go another way, it's probably all the better. I think the Secure Boot approach has a limited shelf life anyway.
I remember reading that Microsoft is trying to crack down on kernel level anti-cheats. Just like anti-virus, they mess with the operating system on a deep level, redirecting/intercepting API calls, sometimes on undocumented and unstable internal APIs.
Not only does this present a huge security risk, it can break existing software and the OS itself. These anti-cheats tend not to be written by people intimately familiar with Windows kernel development, and they cause regressions in existing software which the users then blame on Windows.
That's why Microsoft did Windows Defender and tried to kill off 3rd party anti-virus.
Apple has gone a similar way by effectively killing kernel extensions for the same reasons. In theory all the kernel-extension use cases have been replaced with "System Extensions", but of course it's not the same.
The basic explanation is that it prevents binaries that are not signed by default from being loaded during the boot process. It only restricts the boot process in the UEFI stage. If an executable has been modified, then it will not load under Secure Boot. Technically there is nothing stopping you from modifying, say, winload.efi, signing it with your own key, and adding that key to your BIOS keystore so that it passes Secure Boot checks while Secure Boot stays enabled.
I think the biggest thing is that the anticheat devs are using Microsoft's CA to check whether your EFI executable was signed by Microsoft. If that's the case, then it's all good and you are allowed to play the game you paid money for.
I haven't tested a self-signed Secure Boot setup with Battlefield 6; I know some games literally do not care if you signed your own stuff, only that Secure Boot is actually enabled.
edit: Someone else confirmed they require the TPM to be enabled too, meaning yeah, they are using remote attestation to verify the validity of the signed binary.
Disclaimer: This is only an educated guess based upon public info. Also, it's impossible to make something truly unspoofable, but it isn't that hard to raise the bar for spoofing pretty high.
There are two additional concepts built upon the TPM and Secure Boot that matter here, known as Trusted Boot [1,2] and Remote Attestation [2].
Importantly, every TPM has an Endorsement Key (EK) built into it, which is really an asymmetric keypair, and the private key cannot be extracted through any normal means. The EK is accompanied by a certificate, which is signed by the hardware manufacturer and identifies the TPM model. The major manufacturers publish their certificate authorities [3].
So you can get the TPM to digitally sign a difficult-to-forge, time-stamped statement using its EK. Providing this statement along with the TPM's EK certificate on demand attests to a remote party that the system currently has a valid TPM and that the boot process wasn't tampered with.
Common spoofing techniques get defeated in various ways (a rough sketch of these checks in code follows the list):
- Stale attestations will fail a simple timestamp check
- Forged attestations will have invalid signatures
- A fake TPM will not have a valid EK certificate, or its EK certificate will be self-signed, or its EK certificate will not have a widely recognized issuer
- Trusted Boot will generally expose the presence of obvious defeat mechanisms like virtualization and unsigned drivers
- DMA attacks can be thwarted by an IOMMU, the existence/lack of which can be exposed through Trusted Boot data as well
- If someone manages to extract an EK but shares it online, it will be obvious when it gets reused by multiple users
- If someone finds a vulnerability in a TPM model and shares it online, the model can be blacklisted
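To make the first three bullets concrete, here's a minimal verifier sketch. Everything about the message format (raw quote bytes, a detached signature, a PEM EK certificate, a client-supplied timestamp, an RSA EK) is invented for illustration; a real TPM 2.0 flow uses TPM2_Quote structures, verifier-chosen nonces, and an attestation key derived from the EK rather than the EK signing directly.

    # Toy verifier for the checks above; the message format is hypothetical.
    import time
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    MAX_AGE_SECONDS = 30  # assumed freshness window

    def verify_attestation(quote: bytes, signature: bytes, ek_cert_pem: bytes,
                           timestamp: float,
                           manufacturer_cas: list[x509.Certificate]) -> bool:
        # Stale attestations fail a simple timestamp check.
        if abs(time.time() - timestamp) > MAX_AGE_SECONDS:
            return False
        # A fake TPM's EK certificate won't chain to a recognized
        # manufacturer CA. (A real verifier walks the whole chain and
        # checks revocation; this only matches the direct issuer.)
        ek_cert = x509.load_pem_x509_certificate(ek_cert_pem)
        if not any(ca.subject == ek_cert.issuer for ca in manufacturer_cas):
            return False
        # Forged attestations have invalid signatures over the quote
        # (assumes an RSA EK with PKCS#1 v1.5 padding).
        try:
            ek_cert.public_key().verify(signature, quote,
                                        padding.PKCS1v15(), hashes.SHA256())
        except Exception:
            return False
        # Remaining checks (Trusted Boot log contents, EK reuse across
        # accounts, blacklisted TPM models) would happen at a higher layer.
        return True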
Even so, I can still think of an avenue of attack, which is to proxy RA requests to a different, uncompromised system's TPM. The tricky parts are figuring out how to intercept these requests on the compromised system, how to obtain them from the uncompromised system without running any suspicious software, and knowing what other details to spoof that might be obtained through other means but which would contradict the TPM's statement.
Perhaps. I have yet to experience anything like what the older games had, though.
It might just be the game, too. I do think the auto-aim is a bit strong, because I feel like I make aimbot-like shots from time to time. And depending on the mode, BF6 _wall hacks for you_ if there are players in an area outside of where they are supposed to be defending. I was pretty surprised to see a little red floating person overlaid behind a wall.
Anticheat devs could REALLY benefit by having some data scientists involved.
Any player responding to in-game events (enemy appeared) with sub-80ms reaction times consistently should be an automatic ban.
Is it ever? No.
Given good enough data, a good team of data scientists would be able to make a great set of rules using statistical analysis that effectively bans anyone playing at a level beyond human.
In the chess-like game of FPS that is CS, even a pro will make the wrong read based on their team's limited info about the game state. A random wallhacker making perfect reads with limited info over several matches IS flaggable... if you can capture and process the data and compare it to (mostly) legitimate player data.
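As a toy illustration of the kind of rule being proposed (the thresholds and sample sizes are made up, and extracting clean per-engagement reaction times is the genuinely hard part, as replies below point out):

    # Toy consistency flag: never ban on one fast reaction, only on a
    # pattern no human produces. All thresholds are illustrative guesses.
    from statistics import stdev

    HUMAN_FLOOR_MS = 80.0   # assumed floor for human reaction time
    MIN_SAMPLES = 200       # demand many engagements before judging

    def looks_inhuman(reaction_times_ms: list[float]) -> bool:
        if len(reaction_times_ms) < MIN_SAMPLES:
            return False
        share_fast = sum(t < HUMAN_FLOOR_MS
                         for t in reaction_times_ms) / len(reaction_times_ms)
        # Humans sometimes "react" instantly by luck (pre-fires); cheaters
        # do it consistently, with suspiciously little variance.
        return share_fast > 0.5 and stdev(reaction_times_ms) < 15.0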
> Any player responding to in-game events (enemy appeared) with sub-80ms reaction times consistently should be an automatic ban.
It's really much more nuanced than that. Counter-Strike 2 has already implemented this type of feature, and it immediately produced some clear false positives. There are many situations where high-level players play in a predictive, rather than reactive, manner. Pre-firing is a common strategy that will always look indistinguishable from an inhuman reaction time. So is tap-firing at an angle that you anticipate an opponent may peek you from.
You must've missed the part where I spoke of consistency?
I've played at the pro level. Nobody pre-fires with perfect robotic consistency.
I don't care if it takes 50 matches of data for the statistical model to call it inhuman.
Valve has enough data that they could easily make the threshold for a ban something like "10x more consistent at pre-firing than any pro has ever been", with high confidence borne out over many engagements in many matches.
Then you've immediately made the cheater play worse than the best players to blend in with them. Mission accomplished: cheater nerfed significantly. You won't even know they're doing it.
Good! That's a much better situation than the one we are in. There is a limit to how much damage a good legit player can do to the average player's experience. Just the psychological damage a blatant or rage hacker does is immense: it kills your motivation to play, makes you question others, etc.
There's a well-analyzed video of a pro player streaming who got temporarily banned for something like this. It might not even have been pre-fire, but post-fire at a different enemy retreating at the same position.
Valve needs to tweak the model so that it requires a higher confidence level before a ban, and to reduce false positives in their data-capture methods. This was a mistake, but it doesn't kill the idea.
We used to track various timings in some of our games to detect cheating. Cheaters find out and change their cheat engines to perform within plausible human reactions. Which is a benefit - now the cheating isn't obvious to everyone, but it still happens. I don't know if you could sprinkle data scientist dust on the problem and come up with a viable cross-game solution though.
Good! That's actually one of the goals: reduce the advantage cheaters can gain to within human bounds. They can cheat to feel like a good player, but not a god.
Or perhaps the 0ms-80ms distribution of mouse movement matches the >80ms mouse movement distribution within some bounds. I'm thinking KL divergence between the two.
The Kolmogorov-Smirnov Test for two-dimensional data?
There are a lot of interesting possible approaches that can be tuned for arbitrary sensitivity and specificity.
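A toy version of both suggestions, assuming per-player samples of reaction timings have already been extracted (which, as the reply below notes, is where the real difficulty lives):

    # Toy two-sample comparison on hypothetical timing samples.
    import numpy as np
    from scipy.stats import entropy, ks_2samp

    def compare(sample_a_ms: np.ndarray, sample_b_ms: np.ndarray):
        # Kolmogorov-Smirnov: nonparametric "same distribution?" test,
        # no binning required.
        ks_stat, p_value = ks_2samp(sample_a_ms, sample_b_ms)
        # KL divergence needs discrete distributions: bin both samples on
        # a shared grid, then smooth so no bin is exactly zero.
        bins = np.linspace(0.0, 500.0, 51)
        p, _ = np.histogram(sample_a_ms, bins=bins)
        q, _ = np.histogram(sample_b_ms, bins=bins)
        p = (p + 1e-9) / (p + 1e-9).sum()
        q = (q + 1e-9) / (q + 1e-9).sum()
        return ks_stat, p_value, entropy(p, q)  # entropy(p, q) = KL(p || q)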
Throwing in ML jargon and going straight to modelling before understanding the problem reduces your credibility as a data scientist in front of engineers and stakeholders.
As always, one of the most difficult parts is getting good features and data. In this case one difficulty is measuring and defining the reaction time to begin with.
In Counter-Strike you rely on footsteps to guess whether someone is around the corner, and you start shooting before they come into view. For far-away targets, lots of people camp at specific spots and often shoot without directly sighting anyone, just anticipating someone crossing; the hit rate may be low, but it's a low-cost thing to do. Then you have people not hiding too well and showing a toe, or someone pinpointing the position of an enemy based on information from another player. So the question is: what is the starting point from which you measure the reaction?
Now let's say you successfully measured the reaction time and applied a threshold of 80ms. Bot runners will adapt and sandbag their reaction times, or introduce motions that make mouse movements harder to measure, and the value of your model is now less than the electricity needed to run it.
So much for the proposal to solve the reaction-time problem with KL divergence. Congratulations, you just solved a trivial statistics problem to create very little business value.
Appreciate the feedback, you're right - armchair speculation is different than actual data science. Without actual data to examine, we're left with the latter and that can still be a fun exercise even if it doesn't solve any business problem. We're here to chitchat and converse after all.
Yeah, apologies if it was too harsh. I was more irked by someone else who kept trying to assert that it's an easy problem, and I conflated that with your display of raw curiosity, which is something I don't wish to discourage.
Cheaters don't have to play like normal people to avoid detection. They just have to make it expensive to police them. For example, the game developer may be afraid of even a 10% false-positive ban rate, and as a result won't ban anyone except perhaps a small number of clear-cut cases.
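There's simple base-rate arithmetic behind that fear. With made-up but plausible numbers, false positives can swamp true positives even for a seemingly accurate detector, because cheaters are a small minority of players:

    # Base-rate arithmetic (all numbers invented for illustration).
    players = 100_000
    cheater_rate = 0.02         # assume 2% of players cheat
    detection_rate = 0.95       # detector flags 95% of cheaters
    false_positive_rate = 0.10  # and wrongly flags 10% of legit players

    cheaters = players * cheater_rate
    legit = players - cheaters
    print(f"flagged cheaters:  {cheaters * detection_rate:.0f}")      # 1,900
    print(f"flagged innocents: {legit * false_positive_rate:.0f}")    # 9,800

At those rates most bans would hit innocent players, which is why a developer would only act on the extreme tail.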
Yes, the current state is that cheaters play in ways distinguishable from humans. But my point was more that, if we end up with a system under which cheating is only viable when it stays equivalent to a good player, then it just feels like playing against good players. Which, to me, feels like mission accomplished.
This is one of the cases where ML methods seem appropriate.
Most cheaters are playing well outside of human limits and doing huge amounts of damage to the legitimate player experience. A 10% safety margin beyond human play sounds reasonable. A world where cheaters can only play 10% better than humans is a far better world than the one we are in at the moment.
Strong disagree. I play a lot of casual CS, and the number of extremely poor / new / young players using rudimentary cheats and performing far below average is huge. Most players don't watchfully spectate the bottom fraggers in the lobby, but if you do, the number of them brazenly using wallhacks is quite high.
These players aren't using aimbots/triggerbots (or if they are, they don't understand the gunplay and try to shoot while running), and they may not even understand wall penetration, so their reaction times wouldn't look abnormal at all. From the data, they would likely still have below-average reaction times.
Even though they are not performing well, their presence still massively alters the gameplay for legitimate players. For one, lurking becomes a pointless endeavor. You're better off rushing wildly than attempting any sort of stealth.
Why not? As long as there are players, some of them will also want to be admins. Maybe you mean that commercial administration is not scalable for games with a fixed price? Sure, but give the community the option to manage (rent) servers on their own and they will solve it themselves.
It's not even an option in most titles, and the industry as a whole has moved away from such hosting models, partly to ensure players receive a consistent and fair experience. Community servers were rife with admin abuse.
It's okay if you haven't played an online game in 20 years, mate.
Like another commenter mentioned, I think that only works for a specific cheat (engine), as long as they don't adjust (and randomize more, for example). If it could be solved with some statistics, I think it would have been done already. I ain't a statistician, though, but if you feel confident, there is quite some money in it if you find a real-world solution.
>Can you define what "reacting" means exactly in a shooter
A human can't really, which is why you need to bring in ML. Feed it enough game states of legit players vs known cheaters, and it will be able to find patterns.
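A hand-wavy sketch of that supervised setup, assuming a labeled dataset already exists (per-player features, cheater yes/no); the feature extraction this glosses over is exactly the hard part debated above:

    # Supervised sketch; "features.npy"/"labels.npy" are hypothetical
    # stand-ins for a labeling pipeline that doesn't exist here.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    X = np.load("features.npy")  # (n_players, n_features): reaction stats,
                                 # crosshair placement error, pre-aim angles...
    y = np.load("labels.npy")    # 1 = confirmed cheater, 0 = presumed legit

    # A false positive is a banned innocent, so score on precision and only
    # act on predictions far above the decision threshold.
    clf = GradientBoostingClassifier()
    print(cross_val_score(clf, X, y, cv=5, scoring="precision").mean())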
Yeah, that's why you need a data scientist or two to figure that stuff out. It's a solvable problem, but you're not going to get solutions instantly for free in the reply section of HN.
But in the reply section you can read that it has been tried in reality, with less success than in theory. And if you do see a working solution, you don't need to tell me; you can market it yourself.
/r/rust, the subreddit for the Rust language, regularly (every 1-2 days at most) gets posts meant for /r/playrust, the subreddit for the Rust game. I genuinely don't know how people manage to get as far as posting without noticing where they are.
It’s probably because the “create a Reddit post” form doesn’t require you to even visit the subreddit you are posting to. It DOES show you the rules/sidebar of the subreddit you are about to post to (for /r/rust it includes a link to /r/playrust for the gamers) but apparently many aren’t seeing that.
"Banner blindness" applies to the rules/sidebar. The user sees it, notices it's not what they're looking to interact with, and ignores it. The same thing happens for modal dialogues where the user will click whatever button makes the message go away without bothering to read the message, only the button text.
Likely a case where Google figured out which one you meant through the telemetry of what you clicked on and how you refined your search, now that personalization is automatic. In my case, I get four regular results, which are the financial standard, the programming language, the wikipedia page for the programming language, and an ISP; then I get a "top stories" block that is all about the singer.
It's trickier for the sibling comment about Rust, where either one could be valid.
It's telling that Valve uses a user-space anti-cheat (VAC) for Counter-Strike 2, but the competitive community overwhelmingly rejects it and opts to use a third-party, Windows-only kernel-mode anti-cheat (FACEIT).
I think even the "Major" tournaments that are officially sanctioned and sponsored by Valve, though organized by third parties, usually run on FACEIT or similar.
Anti-cheats are as much a marketing ploy as they are actual anti-cheats. People believe everyone is cheating, so it must be true. People believe nobody bypasses the FACEIT anti-cheat, so it must be true. Neither of those is correct.
Riot revels in this by marketing their anti-cheat, but there are always going to be cheaters. And sooner or later we will have vulnerabilities in their kernel spyware. I'd much rather face a few cheaters here and there (which is not as common as people make it out to be on high trust factor).
You think tournament organizers or pro players know the first thing about anti cheats? They buy the marketing just like everybody else.
The marketing works because online games get destroyed by cheats. Losing in online games can be full of “feel bad” moments, even without cheaters (network issues, cheesy tactics, balance issues). To think that your opponent won because they outright cheated just makes you wanna quit.
I’ve seen so many players saying “look you can own my entire pc just please eliminate the cheating.”
It would be great to see more of a web of trust thing instead of invasive anti cheat. That would make it harder for people to get into the games in the first place though so I don’t know if developers would really want to go that way.
To me the "web of trust" element frankly seems like the only viable solution. And in fact, it's almost here already: https://playsafeid.com/
I predict that Hacker News in particular will dislike using facial-recognition technology to enable permanent ban-hammers, but this neatly solves 95% of the problem in a simple, intuitive way. The approach has the capacity to revitalize entire genres, and there's lots of cool stuff you could potentially implement once you can guarantee that one account = one person.
The marketing works because of what I said: people are dumb.
Anyone that's not dumb will know (maybe after the heat of the moment) why they lost, but the vast majority of people will blame anything they can instead. Teammates, lag, the developers, etc. Cheating is merely one of these excuses.
> I’ve seen so many players saying “look you can own my entire pc just please eliminate the cheating.”
This entire idea is so dumb it makes my head hurt. You can't eliminate bad actors no matter how hard you try. It's impossible in the real world.
All these "if only we could prevent X with more surveillance/control" ideas go up in flames as soon as reality hits. Even if a single person bypasses it, we can question everything. Then all we're left with are these surveillance systems that are then converted into pure data exfiltration to sell it all to the highest bidder (assuming they weren't doing this already).
I applaud Valve for not going down the easy route of creating spyware and selling it as "protection".
Cheating is a very real problem in most competitive matchmade video games. The fact that you think that this is an "excuse" conclusively indicates that you don't actually have experience with them and that you have absolutely no idea what you're talking about.
> This entire idea is so dumb it makes my head hurt. You can't eliminate bad actors no matter how hard you try. It's impossible in the real world. ... Even if a single person bypasses it, we can question everything.
This is clinically insane. 99.999% of people, including most of those two-sigma below the mean in terms of intelligence, correctly recognize how stupid of an argument this is, and that eliminating the majority of crime/cheating is absolutely a huge victory that is worth sacrificing for.
Think about that - some of the dumbest people in our society realize that the argument "if we can't stop every criminal/cheater, then there's no point in trying" is bad. What does that make you?
(it's also abundantly clear that you have zero experience in finance or security, either, because anyone competent in those fields can tell you exactly what it means to impose costs on an adversary and why your argument is factually incorrect)
So ironic that a Microsoft game supports it compared to others who should be incentivised to support players on the Steam Deck, especially when they support platforms like the Nintendo Switch despite vastly different architecture and capabilities.
Arc Raiders is a great example of a modern and popular multiplayer game that works with Proton. I haven't heard about it having a problem with cheating.
Marvel Rivals, Age of Empires 2 DE, Path of Exile 1/2, Last Epoch, Fall Guys are other such examples. In fact, Marvel Rivals even explicitly mentioned Bazzite in one of their changelogs! I can't recall an instance when a major game name-dropped a (relatively) minor Linux distro like that.
I think a big portion of that is the rather poorly made anti-tamper solution they are using, called 'Theia'. Most cheat developers are too unintelligent to correctly reverse-engineer this kind of binary obfuscation.
Valve is the only company I'd let inject anti-cheat software directly into my veins if it meant I could play CS and be sure others were not cheating haha.
As a former cheat developer, I think it is impossible, since it digs into some very Windows-specific internals. For example, some anti-cheats use PsSetCreateProcessNotifyRoutine and PsSetCreateThreadNotifyRoutine to strip process-handle permissions, and those things can't be emulated well; there is simply nothing in the Linux kernel nor in the Wine server to facilitate them yet. What about having a database of games and anticheats that do that? And what if the anticheat also has a whitelist of apps allowed to "inject" themselves into the game process? Those also need to be handled and dealt with.
Plus, there are some really simple side-channel exploits: if a whitelisted app has vulns, you can grab a full-access handle to your anticheat-protected game, rendering that kernel-level protection useless. Granted, that means an external cheat rather than a full-blown internal cheat. Internal cheats carry way more risk but are also way more rewarding, enabling fine-grained game modification, or even, if some 0-days are found in the game's network stack (say a buffer overflow or double-free), sending malicious payloads to other players and doing RCEs. (It is still possible to inject an internal cheat from an external cheat, using techniques such as manual mapping/reflective DLL injection, which effectively replicates the PE loading mechanism; you then hijack some execution routine at some point to call your injected code, whether by creating a new thread, hijacking an existing thread's context, APC callback hijacking, or even exception-vector hijacking. In general you can hijack any kind of control flow, but anticheat software actively looks for that "illegal" stuff in memory, raises a red flag, and bans you immediately.)
From what I've seen over the years, the biggest problem for anticheat on Linux is that there is too much liberty and freedom, and anticheat/antivirus is an antithesis to liberty and freedom. Anticheat wants to use strong protection mechanisms borrowed from antivirus techniques to provide a fair gaming experience, at the cost of lower framerates, more processing overhead, and sometimes BSODs.
And I know it is very cliche at this point, but I always love to quote Benjamin Franklin: "Those who would give up essential liberty to purchase a little temporary safety, deserve neither liberty nor safety." I therefore keep Windows only to play games lately; I switched to a new laptop, installed CachyOS on it, and transferred all my development stuff over. You could basically say my main PC at home is now a more "free" Xbox.
Speaking of Xbox, Microsoft has even stricter control over games there: one of the anticheat techniques, HVCI (hypervisor-protected code integrity), built on VBS, is straight out of Xbox tech. It uses Hyper-V to isolate the game process from the main OS, making the Xbox effectively impossible to jailbreak. On Windows it prevents some degree of DMA attack by leveraging the IOMMU and encrypting memory contents beforehand, to make sure they are not visible to external devices over the PCIe bus.
That said, in other words, it is ultimately all about the tradeoff between freedom and control.
I think if Linux gaming becomes popular, someone may come up with a solution where you run a native Linux kernel-mode anticheat that somehow connects to the Wine-hosted game.
I'm not sure how I feel about that, but it's what I think will happen.
Companies will go where the money is. If Valve enables, say, EA to have their yearly franchise and in-game stores on mobile devices, they will find a way.
I honestly don't know why so many people say that anti-cheat with Proton or Steam Machines won't work. SteamOS is an immutable Linux; especially on their own Steam Machine, they can enable Secure Boot and attest that you are booting the verbatim SteamOS EFI boot file, kernel, and correct system fs image, all signed by Valve, just as Battlefield 6 does on Windows (relying on Secure Boot). That would still allow you to install other OSes on your Steam Deck/Steam Machine, but those would fail the anticheat attestation. I personally see Valve's hardware push as being, in particular, so that they can support anti-cheat on Linux.