Hacker News | hebejebelus's comments

It's quite hard to know if the DAC is actually decent quality though. I've bought two from Amazon (admittedly at the low price point) and both of them have line noise - one of them even has a ground loop buzz, which surprised me, since it's powered by USB-C. I'm unconvinced that any of the higher price points (that are still within my budget) aren't just these cheapo ones in slightly fancier cases.

My old TV had real analogue out for speakers and it really did sound a lot better than what I've been getting through TOSLink and this cheapo DAC. Same Hi-Fi and speakers. I'm sure the problem could be solved with a more expensive DAC, but which one? How could I know?

I find this is one of those areas where it's quite hard for the uninitiated to see through the cloud of 'audiophiles' insisting that you must buy gold cables or your audio will sound like garbage, and still end up with decent-quality audio.


As a broad concept: Cheap, high-function DACs definitely exist.

For instance: Apple-produced headphone adapters for iPhones are inexpensive -- like $10 or so. And inside of that diminutive adapter is buried a whole USB DAC, with a headphone amplifier. It's so seamless and low-cost that some folks think I'm crazy when I tell them this, but they measure great and also work great. (They work great as DAC/headphone amps for PCs, too. Android, not so much: It works, but there's a bug [that will probably never be fixed] relating to volume control and low output level.)

Anyway, I identify as an audiophile. I've spent several decades playing with this stuff, sometimes well beyond the level of "serious hobby." I've made some money doing audio stuff. I've also spent some time in the studio, and in front of the stage, making things sound good.

And I'm practical. I promise you that I can wire up a high-end stereo system with metal coathangers that will sound indistinguishable from something connected using only solid silver Kynar-insulated wire that is jacketed in cloth woven by Benedictine monks from the first cutting of wool from a single virgin albino Bolivian alpaca (for "purity") that has been dyed and imbued with post-civet Kopi Luwak (for "balance"), in any correctly-controlled blind A/B/X test.

Trust your senses. And by that, I mean: If it looks like bullshit, and it smells like bullshit, and it tastes like bullshit, then spit it the fuck out. :)

For sorting inexpensive products that actually work from those that have practical issues like hum or noise: That's what Amazon reviews are for.

But there are some very nice things for sale that aren't stupid-expensive boutique items. Schiit, for example, builds their own designs in the US and charges Buick prices for them instead of Bugatti prices -- and their website shows photos of what the devices look like on the inside, too.

...anyway, ground loops are usually real. Professionally, I've encountered them most often in residential environments when converting a customer's television into any manner of home theater. 100% of the time I've discovered this, it was because some bonehead grounded the cable TV wire or the satellite dish to some ground that was separate and distinct from the home's electrical ground -- which should never, ever happen.

The loop would show up when we introduced the first bit of gear that had a 3-prong plug into their mix of things that previously only had 2-prong plugs. Adding the AV receiver, the subwoofer, or whatever tied the electrical ground to the stupid ground and current would flow between them, producing noise.

(And USB-C is ground-referenced, so keep that in mind. Toslink, though? That's fiber optic, and thus galvanically isolated.)


I think it's an extreme example of not-invented-here syndrome. In many ways that leads to interesting novelties, and in others it leads to not having undo/redo until the 2010s.

In fairness the problem is not easy! JupyterLab only got document-wide undo/redo in 2021 per https://jupyterlab.readthedocs.io/en/3.1.x/getting_started/c... .

Mathematica has been around since the 80s; Jupyter, how long? Far less.

"Fun" fact: This is partially because Wolfram owned a patent on the "code notebook" invention. Once it expired, Jupyter was created.

Just before I stopped using Mathematica they came out with that headless kernel, and I had wondered if you could spin it up on a Kubernetes cluster or something.

I do notice that they have an "Application Server" for Kubernetes, which is pretty curious: https://github.com/WolframResearch/WAS-Kubernetes (though not updated in over a year)


One of the more interesting things about WL is that Stephen Wolfram is really a genuine daily user of the software his company makes and he's the final say on what ships and in what form. They used to livestream his meetings reviewing potential new features on Youtube, an interesting watch. It didn't make me want to work there but I did feel like he cared very much. Quite Jobsian, dare I say.

> Quite Jobsian, dare I say.

Good for product: not so good for people.

I am told that he gave a great deal of agency to people he trusted, though.

In my career, I ran into two [brilliant] individuals that had, at one time, worked with Jobs.

They both hated him.


Man, I miss Wolfram Language. Once you've twisted your brain a little to grok its usage, it's such an incredibly high-value tool, especially for exploration and prototyping. I saw it more as a do-anything software tool for researchers rather than as a language aimed at programmers, so I put on a researcher hat and tried to forget everything I knew as a professional programmer, and had a few memorable seasons with it around 2016-2020. I remember calculating precisely which days of the year would cause the sunlight to pass through a window and some glass blocks in an internal wall, creating a beautiful light show indoors. It only took a couple of minutes to get a nice animated visualisation and a calendar.

Nowadays I'd probably just ask Claude to figure it out for me, but pre LLMs, WL was the highest value tool for thought in my toolbox.

(Edit: and they actually offer perpetual licenses!)


The power of the language came from its concise syntax (I liked it more than classical Lisps) combined with the huge library of Mathematica. Where Python is "batteries included", Mathematica is "spaceship included".

If this were open sourced, it would have the potential to severely change the software/IT industry. As expensive proprietary software, however, it is doomed to stay a niche product mainly for academia.


> If this was open sourced, it had the potential to severely change the software/IT industry.

As an engineering undergrad I had a similar feeling about Matlab & Mathematica.

Matlab especially had 'tool boxes' that you bought as add-ons to do specific engineering calcs and simulations and it was great, but I almost always found myself recreating things in python just because it felt slightly more sane to handle data and glue code.

Pandas and Matplotlib and SciPy all used via an ipython notebook were almost a replacement.


As discussed on another thread, the outcome is tools poorly glued together, due to the lack of roadmap and polish that commercial software usually provides; instead you get volunteers coming and going, each caring only for their own little itch.

I’m not sure about that. I used to use LabView and its various libraries often. The whole thing felt scattered and ossified. I’d take a python standard library any day.

Yet most EEs would rather use a graphical tool like LabView or Simulink.

Not everyone is keen on doing scripting from the command line with vi.


I once interned at a lab that used a piece of surely overpriced hardware that integrated with Simulink. You would make a Simulink model, and you’d click something and the computer would (IIRC) compile it to C and upload it to the hardware. On the bright side, you didn’t waste time bikeshedding about how to structure things. On the other hand, actually implementing any sort of nontrivial logic was incredibly unpleasant.

Hahaha, yep, so much clicking. After one day my finger was actually sore.

Maybe it’s different for those actually working in the profession and n=1 but in my (many) years of studying EE I never used these tools even once.

No, not really. Depending on the application, C++ or Python has been the language of choice in the lab. LabView was used because it was seen as easy for making UIs for operators in production facilities, but even that was a regrettable decision. We ended up rewriting the LV business logic in C# and importing it as a lib into a LV front end.

To be fair, "sundry tools poorly glued together" describes CAS and symbolic computation software in general, including Maple or Mathematica. It's surprisingly difficult to put a proper formal foundation (guaranteeing the absence of "wrong" or even outright meaningless results) even on very basic symbolic manipulations.

Commercial software polish is lipstick on a pig. A pig that will never be anything else and will eventually die as a pig.

Ugly OSS software at least has the potential to grow internally. Long-lived commercial software is a rotting carcass with a fresh coat of paint every now and then.


Yet the Year of XYZ software seldom comes, and the usual cheering for tools like Blender often forgets their origins as commercial products with an existing userbase.

Someone has to pay the bills for the development effort, and when it is based on volunteer work, it is mostly followers and not innovators.


There's nothing wrong with commercial software being the origin. What's a crime is that it can stay commercial. Source code should enter public domain in a decade at most.

> What's a crime is that it can stay commercial. Source code should enter public domain in a decade at most.

In many cases, people are free to write their own implementation. Your claim that "source code should enter the public domain in a decade at most" means that every software vendor would be obliged, after some time, to hand out their source code, which is a very strong thing to ask for.

The true crime is the laws that in some cases make such an own implementation illegal (software patents, prohibitions on reverse engineering, ...).


> every software vendor shall be obliged after some time to hand out their source code,

Obviously. Since software is as much vital to the modern world as water, making people who deal with it disclose implementation details is a very small ask.

Access to the market is not a right but a privilege. If you want to sell things we can demand things of you.


I think commerce between individuals is a right.

Infringing on that should be justified in terms of protecting the rights of those involved, such as ensuring the quality of goods, enforcement of reasonable contract terms and such. We are involved in the process as participants in the market, and that’s the basis of any legitimacy we have to impose any rules in the market. That includes an obligation to fair treatment of other participants.

If someone writes notes, procedures, a diary, software etc for their own use they are under no obligation to publish it, ever. That’s basic privacy protection. Whether an executable was written from scratch in an assembler or is compiled from high level source code isn’t anyone else’s business. It should meet quality standards for commercial transactions and that’s it. There’s no more obligation to publish source than there is to publish design documents, early versions, or unpublished material. That would be an overreaching invasion of privacy.


So restrict the obligation to companies.

On what justification? You just want to take their stuff, because?

People shouldn’t lose their rights to what they own, just because they do so through a company.

I do think reasonable taxation and regulation is justifiable but on the understanding that it is an imposition. There is a give and take when it comes to rights and obligations, but this seems like overreach.


I see: you think actively preventing companies and individuals from interacting freely is the default, and that such interaction is a privilege to be granted?

Well, I wonder who you think has the right to deny others the freedom to cooperate economically by default, and then to grant "privileges" so people can work together.

--

Aside from that moral upside-down world, what you are describing is a steep limit on copyrights, with forced source, i.e. trade secret, reveals.

So you are removing the huge incentives that copyright creates. If software were always trivial to build, or cost very little to build, that would not be a problem. In real life, it would devastate software work, and we would all be poorer for it: companies, individual software developers, and users.


> Obviously. Since software is as much vital to the modern world as water, making people who deal with it disclose implementation details is a very small ask.

The analogy would be ever-so-slightly more accurate if you said "software is as much vital to the modern world as beverages".

It would also be more accurate if all water was free.

Neither of which is the case.


You must design your own hardware too, since you can't get the blueprints of commercial products.

Fortunately, hardware designs are routinely reverse engineered and cloned. Imagine a world where industrial designs were as hard to reverse engineer and clone in practice as software is. Global GDP would be 10% of what it is. The largest economies of the world owe the lion's share of their development to cloned designs.

Not everyone buys into the FOSS religion, especially when there are bills to pay and too many people feel entitled to leech on the work of others while being paid themselves -- or companies, for that matter.

Worse than lipstick on a pig is lipstick all the way down, with no pork, like the user interfaces coming out of Apple.

> As an expensive proprietary software however

It's $195/year for a personal license. And only $75/year for students. Their licensing model is pretty broad.


Well, that doesn't sound too bad. But it is a high enough barrier for Mathematica not to see widespread use.

I don't remember what the pricing has been throughout the years. But I do remember that for some of the time I couldn't really afford Mathematica. And the license I wanted was also a bit too expensive to justify for a piece of software that only I would be using within an organization.

Because it is also about enough other people around you not being able to justify the expense. And about companies not wanting to pay a lot of money for licenses so they can lock their computations into an ecosystem that is very small.

Mathematica is, in the computing world, pretty irrelevant. And I'm being generous when I say "pretty": I have never encountered it in any job or even in academia. People know of it. They just don't use it for work.

It would have been nice if the language and the runtime had been open source. But Wolfram didn't want to go in that direction. That's a perfectly fine choice to make. But it does mean that as a language, Mathematica will never be important. Nor will knowing how to program in it be a marketable skill.

(To Stephen Wolfram it really doesn't matter. He obviously makes a good living. I'm not sure I'd bother with the noise and stress coming from open sourcing something)


> And I'm being generous when I say "pretty": I have never encountered it in any job or even in academia. People know of it. They just don't use it for work.

To my knowledge, at least in academia, Wolfram (Mathematica) seems to be used quite a bit by physicists. It is also used in some areas of mathematics (though many mathematicians seem to prefer Maple). Concerning mathematical research, I want to mention that by now some open-source (and often more specialized) CASs also seem to have become more widespread, such as SageMath, SymPy, Macaulay2, PARI/GP or GAP.


You're right -- the theoretical particle physicists at my faculty were using Mathematica very heavily when I was still in academia and maintained a dedicated compute cluster for it.

They really did not appreciate the debugging experience, but maybe that's improved in 15 years. :)


I've been at a few universities and labs as a postdoc, and a Mathematica license always came either as part of the University or the department. It might not be relevant in some disciplines, but generally I assume it must be used a lot to warrant such broad licensing (it is a tool I use daily as a theoretical physicist).

In Maple sin(x) is "sin(x)", in Mathematica it's "Sin[x]", ewww

The Maple syntax may superficially seem easier but actually leads to more problems in practice. The point of the [ ] is that the argument of a function is logically distinct from algebraically grouping terms in an equation. Also, Mathematica is a camel-case language, since underscore is used for pattern matching, hence the capitalization of function names. Personally, I've found every little Mathematica design feature to be incredibly well thought out, logical, and consistently implemented over the whole of the language.

In my opinion, Wolfram/Mathematica is more consistent internally, while Maple is more consistent with the usual mathematical notation.

> while Maple is more consistent with the usual mathematical notation

I can't tell if you're saying that as if it's a good thing, or a bad thing.


It's not about good nor bad, but about the different trade-offs that these two CASs made. What is more important for you is something that you can only answer for yourself.

I actually loved this idea so much that in every language I make, I try to do the same. The point of it is that typing ( requires Shift, while [ does not. And you have no idea, until you have carpal tunnel syndrome, how much it hurts each time you type a (. While it's ugly, the hand thanks you for it.

> The point of it is that typing ( requires shift, while [ does not.

https://gitlab.freedesktop.org/xkeyboard-config/xkeyboard-co...

Now, I really could've used something like this on macOS…

Karabiner to the rescue https://genesy.github.io/karabiner-complex-rules-generator/#...


As everybody knows ...

I use(d) arch btw

Ironically, on ISO keyboards [ and ] need AltGr, so even more pain.

I definitely get the impression that Wolfram builds his tools primarily for himself, and is happy to let other people play with them because that way he gets money to pay for them.

That is not just the impression; that is exactly why. And actually, that is their strength. Back in the day, the whole of Apple was there to make software for Jobs, and look how awesome that turned out. Wolfram is trying to complete the work of Leibniz and create a universal calculus, a unifying language for symbolic computation, which is amazing.

While I'm not sure the particular price point is the biggest problem here, the student license pricing doesn't seem that great either. The language is hard enough to learn, and most students won't have time to figure out whether they want to buy it within a 15-day trial. They'd probably need half a semester at the very least, unless it's a required part of the curriculum. In the rare case where a student is already familiar enough to know they want it, four years at $75/year is $300; at that point they may as well just pay $390 for a perpetual personal license, so they can at least keep opening their files in the future.

That said, the parent was talking about it being expensive for use in industry. Personal and student licenses aren't relevant there.


It still creates a class of haves and have-nots which prevents forming a community.

Plus you buy a version of it, and then someone else is on another version, and you don't have the same features, and the tiny community is fragmented.


I’d like to use it sporadically, but they charge a lot for people in academia, and for my use case it’s simply not worth it.

I’m using xcas now, it’s working pretty well for my humble needs.


It's also free on raspberry pi

My thought is that licenses were similarly cheap for historical programming tools like Turbo Pascal and Visual Basic. My dad got me Turbo Pascal for my birthday, for $39, after reading about it in the Wall Street Journal.

But it seems like the proprietary languages have all withered, regardless of price. Even $195 for Mathematica is an obvious concession to this trend. I don't ever remember it being that cheap.

I could write an essay on the benefits of free tooling, but enough has already been written. I'll spare you the slop. ;-)


Didn’t they also bundle something for free with Raspberry Pi OS?

Ok fuck it. Bought the year!

There is of course a FOSS rewrite https://mathics.org

Actually, Wolfram Language is based on Tiny Caml. M-expression based though. Lisps are neat and cosy too.

That's exactly the same analogy I used to use, although I said "nuclear reactor included" - spaceship is better, it implies less danger and more expanded horizons!

I've had it installed on my laptop for over two decades, and I use it maybe once a year for actual work. Every time, it feels like cracking a walnut with a 500-ton press.

Mathematica is way, way under appreciated in industry, and even in the sciences.


Interesting. I have always felt I am missing out on not using tools like Mathematica or MatLab. I see some people doing everything using MatLab, including building GUI and DL models, which I found surprising for a single software suite, and - nowadays - one that is quite affordable (at least the home edition).

Mathematica seems a little pricey but maybe it would motivate me to learn more math.

I would love to read what non-mathematicians use MatLab, Mathematica, and Maple for.


Matlab and Mathematica are VERY different.

Matlab and Python are in the same ballpark. Easy syntax and large standard library. Matlab provides a lot more dedicated libraries for niche areas but the overall experience feels the same.

Mathematica doesn't really have a standard counterpart. Jupyter notebooks try to capture the experience but the support for symbolic expressions makes the Mathematica experience very different.


Matlab is fundamentally not a great language. It's not great at interacting with data that isn't already blocks of numbers, it's terrible for UI design, and even for matrix manipulation numpy is beating it nowadays. The main reason to use it is if you really need something in one of the toolboxes that doesn't have an open-source equivalent, or for Simulink. As a general data processing language it's far from the best default.
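As a concrete illustration of the matrix-manipulation point, here's the numpy equivalent of the classic Matlab backslash solve (a minimal sketch, not a benchmark; the matrix and vector are made up for illustration):

```python
# Solve the linear system A @ x = b, numpy's counterpart to Matlab's x = A\b.
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)   # Matlab: x = A\b
print(x)                    # [2. 3.]
```

For dense systems like this, `np.linalg.solve` uses an LU factorization under the hood, much as Matlab's backslash does.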

I'm a non-mathematician and I used it for lots of novel stuff - GIS, visualisations of all kinds, machine learning. The Wolfram Community staff picks is a great introduction into the varied things you can do: https://community.wolfram.com/content?curTag=staff%20picks

To be fair it was used a lot during my physics studies. I opted to use it afterwards for integrals and derivations, very powerful.

Yeah, I was one of those schmucks that used sympy / python instead of mathematica in my physics coursework. Policy was "mathematica is recommended and supported, but you can bring your own tools if you want to and can make them work."

In retrospect, doing the work in mathematica would have probably stretched my brain more (in a good way!) since it provides a different and powerful way of solving problems vs other languages...maybe I'll have to revisit it. Perhaps even try advent of code with it?

While python did get the job done, it feels like the ceiling (especially for power users) is so much higher in mathematica.


MatLab was taught and used extensively at my university, and has many strong sides and a fantastic standard library. We used it mainly for physics and robotics calculations. The licenses are (were?) prohibitively expensive outside of academia though. Hard to compete with free Python + NumPy and a larger talent pool.

I last used Mathematica for real in the SGI days and loved it. I know a ton has probably changed since, but I have to ask those who use it today: would you still use it for non-math-heavy (and even math-heavy) tasks when you have access to the wonderful world of Python and Jupyter/Polars, R, and similar?

Mathematica is awesome for weird, one-off tasks in fields that I'm unfamiliar with, since the documentation is excellent, and the functionality is really broad (so I don't need to figure out how to install a specialty program for every one-off task). But for fields that I'm experienced with or tasks that I'm planning on running frequently, I'll usually just use Python, since most of the Python libraries have more functionality and run quicker than Mathematica.

(Mathematica is of course much better than Python at symbolic math, but this isn't what you are asking about)
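For what it's worth, the symbolic side that parenthetical alludes to does exist in Python via sympy; here's a rough sketch of the kind of thing Mathematica's `D` and `Integrate` do natively (the specific expression is just an example):

```python
# Symbolic differentiation and integration with sympy.
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x) * sp.exp(x)

# d/dx [sin(x) * exp(x)]
deriv = sp.diff(f, x)

# Integrating the derivative recovers f (up to a constant),
# after simplification.
integ = sp.simplify(sp.integrate(deriv, x))

print(deriv)
print(integ)
```

It works, though in practice Mathematica's coverage of special functions and tricky integrals still goes well beyond sympy's.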


> Nowadays I'd probably just ask Claude to figure it out for me

Incidentally, Mathematica + LLMs make a great combination. If you take what is pretty much the biggest mathematical routine library in the world and combine it with interactive visualization tools, and then use an LLM to accelerate things, it becomes an incredible tool. Almost ridiculously powerful for trying things out, teaching, visualizing things, etc.

(I've been using Mathematica since 1992 or so, so I'm familiar with the language, but it's still so much faster to just tell Claude to visualize this or that)


Yes, I’m sure! I had mostly stopped using WL by the time ChatGPT came out and so my only experience of their integration is a lot of hallucinated syntax by GPT3. I’ve not been tempted to upgrade to the newer, more LLM-integrated versions yet, particularly because I’ve been trying to cut down on expenses and spending more money on LLM api calls versus just using my existing Anthropic subscription for Claude code isn’t that appealing. I’ve also been going through a bit of a crisis of a lack of creativity and inventiveness and it’s hard to decide to spend money on an update when it doesn’t seem like I have any good ideas of what to do with it after I’ve spent the cash!

I bet that if they can integrate LLMs _really_ well (I’m not sure the chat driven notebook thing is necessarily the way) it’ll be a massive upgrade.


On my Max plan, Opus 4.5 is now the default model! Until now I used Sonnet 4.5 exclusively and never used Opus, even for planning - I'm shocked that this is so cheap (for them) that it can be the default now. I'm curious what this will mean for the daily/weekly limits.

A short run at a small toy app makes me feel like Opus 4.5 is a bit slower than Sonnet 4.5 was, but that could also just be the day-one load it's presumably under. I don't think Sonnet was holding me back much, but it's far too early to tell.


Right! I thought this at the very bottom was super interesting

> For Claude and Claude Code users with access to Opus 4.5, we’ve removed Opus-specific caps. For Max and Team Premium users, we’ve increased overall usage limits, meaning you’ll have roughly the same number of Opus tokens as you previously had with Sonnet. We’re updating usage limits to make sure you’re able to use Opus 4.5 for daily work. These limits are specific to Opus 4.5. As future models surpass it, we expect to update limits as needed.


It looks like they've now added a Sonnet cap which is the same as the previous cap:

> Nov 24, 2025 update:

> We've increased your limits and removed the Opus cap, so you can use Opus 4.5

> up to your overall limit. Sonnet now has its own limit—it's set to match your

> previous overall limit, so you can use just as much as before. We may continue

> to adjust limits as we learn how usage patterns evolve over time.

Quite interesting. From their messaging in the blog post and elsewhere, I think they're betting on Opus being significantly smarter in the sense of 'needs fewer tokens to do the same job', and thus cheaper. I'm curious how this will go.


wish they really bolded that part because i almost passed on it until i read the blog carefully

instant upgrade to claude max 20x if they give opus 4.5 out like this

i still like codex-5.1 and will keep it.

gemini cli missed its opportunity again; now money is hedged between codex and claude.


Very interesting! The one killer issue that jumps to mind is anti-cheat. I switched away from gaming on Linux via Proton to gaming on Windows because Battlefield 6's anti-cheat won't work under Proton. Many games are like this, particularly some of the most popular (Rainbow 6 Siege for instance). And BF6 made this decision only recently despite the growing number of Steam Deck players (and other players on linux - in fairness I don't think there would have been that many BF6 players on a handheld).

Edit: I specifically use a gaming-only PC. The hardware is used for nothing else. Hence, discussions of rootkits don't really bother me personally much and on balance I'd really rather see fewer cheaters in my games. I think it would be the same with any of these machines - anything Steam-branded is likely to be a 99% gaming machine and their users will only care that their games work, not about the mechanisms of the anti-cheat software.


I view it as Valve is doing me a favor by adding friction towards me installing a rootkit to play video games.

There have also been numerous anti-cheats that work well while running in userspace (EAC, BattlEye, etc.) and that have been enabled for Linux/Proton users (including by EA with Apex Legends at one point). A lot of the lack of support on Linux comes down to developers/publishers not wanting to enable it, not to technical reasons.


On the other hand, you can't play any of the older Battlefields due to cheating (not "is he cheating?" cheating; more like blatant "this guy is speedhacking and headshotting everyone" cheating that the server could easily detect if they cared about it).


Is that the same Battlefield that removed support for private servers?


There are hacks these days that sniff the PCIe bus with an FPGA to MITM the RAM, read out the game state, and render an overlay on top of the monitor.

It's a crazy arms race that I don't know even kernel-mode anti-cheat can compete with at the end of the day.

I think this shift away from community-led multiplayer is approaching a dead-end with respect to this hacking arms race.

Player bans and votekicks used to be so easy to do. And while there were some badmins, I argue it still resulted in an overall healthier multiplayer ecosystem.

Of course, we know this shift is so the developer can control the game more tightly for monetization purposes. But I think the end result of this is more harm than good.


This is an issue of critical mass. With the continued growth of SteamOS, the Steam Deck, and Linux as a gaming platform, eventually it will pull over support.


I have to wonder if it's possible to ever even guarantee something that can't be trivially bypassed on Linux - Windows, sure, it's possible with DMA, but it's damn hard. On Linux you could just compile a spoofed kernel or a DKMS module or something.


Look at Android: locked bootloader, no root, SELinux, and voilà.


It looks like Valve wants to avoid going down the road of an extremely locked down system like that. They even view the ability to load alternate OS's as a feature of their products.


They could offer both locked down signed software on top of their hardware and allow for bypass when the user wants to install their own thing. I prefer by default to have locked down signed chain of software bootstrapping but I do want to also have the ability to use my own.


It doesn't have to be bypassed. Those same anti-cheats used by many unsupported titles are enabled for some games and work fine on Linux. So you just have to give the developers some incentive to enable it for their titles. It is a choice made by game developers. Currently they don't see a market on Linux/Steam OS but if Steam Machines become popular, potentially they would be missing a market and decide to join in.


No, they don't work on Linux. They're borderline useless. The whole point of client side anti cheat software is to prevent players reading the game's memory or messing with the game's code. There's no practical way an anti cheat can stop someone on Linux because you can just compile a custom kernel that bypasses all the protections.

On Windows you can't do this, so you have to go through one of the known APIs that anti cheat software monitors or find exploits in kernel drivers to get in and poke at the game's address space. They also look for known vulnerable kernel drivers on boot and block loading the game if they find them.

Some anti cheats run on Linux, but they're borderline useless and trivial to bypass.

Unfortunately for anti cheat software to ever work on Linux would require signed and attested kernels and locked down OS software. Something that will never fly in the Linux ecosystem.


Game developers can ship an attested runtime (or hell, even an attested kernel) with the game and refuse to boot it unless the kernel passes some boot tests. Most Linux games already containerize their runtime anyway.

Locking down Linux totally is impossible, but the same very obviously goes for Windows and even macOS as well. Locking down a Linux runtime well enough to play online games seems trivial in my opinion. It's just a lot of work that would be better-spent preventing Windows hackers from pants-on-head insane DMA exploits.
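For what it's worth, even checking whether a Linux box booted with Secure Boot is straightforward. Here's a minimal sketch that reads the firmware's `SecureBoot` EFI variable from efivarfs; the path and GUID are the standard ones, but note this only proves Secure Boot is on, not that the kernel is attested, so treat it as illustrative:

```python
from pathlib import Path

# Standard EFI global-variable GUID; efivarfs exposes variables as
# <Name>-<GUID> files whose first 4 bytes are attribute flags.
EFI_GLOBAL_GUID = "8be4df61-93ca-11d2-aa0d-00e098032b8c"

def secure_boot_enabled(efivars="/sys/firmware/efi/efivars"):
    """Return True/False if the SecureBoot variable is readable, else None."""
    var = Path(efivars) / f"SecureBoot-{EFI_GLOBAL_GUID}"
    try:
        data = var.read_bytes()
    except OSError:
        return None  # non-EFI boot, or efivarfs not mounted
    # The payload follows the 4 attribute bytes; 0x01 means enabled.
    return len(data) >= 5 and data[4] == 1
```

An anti-cheat obviously can't trust this value coming from an untrusted kernel, which is the whole attestation problem in a nutshell.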


You can make a signed, read-only Linux installation and restrict your games to it. This would be like "support SteamOS but not Linux".

Or deliver the game in a container format, like snap or AppImage, to bypass most of the system.

Or demand the installation of a kernel driver like they do on Windows.

Or just give up on kernel-level anti-cheat, since they've been breached all the same, and Windows is restricting their power too.

Easy Anti-Cheat has a Linux version. Developers have to disable the support intentionally.


Is it not possible for someone to have Linux spoof that it's Windows to the game?


I sincerely hope it doesn't happen then. I'd rather have game developers come up with a different solution that is not a rootkit


It's worse than that: BF6's anti-cheat is kernel level and requires Secure Boot to be enabled in Windows-only mode, at least on my motherboard. There is no way I'm going to faff about with my BIOS when rebooting just to play this game.


I don't know how EFI boot works but I am running a gaming PC in dual boot and I have both Microsoft and my own personal secure boot keys loaded (for linux and grub)

I boot my own signed bootloader (grub) from which I can also boot Windows. Windows shows it is in secure boot mode and it works fine with BF6 for me.

But I have a feeling this allows users to run some bootkit/rootkit and bypass any of those kernel level anti-cheats. Maybe I'm wrong and EFI handover to Windows clears all the memory, but I somehow doubt it.


Perhaps a trusted execution environment based anti-cheat system could be possible.

I think Valve said something about working with anti-cheat developers to find a solution for the Steam Deck, but nothing happened. Perhaps they will do something this time.

With a TEE, you could scan the system or even completely isolate your game, preventing even the OS from manipulating it. As a last resort, you could simply blacklist the machine if cheats are detected.

There would probably still be some cheaters, but the numbers would be so low as to not be a problem.


Maybe the user friction would be too much, but I'd be happy for the system to just straight up reboot for games which require anti cheat. So while that game is running, the system is in a verified state. But once you close the game all of your mods and custom drivers can be loaded just fine.


Looking at the specs and marketing copy, it sounds to me like you could secure boot windows 11 on this machine.

> ... a discrete semi-custom AMD desktop class CPU and GPU.

> Yes, Steam Machine is optimized for gaming, but it's still your PC. Install your own apps, or even another operating system. Who are we to tell you how to use your computer?


I'd have Secure Boot, and then one root for a user-modifiable regular Linux installation, and another root that is read-only, signed, with a custom kernel, etc.


All Valve has to do is say “Your software cannot deliberately exclude Linux support, including kernel anti-cheat, to be listed on Steam.” And that would be that: the few devs big enough to make it on their own would leave, and everyone else would adapt.


Worth noting: Valve’s own first-party tournaments for their own game require kernel-level anti-cheat (from a third-party vendor). Valve themselves have given up on letting players in their own title play competitively in a Valve-sponsored event without kernel-level anti-cheat. I can’t imagine they’d ever be this brash.

There is no adapting without a proper solution for securing game integrity.


You clearly are very misinformed on how Valve operates and runs the competitive CS2 environment.

Valve does not require a Kernel Level Anti-Cheat for "first party" tournaments. It is not stipulated anywhere in the Major Rulebook: https://github.com/ValveSoftware/counter-strike_rules_and_re...

The reason third-party anti-cheats are commonplace at these events is because most tournaments opt to use Faceit or similar for game scheduling. This was the case before VRS (with RMRs) and the TO could choose an anti-cheat of their choosing. This always ended up being Faceit AC or whatever platform the matches are scheduled via (For example, PGL used Challenger Mode, which used Akros Anti-Cheat). ESL of course uses Faceit because (ESL Faceit Group).

You do not understand how Majors are run. It is very hands-off from Valve. Only recently, with the introduction of VRS, has Valve started controlling and implementing dedicated rules into the ecosystem for TOs.


> The reason third-party anti-cheats are commonplace at these events is because most tournaments opt to use Faceit or similar for game scheduling. This was the case before VRS (with RMRs) and the TO could choose an anti-cheat of their choosing. This always ended up being Faceit AC or whatever platform the matches are scheduled via (For example, PGL used Challenger Mode, which used Akros Anti-Cheat). ESL of course uses Faceit because (ESL Faceit Group).

No it isn't. They're not using it by happenstance because it is a feature of the platform; they're using it because it would not be competitively viable without it. PGL caught major flak for using Akros [0] because the tool was not good enough at the time to handle a Major qualifier. Just because something is not specified in the rulebook does not mean it is not de facto. Not a single Valve-sponsored Major has ever lacked third-party kernel anti-cheat, from the qualifiers (when they existed) to the VRS-eligible events.

Yes, I am simplifying for the audience by calling them first-party. They're technically all contracted events on a tender process [1] (well, even TI is contracted out to PGL as of late).

The point still stands: events on Counter-Strike sponsored by Valve, with tight in-game integrations in the form of stickers, blog posts[2], and other advertisements, all rely critically on kernel-level anti-cheat for game integrity purposes.

Or to put it more succinctly: there is no viable pathway for a player to get their autograph into Counter-Strike 2 playing on Linux.

[0]: https://www.reddit.com/r/GlobalOffensive/comments/19499bu/ak...

[1]: https://www.hltv.org/news/40764/valve-sets-start-of-march-as...

[2]: Today's blog post for the Starladder Budapest Major: https://store.steampowered.com/news/app/730/view/57827633307...


The games would just leave Steam. The big publishers want their own platforms and launchers anyway.


The big publishers already have their own launcher and platforms and are increasingly moving back onto Steam because they see higher PC player counts and sales when their games are there


That's not the trend that we're observing. As much as publishers and developers want to control their sales channels, the current trend is for them to move towards Steam, not away from it.

The more likely outcome is that developers would segment matchmaking into people with kernel-level anti-cheat, and people without it. This seems fair to me.


Several big publishers did move away from Steam until Valve conceded some of their revenue, reducing their cut from 30% to 25/20% at certain revenue thresholds. That convinced the publishers to return to Steam, but it showed that Valve isn't immune to being flexed on by the bigger players.


Games can leave Steam, but whenever they do they run into the awkward issue that gamers aren't usually coming with them, at least not in numbers that justify trying to create your own thing.


Yeah, I would hope not. Trying to impose your will on suppliers and b2b customers like this is how you get hit with an antitrust lawsuit.


Ironically, this might create a perverse incentive to shift gamers to Linux, as all the hackers jump from Windows to Linux to take advantage of the lack of kernel-mode anti-cheat.


Is there a feasible alternative to "kernel anti-cheat" available on Linux?


There isn't.

When it comes to anti-cheat on Linux, it's basically an elephant in the room that nobody wants to address.

Anti-cheat on Linux would need root access to have any effectiveness. Alternatively, you'd need to be running a custom kernel with anti-cheat built into it.

This is the part of the conversation where someone says anti-cheat needs to be server-side, but that's an incredibly naive and poorly thought out idea. You can't prevent aim-bots server-side. You can't even detect aim-bots server-side. At best, you could come up with heuristics to determine if someone's possibly cheating, but you'd probably have a very hard time distinguishing between a cheater and a highly skilled player.
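To be fair to the heuristics idea, a server-side "snap detector" is easy to sketch. The thresholds below are invented purely for illustration; a real system would need far more signal to separate a cheater from a skilled flick:

```python
def flag_suspicious_aim(yaw_samples, snap_deg=40.0, settle_deg=0.5):
    """Count 'snap-then-freeze' events: a huge per-tick yaw change
    immediately followed by near-zero movement, a common aimbot tell.

    yaw_samples: view yaw in degrees, one sample per server tick.
    """
    events = 0
    # Per-tick absolute change in view angle.
    deltas = [abs(b - a) for a, b in zip(yaw_samples, yaw_samples[1:])]
    # A legitimate flick usually overshoots and corrects; an aimbot
    # often snaps in one tick and then sits perfectly still.
    for prev, nxt in zip(deltas, deltas[1:]):
        if prev >= snap_deg and nxt <= settle_deg:
            events += 1
    return events
```

Which is exactly the problem: a pro's crosshair placement can look a lot like a well-tuned aimbot, so a heuristic like this only gives you a suspicion score, never proof.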

Something I think the anti-anti-cheat people fail to recognize is that cheaters don't care about their cheats requiring root/admin, which makes it trivial to evade anti-cheat that only runs with user-level permissions.

When it comes to cheating in games, there are two options:

1. Anti-cheat runs as admin/root/rootkit/SYSTEM/etc.

2. The games you play have tons of cheaters.

You can't have it both ways: No cheaters and anti-cheat runs with user-level permissions.


I don't fully agree with the 1 and 2 dichotomy. For example, before matchmaking-based games became so popular a lot of our competitive games were on dedicated servers.

On dedicated servers we had a self-policing community with a smaller pool of more regular players and cheaters were less of an issue. Sure, some innocents got banned and less blatant cheaters slipped through but the main issue of cheaters is when they destroy fun for everyone else.

So, for example, with the modern matchmaking systems they could do person verification instead of machine verification. Such as how some South Korean games require a resident registration number to play.

Then when people get banned (or probably better, shadowbanned/low priority queued) by player reports or weaker anti-cheat they can't easily ban evade. But of course then there is the issue of incentivizing identity theft.

And I don't think giving a gaming company my PII is any better than giving them root on my machine. But that seems more like an implementation issue.


> For example, before matchmaking-based games became so popular a lot of our competitive games were on dedicated servers.

I still had a lot of problems with cheaters during this time. And when the admins aren't on you're still then at the whims of cheaters until you go find some other playground to play in.

And then on top of that you have the challenge of actually finding good servers to go join a game with similarly skilled players, especially when trying to play with a group of friends together. Trying to get all your friends on to the same team just for the server to auto-balance you again because the server has no concept of parties sucked. Finding a good server with the right mods or maps you're looking for, trying to join right when a round started, etc was always quite a mess.

Matchmaking services have a lot of extremely desirable features for a lot of gamers.


Except most anti-cheats started on dedicated servers because it turns out most people are not interested in policing other players.

PunkBuster was developed for Team Fortress Classic, even getting officially added to Quake 3 Arena. BattlEye for Battlefield games. EasyAntiCheat for Counter-Strike. I even remember StarCraft 1 ICCup third-party servers having an anti-cheat they called 'anti-hack'.

You can still see this today with modern dedicated servers in CS2: FACEIT and ESEA have additional anti-cheat, not less. Even the modded third-party GTA V server FiveM has its own anti-cheat, called Adhesive.


I would argue a lot of the early anti-cheat was just as much about giving admins and communities better tools to police themselves as it was about automated cheat detection.

Like here's 2006 PunkBuster for Battlefield 2 (BattlEye might have been made for BF:V, but PunkBuster was what I remember servers using). [1]

It automatically kicked on cheat detection but it didn't ban. It provided logs for admins to use for bans. It provided a way for admins to give community players the power to kick. It provided a player GUID based on CD key. It provided an online identity verification/registration system (though I don't remember anyone using this). It let admins take screenshots of players' screens.

[1] https://web.archive.org/web/20060515160425/http://www.evenba...


> So, for example, with the modern matchmaking systems they could do person verification instead of machine verification. Such as how some South Korean games require a resident registration number to play.

If you think the hate for anti-cheat is bad, just wait until you see the hate for identity verification.

I'm actually rather blown away that you would even suggest it.


Rootkit anti-cheats can still often be bypassed using DMA and external hardware cheats, which are becoming much cheaper and increasingly common. There are still cheaters in Valorant and in CS2 on FACEIT, both of which have extremely intrusive ACs that only run on Windows.

At the level of privilege you're granting to play a video game, you'd need a dedicated gaming PC that is isolated from the rest of your home network, lest another CrowdStrike-level issue take place from a bad update to the ring 0 code these systems are running.


I'm not letting a game company have root on my PC. How does that kind of exposure for something as frivolous as gaming even make sense?


Something that is "frivolous" to you is a passion or even a profession for others. Competitive gaming is a massive market worldwide, and it wouldn't exist without the ability to enforce a level playing field. Not everything has to be a holy FOSS war.


"holy FOSS war"?

Why not have a commissar sit behind every gamer to make sure they're not cheating?

That's a startling degree of access to give to these people for access to cosmetic micro-transactions.

But, I guess if all your friends are snorting coke in an alley, FOMO will have you right there with them.


Do you use a separate user to play games? If not, it's kinda useless, since a user-space process can read all your files and the memory of other processes running as the same user.


That's how gaming on Windows works. You're in the minority with that opinion.


Even kernel anti-cheat can be defeated, this is a similar fight to what captchas have.

I can just have my screen recorded and a fake input signal fed in as my mouse/keyboard, or simply hire a pro player to play in my name, and it's absolutely impossible to detect any of these.

The point is to just make it more expensive to cheat, culling out the majority of people who would do so.


But isn't all client-side anti-cheat bypassable by doing image recognition on the rendered image? (either remote desktop or a hardware-based display cable proxy)


Modern cheats are far more advanced than this. Using a DMA cheat, you basically just read the game's memory from a different computer and there's no way for the game to know unless the PCI device ID is known: https://intl.anticheatexpert.com/resource-center/content-68....


DMA is "easy" to patch. No reason to allow a device to have arbitrary memory access. Just require use of IOMMU.

FACEIT has essentially countered most modern cheats, including those using DMA. https://www.faceit.com/en/news/faceit-rollout-of-tpm-secure-...

Nowadays if memory access is needed, you are looking at having to find a way to load a custom BIOS or UEFI module in a way that doesn't mess with secure boot. Even then, certain anti-cheats use frequently firing interrupts to find any unknown code executing on any system threads.
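On Linux you can at least observe whether DMA remapping is active by checking for populated IOMMU groups in sysfs. A sketch; the sysfs path is the standard one, but this check by itself proves nothing to an anti-cheat, since an untrusted kernel could lie about it:

```python
from pathlib import Path

def iommu_active(sysfs="/sys/kernel/iommu_groups"):
    """True if the kernel has created at least one IOMMU group,
    i.e. DMA remapping is enabled (e.g. booted with intel_iommu=on
    or amd_iommu=on and supported hardware)."""
    root = Path(sysfs)
    return root.is_dir() and any(root.iterdir())
```

This is why the TPM/attestation angle matters: the server needs hardware-rooted proof that remapping is on, not the kernel's word for it.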


Yes. Using another machine, record the screen & programmatically move mouse.

At that point you have to look at heuristics (assuming the input device is not trivially detectable vs a legit one).

However, that can obviously only be used for certain types of cheating (e.g. aimbot, trigger bot (shoot when crosshair is on person)).


3. write your codebase in a way which is suspicious of client data and gives the server much more control (easier said than done however)


That's just server-side anti-cheat, which I've already addressed.

Cheating isn't always about manipulating game state, especially in FPSes. There, it's more about manipulating input, ie, auto-aim cheats.


There's a third path:

3. No humans in your multiplayer

As someone who grew up amazed at the Reaper bot for Quake, I'm surprised we don't see a renaissance of making 'multiplayer' fun with more expressive, fallible, unpredictable bots. We're in an AI bubble and I don't hear of anyone chasing the holy grail of believable 'AI' opponents.

This also has the secondary benefit of having your multiplayer game remain enjoyable even when people's short attention spans move on to the next hot live service. Heck this could kill live service games.

Then again, what people get out of multiplayer is, on some unspoken and sad level, making some other person hurt.


There's just nothing like playing against other people. It's so dynamic and fun. Especially games like StarCraft. AI is just nowhere near as engaging.


> AI is just nowhere near as engaging.

A gazillion dollars of VC capital says otherwise!


Cheaters are increasingly sophisticated and hard to detect. It leads me to think that if we put the effort in, we could see the same dynamism and fun emerge, maybe even more so.

If we can't fight 'em, join 'em?


The third option is cloud gaming for everyone.


Today, no. Very simplified, but the broad goal of those tools is to prevent manipulation and monitoring of the in-process state of the game. Consoles and PCs require this to varying degrees by requiring a signed boot chain at minimum. Consoles require a fully signed chain for every program, so you can't deploy a hacking tool anyway; no anti-cheat is needed.

PCs can run unsigned and signed programs, so instead they require the kernel at minimum to be signed and trusted, and then you put the anti-cheat system inside it so it cannot be interfered with. If you do not do this, there is basically no way to actually trust any claim the computer makes about its state. For PCs, the problem is you have to basically trust that the anti-cheat isn't a piece of shit, and thus have to trust both Microsoft and also random corporations. PCs are also generally insecure at the hardware level due to a number of factors, so it only does so much.

You could make a Linux distro with a signed boot chain and a kernel anti-cheat, then you'd mostly need to get developers on board with trusting that solution. Nobody is doing that today, even Valve.

Funny enough, macOS of all things is maybe the "best" theoretical platform for all this, because it does not require you to trust anyone beyond Apple. All major macOS programs are signed by their developers, so macOS as an OS knows exactly where each program came from. macOS can also attest that it is running in secure mode, and it can run a process at user-mode level such that it can't be interfered with by another process.

So you could enforce a policy like this: if Battlefield6.app is launched, it cannot be examined by any other process, but likewise it may run in a full sandbox. Next, Battlefield6.app needs to log in online, so it can ask macOS to provide an attestation saying it is running on genuine Apple hardware in secure mode, and then it can submit that attestation to EA, which can validate it as genuine. Then the program launch is trusted. This setup requires you to only trust Apple security and that macOS is functioning correctly, not EA or whatever, nor does it require actual anti-cheat mechanisms.
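The server side of that flow boils down to "verify a signed, fresh claim". This is not Apple's actual App Attest API; the HMAC below is a hypothetical stand-in for Apple's signature, purely to show the shape of the check:

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared secret standing in for Apple's public-key
# verification; the real flow verifies an Apple-signed attestation.
ATTEST_KEY = b"example-only-key"

def make_attestation(device_id, secure_mode, ts=None):
    """Device side (illustrative): sign a claim about the machine's state."""
    payload = json.dumps(
        {"device": device_id,
         "secure_mode": secure_mode,
         "ts": ts if ts is not None else int(time.time())},
        sort_keys=True).encode()
    sig = hmac.new(ATTEST_KEY, payload, hashlib.sha256).hexdigest()
    return payload, sig

def verify_attestation(payload, sig, max_age=300, now=None):
    """Server side: check the signature, freshness, and secure-mode flag."""
    expected = hmac.new(ATTEST_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False  # tampered or forged claim
    claims = json.loads(payload)
    now = now if now is not None else int(time.time())
    # Reject stale tokens so a captured attestation can't be replayed forever.
    return claims["secure_mode"] and now - claims["ts"] <= max_age
```

The real trust anchor in the macOS story is that only the OS (backed by the Secure Enclave) can produce a valid signature, which is exactly what the toy shared key can't give you.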


I wonder what ever happened to all those AI based anti-cheat solutions that I heard about. Was that last year maybe?


Even without anticheat, ProtonDB has a lot of "gold" ratings it really shouldn't; the comments explain the real experience. See BeamNG and AOE2:DE.


Do we know what kernel SteamOS uses? Is it built on Linux, or could it be some sort of kiosk-mode Windows where this would be a non-issue? One could hope, but I truly don't know.


SteamOS on the Deck is just a standard (tuned) Linux distribution under the hood. It would be very surprising to me if Valve shifted to an entirely different OS for the Cube.


Ahh cool, thanks


It is running Valve's immutable fork of Arch Linux; you can find their source package mirror online.


Awesome, thanks!


Same. I mainline Destiny2 (well, a bit less these days), and Bungie won't support Linux/Steam Deck because they depend on BattlEye kernel anti-cheat.

(and yet still have a problem with cheaters, see all the bans following the Desert Perpetual raid race)


BattlEye is supported on Linux and Steam Deck; Bungie simply decided not to enable support for it. https://areweanticheatyet.com/?search=battleye&sortOrder=&so...


Some cursory clicking about didn't reveal the actual corpus they used, only that it is several trillion tokens 'divided across the languages'. I'm curious mainly because Irish (among other similarly endangered languages on the list) typically has any large corpus come from legal/governmental texts that are required to be translated. There must surely be only a relatively tiny amount of colloquial Irish in the corpus. It'd be interesting to see some evals in each language, particularly with native speakers.

I think LLMs may be on the whole very positive for endangered languages such as Irish, but before it becomes positive I think there's an amount of danger to be navigated (see Scots Gaelic wikipedia drama for example)

In any case I think this is a great initiative.


That tracks. I learnt Gaeilge Uladh growing up and standard Irish feels like reading or writing a legal agreement compared to the spoken word…


Can you provide a link about the “Scots Gaelic Wikipedia drama” you reference? I've heard of drama related to the Scots Wikipedia but that has nothing to do with Gaelic.


My apologies, it was the Scots Wikipedia, careless of me. Link for context: https://en.wikipedia.org/wiki/Scots_Wikipedia#Controversy


I have to wonder if the 'reward' of higher limits for setting the browser as the default is in conflict with the Digital Markets Act (although as I understand it, OpenAI isn't currently seen as a 'gatekeeper' under that act - but I would be surprised if that didn't change in the coming years).


I mean, this post is an hour old and the other only 5 minutes. Who's duping who here :)


I'm with you, but the other one comes from a favored account.


Oh well, I guess I will have to get my self-esteem from a different number on the internet!

