You can run AI models on unified/shared memory on Windows specifically, not Linux (unfortunately). It uses the same memory-sharing system Microsoft originally built for gaming, for when a game runs out of VRAM. If you:
- have an i5 or equivalent or better, manufactured within the last 5-7 years
- have an Nvidia consumer gaming GPU (RTX 3000 series or better) with at least 8 GB of VRAM
- have at least 32 GB of system RAM (tested with DDR4 on my end)
- build llama.cpp yourself with every applicable compiler optimization flag
- pair it with a MoE model that fits in your unified memory
- and configure MoE offload to the CPU to reduce memory pressure on the GPU (sketched below)
then you can honestly get to about 85-90% of cloud AI capability entirely on-device, depending on what program you use to interface with the model.
And here's the shocking part: those system specs can be met by an off-the-shelf gaming computer from, say, Best Buy or Costco right now. You can literally buy a CyberPower or iBuyPower machine, download the source, run the compilation, and have that level of AI inference available to you.
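If you want to see what the run side looks like, here's a minimal sketch using the llama-cpp-python bindings. The model path, layer split, and thread count are placeholders, and the newer MoE-specific offload flags in llama.cpp may not be exposed under these exact names in the Python bindings, so this only shows the general idea of splitting a model between VRAM and system RAM:

```python
# Minimal sketch: partial GPU offload with llama-cpp-python.
# Assumptions: a local GGUF file exists at MODEL_PATH, and the bindings
# were built with CUDA support (otherwise n_gpu_layers is ignored).
from llama_cpp import Llama

MODEL_PATH = "models/my-moe-model.gguf"  # hypothetical path

llm = Llama(
    model_path=MODEL_PATH,
    n_gpu_layers=20,   # keep only this many layers in VRAM; the rest
                       # stays in system RAM (the "unified memory" side)
    n_ctx=8192,        # context window; raise or lower to fit your RAM
    n_threads=8,       # CPU threads for the layers left on the CPU
)

out = llm("Explain MoE expert offloading in one paragraph.", max_tokens=200)
print(out["choices"][0]["text"])
```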
Now, the reason it won't work on Linux is that the Linux kernel and Linux distros both leave that unified memory capability up to the GPU driver to implement, which Nvidia hasn't done yet. You can hack some of it in at the source level, but from what I've read it's still super unstable and flaky.
(In fact, that lack of unified memory tech on Linux is probably why everyone feels the need to build all these data centers everywhere.)
> the Linux kernel and Linux distros both leave that unified memory capability up to the GPU driver to implement
That depends on whether AMD (or Intel, since the Arc drivers are supposedly open source as well) took the time to implement it, or whether a Linux-based OS/distro implements an equivalent of the Windows Display Driver Model (which needs code outside of the kernel, specific to that OS/distro).
So far, though, it seems like people are more interested in pointing fingers and sucking up the water of small town America than actually building efficient AI/graphics tech.
> This is the best kind of open source trickledown.
We shouldn't be depending on trickledown anything. It's nice to see Valve contributing back, but we all need to remember that their contributions can vanish behind proprietary licensing at any time.
They have to abide by the Wine license, which is the LGPL, so unless they're going to write their own from scratch, they can't make the bread and butter of their compat layer proprietary.
There is absolutely nothing harmful about permissive licenses. Let's say that Wine was under the MIT license, and Valve started publishing a proprietary fork. The original is still there! Nobody is harmed by some proprietary fork existing, because nothing was taken away from them.
It's a little more nuanced than that. Software and gained freedoms survive not because they exist, but because they are being actively maintained. If your original, never-taken-away software does not get continually maintained, then:
* It will slowly go stale, for example, it may not get ported to newer, increasingly expected desktop APIs.
* It will lose users to competing software (such as your proprietary fork) that is better maintained.
As a result, it loses its relevance and utility over time. People that never update their systems can continue using it as they always have, assuming no online-only restrictions or time-limited licenses. But to new use cases and new users, the open software is now less desirable and the proprietary fork accumulates ever more power to screw over people with anti-consumer moves. Regulators ignore the open variant due to its niche marketshare, increasing the likelihood of things going south.
Harm can be done to people who don't have alternatives. In order to have alternatives, you need either a functioning free market or a working, relevant, sufficiently usable product that can be forked if worse comes to worst. Free software can of course help in establishing a free market, it isn't one or the other.
If a proprietary product takes over from one controlled by the community, much of the time it's not a problem. It can be replaced or done without.
If a proprietary platform takes over from one controlled by the community, something that determines not only how you go about your business but what other people expect from you, everyone gets harmed. The problem with a lot of proprietary software is that every company and their dog wants their product to become a platform and reshape the market to discourage alternatives.
MIT by itself does no harm. If it works like LLVM and everyone contributes because that makes more sense than developing a closed-off platform, then great! If it helps bootstrap a proprietary market leader while the once-useful open original shrivels into irrelevance, not as great.
A decade or two ago Wine was under a permissive license (MIT, I think). When proprietary forks started appearing, CodeWeavers (which employs all the major Wine contributors) relicensed it under the LGPL.
It's harmful to the ecosystem, because the reason so many Linux drivers, Wine contributions, and a lot of other things are free software today is the GPL.
From what I've read, it was a lot of the prosumer/gamer brands (MSI, Gigabyte, ASUS) implementing their part of sleep/hibernate badly on their motherboards. That honestly lines up with my experience with them and the other chips they use (in my case, USB controllers): lots of RGB and maybe overclocking tech, but the cheapest power-management and connectivity chips they can get (arguably what usually gets used the most).
And my wife's MacBook Air wakes itself up again if it tries to suspend while connected to a Dell monitor. Apple has plenty of bugs too, and only Apple can fix them.
This is part of Secure Boot, which Linux people have raged against for a long time, mostly because the main key-signing authority was Microsoft.
But here's the rub: no one else bothered to step up as a key signer. Everyone has instead spent 15 years whining and telling people to disable Secure Boot, and the pile of trusted-computing tech that depends on it, instead of actually building and running the infrastructure needed to give everyone a Secure Boot authority outside of big tech. Not even Red Hat/IBM has done it, even though they have the infrastructure for it.
Secure Boot and signed kernels are proven tech. But the Linux world absolutely needs to pull their heads out of their butts on this.
The goals of the people mandating Secure Boot are completely opposed to the goals of people who want to decide what software runs on the computer they own. Literally the entire point of remote attestation is to take that choice away from you (e.g., because they don't want you to choose to run cheating software). It's not a matter of "no one stepped up"; it's that Epic Games isn't going to trust my Secure Boot key for a kernel I built myself.
The only thing Secure Boot provides is the ability for someone else to measure what I'm running, and therefore the ability to tell me what I can run on the device I own (most likely leading to them demanding I run malware like the adware/spyware bundled into Windows). I don't have an evil maid to protect against; such attacks are a completely non-serious argument for most people.
Is there any even theoretically viable way to prevent cheats from accessing a game you're running on a local machine without also disabling full user control of your system?
I suppose something like a "reboot into 'secure' mode" to enable the anti-cheat and such, or maybe we'll just get cloud streaming or whatever, where literally the entire game runs remotely and streams video frames to the user.
> anti-cheat far precedes the casinoification of modern games.
> nobody wants to play games that are full of bots. cheaters will destroy your game and value proposition.
You are correct, but I think I did a bad job of communicating what I meant. It's true that anti-cheat has been around since forever. However, what's changed relatively recently is anti-cheat integrated into the kernel, alongside requirements for signed kernels and Secure Boot. That dates back to 2012, right as games like Battlefield started introducing gambling mechanics.
There were certainly other games with some gambling-ish aspects before that, but the early 2010s is pretty close to when esports, along with in-game gambling, started to bud.
There are plenty of locked down computers in my life already. I don't need or want another system that only runs crap signed by someone, and it doesn't really matter whether that someone is Microsoft or Redhat. A computer is truly "general purpose" only if it will run exactly the executable code I choose to place there, and Secure Boot is designed to prevent that.
I don't know about the ecosystem overall, but Fedora has been working for me with Secure Boot enabled for a long time.
Having the option to disable Secure Boot was probably due to backlash at the time and antitrust concerns.
Aside from providing protection against "evil maid" attacks (right?), Secure Boot is in the interest of software companies, just like platform "integrity" checks.
I'm not giving a game ownership of my kernel; that's fucking insane. That will lead to nothing but other companies using the same tech to enforce other things, like what software you can run on your own stuff.
Oh yeah, for sure. Linux is amazing in a computer-science sense, but it still can't beat Windows' vertically integrated registry/GPO-based permissions system. Group/Local Policy especially, since it's effectively a zero-coding-required system.
Ubuntu just recently got a way to automate its installer (recently being during covid). I think you can do the same on RHEL too. But that's largely it on Linux right now. If you need to admin 10,000+ computers, Windows is still the king.
Debian (and thus Ubuntu) has had full support for automated installs since the '90s; it's been built into `dpkg` since forever. That includes saving or generating answers to install-time questions, PXE deployment, ghosting, cloud-init, and everything else. Then stuff like Ansible/Puppet has been automating deployment for a long time too. They might have added yet another way of doing it, but full-stack deployment automation has been there for as long as Ubuntu has existed.
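For anyone who hasn't seen it, the preseed side is just a list of answers to the installer's debconf questions. A rough sketch (the question names are illustrative and can differ between installer releases):

```python
# Sketch of a Debian/Ubuntu preseed file, emitted from Python so it can be
# templated per host. Question names are illustrative and may differ between
# installer releases; check the installation guide for your version.
import textwrap

preseed = textwrap.dedent("""\
    d-i debian-installer/locale string en_US.UTF-8
    d-i keyboard-configuration/xkb-keymap select us
    d-i netcfg/get_hostname string build-host-01
    d-i passwd/make-user boolean true
    d-i passwd/username string ops
    d-i pkgsel/include string openssh-server
    d-i finish-install/reboot_in_progress note
""")

with open("preseed.cfg", "w") as f:
    f.write(preseed)

# The file is then served over HTTP/PXE and referenced via a
# preseed/url=... option on the installer's kernel command line.
```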
> Ubuntu just recently got a way to automate its installer (recently being during covid). I think you can do the same on RHEL too. But that's largely it on Linux right now. If you need to admin 10,000+ computers, Windows is still the king.
1. cloud-init support was in RHEL 7.2, which was released on November 19, 2015. A decade ago.
2. Checking on Ubuntu, it looks like it was supported in Ubuntu 18.04 LTS in April 2018.
3. For admining tens of thousands of servers, if you're in the RHEL ecosystem you use Satellite and its Ansible integration. That's also been going on for... about a decade. You don't need much integration, though, other than a host list of names and IPs.
There are a lot of people here handling tens of thousands or hundreds of thousands of Linux servers a day (probably a few in the millions).
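For reference, a first-boot cloud-init user-data file is just a small YAML document. A sketch (the hostname, user, and packages are placeholders; which modules are available depends on the image):

```python
# Sketch: generate a minimal cloud-init user-data document.
# Hostname, user, key, and packages are placeholders; module availability
# depends on the distro image's cloud-init build.
import yaml  # pip install pyyaml

user_data = {
    "hostname": "app-0001",
    "packages": ["nginx", "chrony"],
    "users": [{
        "name": "ops",
        "groups": "sudo",
        "shell": "/bin/bash",
        "ssh_authorized_keys": ["ssh-ed25519 AAAA... ops@example"],
    }],
    "runcmd": [["systemctl", "enable", "--now", "nginx"]],
}

# cloud-init only treats the file as cloud-config if it starts with this header.
print("#cloud-config")
print(yaml.safe_dump(user_data, sort_keys=False))
```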
> Ubuntu just recently got a way to automate its installer (recently being during covid). I think you can do the same on RHEL too. But that's largely it on Linux right now. If you need to admin 10,000+ computers, Windows is still the king.
What?! I was doing Kickstart on Red Hat (it wasn't called Enterprise Linux back then) at my job 25 years ago; I believe we were using floppies for that.
Yeah, I have been working on the RHEL and Fedora installer since 2013, and already back then it had a long history almost lost to time. The Git history goes all the way back to 1999 (it was imported from CVS, as the project predates Git), and that only covers the first graphical interface; it had automated installation support via Kickstart and a text interface long before that, but that commit history has apparently been lost. There also seems to have been some even earlier, distinct installer before Anaconda, which likely supported some sort of automated install as well.
BTW, for anyone who might be interested, we managed to get the earliest history of the project written down by one of the earliest contributors here:
Note how some commands were introduced way back in the single-digit Fedora/Fedora Core era, which ran from about 2003 to 2008. The latest Fedora is Fedora 43. :)
I'm not an implementer of Group Policy, more of a consumer. There are two things I find extremely problematic about it in practice.
- There does not seem to be a way to determine which machines in the fleet have successfully applied a policy. If you need a policy to be active before deploying something else (via a different method), or things break, what do you do?
- I’ve had far too many major incidents that were the result of unexpected interactions between group policy and production deployments.
That's not a problem with group policy. You're just complaining that GPO is not omnipotent. That's out of scope for group policies, mate. You win, yeah yeah... Bye.
> Element has ended up focusing on digitally-sovereign govtech (https://element.io/en/sectors) in order to prevail, and it's left a hole in the market.
And unless they have verifiable testimonials, I'd take their reach with a grain of salt. Anyone can plaster a bunch of public domain government and defense logos all over their website.
"Secure" in the sense that they can sue someone after the fact, instead of preventing data from leaking in the first place.