This is cool. I'm observing a trend of "build a tiny version from the ground up to understand it," à la Karpathy's micrograd/minGPT. Seems like one of the best ways to learn.
I switched my desktop from macOS (10+ years) to Ubuntu 25 last year and I'm not going back. The latest release includes a Gnome update which fixed some remaining annoyances with high res monitors.
I'd say it pretty much "just works" except less popular apps are a bit more work to install. On occasion you have to compile apps from source, but it's usually relatively straightforward and on the upside you get the latest version :)
For anyone who develops software professionally, I'd say the pros now outweigh the cons for your work machine.
> The latest release includes a Gnome update which fixed some remaining annoyances with high res monitors.
Interesting, I've had to switch away from Gnome after the new release changed the choices for HiDPI fractional scaling. Now, for my display, the only supported scaling options are "perfect vision" and "legally blind".
By default Gnome doesn’t let you choose any fractional scaling in the UI because it has some remaining TODOs on that front. So from the UI you choose 100% or 200%. But the code is there and it works if you just open a terminal and type a command to enable this “experimental” feature.
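For reference, on a Wayland session the switch is usually a one-line gsettings call (from memory, so double-check the key against your GNOME version):

    gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"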
Now, whether this feature should have remained experimental is a different debate. I personally find it similar to how Gmail labeled itself beta for years.
I've got the feature turned on. But Gnome 49 only supports fractional scaling ratios that divide your display into a whole, integer number of pixels. And they only calculate candidate ratios by dividing your resolution up to a denominator of 4.
So on my Framework 13, I no longer have the 150% option. I can pick 133%, double, or triple. 160% would be great, but that requires a denominator of 5, which Gnome doesn't evaluate. And you can't define your own values in monitors.xml anymore.
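For the curious, here's a tiny sketch of the constraint being described (not mutter's actual code, and I'm assuming the original 2256x1504 Framework 13 panel):

    #include <stdio.h>

    /* Sketch of the constraint described above, not mutter's real algorithm:
     * keep a fractional scale n/d only if the panel divides into a whole
     * number of logical pixels, and GNOME only tries denominators up to 4. */
    int main(void) {
        const int w = 2256, h = 1504;             /* assumed Framework 13 panel */
        for (int d = 2; d <= 5; d++)              /* GNOME stops at d = 4 */
            for (int n = d + 1; n < 2 * d; n++)   /* fractional scales below 2x */
                /* logical size would be (w*d/n) x (h*d/n); keep it if integral */
                if ((w * d) % n == 0 && (h * d) % n == 0)
                    printf("%d%% (%d/%d)%s\n", 100 * n / d, n, d,
                           d > 4 ? "  <- needs the denominator GNOME skips" : "");
        return 0;
    }

Against 2256x1504 that leaves 133% as the only fractional option with a denominator of 4 or less, and 160% only appears once you allow a denominator of 5, which matches the options described above.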
I switched in 1999. I've never really had any problems in all that time.
Although it was to BSDi then, and then FreeBSD and then OpenBSD for 5 years or so. I can't remember why I switched to Debian but I've been there ever since.
But what about laptops? I don't use desktop machines anymore (last time was in 2012). Apple laptops are top notch. I use Ubuntu as a headless VM for software development, though.
Not working with Linux is a function of Apple, not Linux. There is a crew who have wasted the last half decade trying to make Asahi Linux, a distro to run on ARM MacBooks. The result, after all that time, is an almost reasonably working OS on old hardware; meanwhile Apple released the M4 and crippled the whole effort. There's been a lot of drama around the core team, who have tried to cast blame, but it's clear they're frustrated that the OEM would rather Asahi didn't exist.
I can't personally consider a laptop which can't run linux "top notch." But I gave up on macbooks around 10 years ago. You can call me biased.
I just put Asahi on an M2 Air and it works so incredibly well that I was thinking this might finally be the year Linux takes the desktop. I wasn't aware of the drama with Apple, but I imagine M2 hardware will become valuable and sought after over M3+ just for the ability to run Asahi.
The really sad thing is Alyssa Rosenzweig was doing Libreboots on potato ARM laptops a few years ago. Asus C201 if I remember correctly. Alyssa went on to create Panfrost, which was fucking incredible. Then Alyssa left freedom and started working on Asahi instead. Now Lenovo is shipping a bad ass ARM chromebook with benchmarks in the M2 macbook territory, and where did Alyssa go? To work for proprietary Intel. There's a song playing in my head right now, Stabbing Westward: The thing I hate.
Had Alyssa stuck with freedom, we would have had a very nice HP Chromebook x360 13b-ca0047nr, fully repairable, fully free cpu, gpu, and wifi, like a few years ago. 2016 Macbook Pro tier laptop, not at all shabby.
And now today, an even better Lenovo chromebook 3nm, 16GBs RAM, even a 50 tops NPU... but no. Alyssa had to go chase proprietary Apple. We have the hardware today. FSF could be selling fully free RYF ARM machines right now. Like FULLY free, all the way down to the EC, below the boot loader, below the CPU firmware even. But they aren't. The talent jumped ship for a soulless corporate paycheck.
I'm not faulting anyone for making a living either, I understand. But I'm pretty sure Alyssa was making a decent living with Collabora. Now Intel has their claws in, and will bury that brilliant developer in a back office doing miserable work. Whatever money Intel is paying, it wasn't worth the pride and impact that could have been made in software freedom.
It's just sitting right there. Victory is just lying there for someone to pick it up. But nobody with the talent is even trying now.
Best you can do is build a high end desktop at home and access it remotely with any laptop you desire. The laptop performance then becomes mostly irrelevant (even the OS is less relevant) and by using modern game streaming protocols you can actually get great image quality, low latency and 60+ fps. Though, optimizing it for low bandwidth is still a chore.
Have that desktop be reachable with SSH for all your CLI and sys admin needs, use sunshine/moonlight for the remote streaming and tailscale for securing and making sunshine globally available.
Bandwidth is not really a problem if you live in a decent city. The problems are latency and data usage. One hour of streaming consumes GBs of data, which is a big problem if you're on a cellular network.
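The arithmetic behind the "GBs of data" figure (just a sketch; the bitrates are arbitrary examples):

    #include <stdio.h>

    /* Rough data-use math: GB per hour = Mbit/s * 3600 s / 8 bits-per-byte / 1000. */
    int main(void) {
        const double rates_mbps[] = { 2, 5, 10, 20, 50 };   /* example stream bitrates */
        for (int i = 0; i < 5; i++)
            printf("%5.0f Mbps  ->  %4.1f GB/hour\n",
                   rates_mbps[i], rates_mbps[i] * 3600 / 8 / 1000);
        return 0;
    }

Even a fairly conservative 10 Mbps stream works out to about 4.5 GB an hour, which burns through a typical mobile data cap very quickly.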
Latency is another problem: a recent LTT video showed that even 5-10ms of added latency can negatively impact your gaming performance, even if you don't notice it. You begin to notice at around 20ms.
How is bandwidth not a problem if data usage on a cellular network is? You can dramatically lower your data usage by constraining bandwidth to, say, ~2 Mbps, but keeping a decent image at that rate requires sacrifices: lowering the resolution, or using a software encoder that squeezes as much quality as possible out of 2 Mbps at the cost of some added latency (which won't matter much, since you're already incurring latency from your internet connection). You can also switch to a wi-fi hotspot once that's an option, and then even lift the bandwidth restrictions.
Regarding latency, this solution is meant as a way to use your notebook for any task, not just gaming. You can still play and enjoy most fps games with a mouse even at 20ms of extra latency, and you can tolerate much more when playing games with a gamepad. If you need to perform your best on a competitive match of cs2 you obviously should be on a wired connection, in front of a nice desktop pc (the very same you were using to stream to your notebook perhaps) and with a nice high refresh rate monitor. Notebooks are usually garbage for that anyways.
I did some investigation into this the other day. The short answer seems to be that if you like MacBooks, you aren't willing to accept a downgrade along any axis, and you really want to use Linux, your best bet today is an M2 machine. But you'll still be sacrificing a few hours of battery life, Touch ID support (likely unfixable), and a handful of hardware support edge cases. Apple made M3s and M4s harder to support, so Linux is still playing catch-up on getting those usable.
Beyond that, Lunar Lake chips are evidently really really good. The Dell XPS line in particular shows a lot of promise for becoming a strict upgrade or sidegrade to the M2 line within a few years, assuming the haptic touchpad works as well as claimed. In the meantime, I'm sure the XPS is still great if you can live with some compromises, and it even has official Linux support.
> Linux is still playing catch-up on getting those usable
This is an understatement. It is completely impossible to even attempt to install Linux at all on an M3 or M4, and AFAIK there have been no public reports of any progress or anyone working on it. (Maybe there are people working on it, I don’t know).
In his talk a few days ago, one of the main Asahi developers (Sven) shared that there is someone working on M3 support. There are screenshots of an M3 machine running Linux and playing DOOM at around 31:34 here: https://media.ccc.de/v/39c3-asahi-linux-porting-linux-to-app...
Sounds like the GPU architecture changed significantly with M3. With M4 and M5, the technique for efficiently reverse-engineering drivers using a hypervisor no longer works.
What I mean is: on a normal laptop, when you scroll with two fingers on the touchpad, the distance you scroll is nearly a continuous function of how far you move your fingers; that is, if you only move your fingers a tiny bit, you will only scroll a few pixels, or just one.
Most VM software (at least all of it that I've tried) doesn't properly emulate this. Instead, after you've moved your fingers some distance, it's translated to one discrete "tick" of a mouse scroll wheel, which causes the document to scroll a few lines.
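For what it's worth, on the Linux side you can watch exactly what the system is being handed: a hi-res mouse wheel (or libinput's touchpad scrolling, computed one layer up) works with fine-grained deltas, while a typical virtual mouse only delivers coarse REL_WHEEL ticks. A rough sketch that dumps both kinds of events (the device path is a placeholder for whatever event node your pointer is):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <linux/input.h>

    #ifndef REL_WHEEL_HI_RES
    #define REL_WHEEL_HI_RES 0x0b              /* added in kernel 5.0 headers */
    #endif

    int main(int argc, char **argv) {
        /* placeholder path; point it at your mouse/touchpad event node */
        const char *path = argc > 1 ? argv[1] : "/dev/input/event5";
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct input_event ev;
        while (read(fd, &ev, sizeof ev) == (ssize_t)sizeof ev) {
            if (ev.type != EV_REL) continue;
            if (ev.code == REL_WHEEL)              /* coarse: one value per detent */
                printf("REL_WHEEL        %d\n", ev.value);
            else if (ev.code == REL_WHEEL_HI_RES)  /* fine: 1/120 of a detent */
                printf("REL_WHEEL_HI_RES %d\n", ev.value);
        }
        return 0;
    }

Inside most VMs you only ever see the coarse REL_WHEEL line, one tick at a time, which is exactly the few-lines-per-gesture behaviour described above.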
The VM software I use is UTM, which is a frontend to QEMU or Apple Virtualization framework depending on which setting you pick when setting up the VM.
I have the HP ZBook Ultra G1a: AMD Ryzen AI Max+ 395, 128GB RAM, 4TB 2280 SSD. Works great with Ubuntu 24.04 and the OEM kernel. Plays Steam games, runs OpenCL AI models. Only nit is that it's very picky about which USB PD chargers it will actually charge from. UGreen has a 140W one that works.
I've had Linux running on a variety of laptops since the noughties. I've had no more issues than with Windows. ndiswrapper was a bit shit but did work back in the day.
I haven't, because I buy hardware that's designed to work with Linux. But if you buy hardware that doesn't have Linux drivers, it just won't work. That might mean Wifi not working, it might mean a fingerprint reader not working, etc.
I don't have an x86 laptop at the moment, so I'm sticking with a MacBook for now. My assumption is that Mac laptops are still far superior given the M-series chips and an OS tuned for battery efficiency. Would love to find out this is no longer the case.
My HP ZBooks have been a dream. My current Studio G10 with an i9-13900 and 4070M has largely Just Worked™ with recent versions of both Fedora and Ubuntu.
HP releases firmware updates on LVFS for both the ZBook and its companion Thunderbolt 4 dock(!). They also got it Ubuntu certified, like most of their business laptops.
Again, I've had two 4K monitors on Linux for about ten years, and it has worked well the whole time. Back then I used "gnome tweak" to increase the size of widgets etc. Nowadays it's built into MATE, Cinnamon, etc.
That's why I don't think Ubuntu is a newbie distro. You never have to compile from source on Arch-based distros. Obviously plain Arch isn't fit for beginners, but I would argue that something like EndeavourOS or CachyOS is easier to use than Ubuntu. If you want to install something, you just run one command, and then it is installed, 99.99% of the time.
Did you start using Linux on the Mac hardware or on PC hardware? I have a late era Intel Macbook and was considering switching it to Ubuntu or Debian since it is getting kinda slow.
Not the OP, but I have a 2015 Macbook Pro and a desktop PC both running Linux. I love Fedora, so that's on the desktop, but I followed online recommendations to put Mint on the Macbook and it seems to run very well. However, I did need to install mbpfan (https://github.com/linux-on-mac/mbpfan) to get more sane power options and this package (https://github.com/patjak/facetimehd) to get the camera working. It runs better than Mac OS, but you'll need to really tweak some power settings to get it to the efficiency of the older Mac versions.
I switched to a new x86 machine. Running Linux on Mac hardware just made things unnecessarily complicated and hurt performance. I'm still open to using Docker on Mac to run Linux containers, but once you want a GUI, life is simpler having switched off.
I don't know if you can generally say that "LLM training is faster on TPUs vs GPUs". There is variance among LLM architectures, TPU cluster sizes, GPU cluster sizes...
They are both designed to do massively parallel operations. TPUs are just a bit more specific to matrix multiply+adds while GPUs are more generic.
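To make "matrix multiply+adds" concrete, the hot loop in both cases is the multiply-accumulate below; a TPU hard-wires a big grid of these into its matrix unit, while a GPU spreads the same arithmetic across thousands of more general-purpose threads. A deliberately naive sketch, nothing like how you'd write it for either chip:

    #include <stdio.h>

    #define N 4   /* tiny example size */

    /* C = A * B: the multiply-accumulate in the inner loop is the operation
     * both TPUs and GPUs are built to do in bulk. */
    void matmul(float A[N][N], float B[N][N], float C[N][N]) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                float acc = 0.0f;
                for (int k = 0; k < N; k++)
                    acc += A[i][k] * B[k][j];   /* multiply + add */
                C[i][j] = acc;
            }
    }

    int main(void) {
        float A[N][N] = {{1.0f}};   /* A[0][0] = 1, everything else 0 */
        float B[N][N] = {{2.0f}};   /* B[0][0] = 2 */
        float C[N][N];
        matmul(A, B, C);
        printf("C[0][0] = %.1f\n", C[0][0]);   /* prints 2.0 */
        return 0;
    }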
What Raspberry Pi is to Broadcom (developer-friendly SBCs), Beagleboard is to TI.
It's a slightly different approach -- Beagleboard is a non-profit and emphasizes openly purchasable components. But similar in that it is the cheapest way to tinker with SoCs from that vendor.
I would expect nearly every active AI engineer who trained models in the pre-LLM era to be up to speed on the transformer-based papers and techniques. Most people don't study AI and then decide "I don't like learning" when the biggest AI breakthroughs and ridiculous pay packages all start happening.
I don't expect the majority of tech companies to want to run their own physical data centers. I do expect them to shift to more bare-metal offerings.
If I'm a mid to large size company built on DynamoDB, I'd be questioning if it's really worth the risk given this 12+ hour outage.
I'd rather build upon open source tooling on bare metal instances and control my own destiny, than hope that Amazon doesn't break things as they scale to serve a database to host the entire internet.
For big companies, it's probably a cost savings too.
I think that prediction severely underestimates the amount of cargo culting present at basically every company when it comes to decisions like this. Using AWS is like the modern “no one ever got fired for buying IBM”.
That's cool, any success stories, challenges or other feedback you can share?
I've only heard of people using Coral PCIe / USB for edge image AI processing tasks like classifying subjects in a stream. Curious if you have the same use case or something different!
I'm trying to make a DIY security camera that can run local models, and stream video over wifi.
The TI SDK makes it easy to run demos but making any custom apps quickly gets complicated unless you are familiar with embedded Linux dev, Yocto, etc. Certainly much more complex than iOS/Android.
Hopefully over time the tools for embedded can catch up to mobile.
I didn’t downvote you, but the “this tool is bad and if you take the time to argue with me you’re a Rust cultist” line is a bit tiresome. Damned if you do, damned if you don’t.
It’s a bit like if anyone who said you should switch to desktop Linux got yelled at for being in the pocket of Big Systemd.
“if you like it, use it” is well and good, but haranguing people who explain why they like/use what they use is just as lame as the purported cult defense of uv or whatever tool is popular.
I dunno man, fads and stupid fixations happen in software sometimes, but most of the time hyped tools are hyped because they’re better.
There's still relevance in making it stupidly easy to make an LED blink and make basic apps on circuit boards. Education + weekend hardware hackers might look for something different in a framework than a professional.
But certainly for pro use cases the hardware specific frameworks are way more powerful (but also complex).
The native AVR libraries are really good. It's not quite as idiomatic as Arduino, but it's really not all that different.
Beginners can learn frameworks more complicated than Arduino, and I think they should. Before Arduino, beginners were expected to write plain C or assembly, and the industry got along just fine. There were still countless hackers and weekend tinkerers. They just had to learn more, which is not a bad thing.
If by native AVR, you mean avr-libc, it's nothing at all like Arduino.
Instead of analogRead, you need to write your own busy loop watching certain bits in a register (or ISR), you need to twiddle bits in several registers to set up the ADC the way you want it, etc.
Serial.write? Nope, gotta read the docs, twiddle some bits again, and then you actually do get to use printf.
Those two right there are big hurdles to someone new to microcontrollers. In fact, they're a hurdle to me and I've read AVR datasheets for fun.
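For anyone who hasn't seen it, here's roughly what those two points look like in plain avr-libc on an ATmega328P (a sketch written against the datasheet from memory, not drop-in code; the clock, baud rate, and ADC channel are just examples):

    #define F_CPU 16000000UL                 /* example: 16 MHz part */
    #define BAUD  9600                       /* example baud rate */

    #include <avr/io.h>
    #include <stdio.h>
    #include <util/setbaud.h>
    #include <util/delay.h>

    /* What analogRead() hides: configure the ADC, start a conversion,
     * busy-wait on the ADSC bit, then read the 10-bit result. */
    static uint16_t adc_read(uint8_t channel) {
        ADMUX  = (1 << REFS0) | (channel & 0x0F);            /* AVcc ref, pick channel */
        ADCSRA = (1 << ADEN) | (1 << ADSC)                   /* enable + start */
               | (1 << ADPS2) | (1 << ADPS1) | (1 << ADPS0); /* /128 prescaler */
        while (ADCSRA & (1 << ADSC))                         /* the busy loop */
            ;
        return ADCW;                          /* ADCL and ADCH combined */
    }

    /* What Serial.begin()/print() hide: set up the UART, then hook it
     * into stdio so printf() works. */
    static int uart_putc(char c, FILE *stream) {
        (void)stream;
        while (!(UCSR0A & (1 << UDRE0)))      /* wait for empty transmit buffer */
            ;
        UDR0 = c;
        return 0;
    }

    static FILE uart_out = FDEV_SETUP_STREAM(uart_putc, NULL, _FDEV_SETUP_WRITE);

    int main(void) {
        UBRR0H = UBRRH_VALUE;                 /* baud divisor from <util/setbaud.h> */
        UBRR0L = UBRRL_VALUE;
    #if USE_2X
        UCSR0A |= (1 << U2X0);
    #endif
        UCSR0B = (1 << TXEN0);                /* transmitter only */
        UCSR0C = (1 << UCSZ01) | (1 << UCSZ00);   /* 8N1 */
        stdout = &uart_out;

        for (;;) {
            printf("ADC0 = %u\n", adc_read(0));
            _delay_ms(500);                   /* don't flood the UART */
        }
    }

None of it is hard once you've read the datasheet, but it's a lot more to absorb up front than analogRead() and Serial.println().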
Nobody has made embedded MCU programming as simple as Arduino. There are so many open source libraries in the Arduino ecosystem to do almost anything that much of your programming becomes plug-and-play and accessible to all. You can then ship it, as long as your power/performance budgets are met.
A few years ago they went professional with their "Arduino PRO" industrial hardware and a good Cloud IoT platform. Again they gave you a simple software interface to easily program your nodes and add them to your own IoT application/services.
I think Qualcomm has a winner on their hands here if they can encompass all their offerings within the Arduino Software Ecosystem so any hobbyist/maker/developer/professional can easily develop applications/systems.
I think these things are entirely reasonable for a beginner to learn about. It teaches you about the machine, about the very real cost of a UART write. That saves you from inevitably spending hours or days figuring out that too many printf calls are what's making your program slow.
A beginner should be introduced to the processor, not C++ or python abstractions. Those abstractions are good and useful in the general sense, but you really should be aware of what your abstractions actually do to the physical processor.
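To put a number on the "cost of a UART write" point above (back-of-the-envelope, with arbitrary baud rates and message length):

    #include <stdio.h>

    /* At 8N1 every byte is 10 bits on the wire, so an unbuffered, blocking
     * print of n characters stalls the CPU for roughly n * 10 / baud seconds. */
    int main(void) {
        const double bauds[] = { 9600, 115200 };   /* example baud rates */
        const int chars = 64;                      /* e.g. one debug printf per loop */
        for (int i = 0; i < 2; i++)
            printf("%d chars at %6.0f baud ~ %6.2f ms blocked\n",
                   chars, bauds[i], chars * 10.0 / bauds[i] * 1000.0);
        return 0;
    }

A 64-character debug line at 9600 baud blocks for roughly 67 ms if the write is unbuffered, which is an eternity inside a control loop.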
> There's still relevance in making it stupidly easy to make an LED blink and make basic apps on circuit boards. Education + weekend hardware hackers might look for something different in a framework than a professional.
This group has been moving to CircuitPython, which is much less performant but even easier to use. The more serious cross-platform development environments, like Zephyr, have also become much better.