While I agree with you, I find that sometimes the “experience” can improve.
The most common AV1 “artifact” is a slight blurring, for example, while a common H.265 artifact is “blockiness”. I have re-encoded H.265 to AV1 and not only gotten smaller files that play back better on low-end hardware, but also less blockiness while still looking high-resolution with great colour overall.
I always encode 10-bit colour and fast-decode when re-encoding to AV1, even if coming from an 8-bit original.
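For anyone curious what that looks like in practice, here is a minimal sketch of an ffmpeg invocation driven from Python, assuming the libsvtav1 encoder is available; the filenames and the crf/preset values are placeholders, not the exact settings used above.

```python
import subprocess

# Sketch only: re-encode an H.265 source to 10-bit AV1 with SVT-AV1's
# fast-decode tuning. "input.mkv"/"output.mkv" and the crf/preset numbers
# are illustrative placeholders.
cmd = [
    "ffmpeg", "-i", "input.mkv",
    "-c:v", "libsvtav1",
    "-pix_fmt", "yuv420p10le",          # 10-bit pixel format, even from an 8-bit source
    "-crf", "30", "-preset", "6",       # quality/speed trade-off; tune to taste
    "-svtav1-params", "fast-decode=1",  # bias encoder decisions toward cheaper decoding
    "-c:a", "copy",                     # leave the audio stream untouched
    "output.mkv",
]
subprocess.run(cmd, check=True)
```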
But then you look at flashback scenes and wonder where the noise has gone.
A lot of movies have purposeful noise, blurriness, snow, and fake artifacts to represent flashback scenes. One level of compression often keeps them okay-ish (like you can tell side by side that it's different, but only when you know what to look for). But these are the scenes that get especially ruined by two layers of compression.
I have been a big fan of nuclear for decades. But why now?
Solar with battery storage is about to be so inexpensive and rapid to deploy that perhaps 100% of new capacity should be added this way.
Start with the solar arrays and then add the batteries. The arrays add to peak capacity immediately, and batteries will keep improving while you are still deploying the solar.
With batteries, you can use solar power even at night.
Lithium batteries are already cheap enough. Sodium is going to be even cheaper and much safer to boot.
> Solar with battery storage is about to be so inexpensive and rapid to deploy that perhaps 100% of new capacity should be added this way.
Precisely because solar is so inexpensive, the private sector does not need government help. Utilities already add a lot of solar power themselves; see for example [1]: 52% (32.5 GW) solar, 29% (18.2 GW) battery storage, and 12% (7.5 GW) wind for 2025.
I think because of capacity. This race is mainly driven by AI power demand, which is estimated to increase 10x in the next 5 years: currently it's about 5 GW, and by 2030 it is expected to be 50 GW.
Is this taking efficiency gains into account? I would expect a 10x efficiency increase every 3 years given Moore's Law and the trend toward hardware-appropriate algorithms.
There's not a lot of grift in solar and batteries; they're too easy to acquire and deploy. There is practically limitless room for grift with nuclear.
Look no further than Trump's media corporation merging with a "fusion reactor" company. What do they have in common? Absolutely nothing, but it's an excellent conduit for bribes and fraud, and a way for Trump to send our tax dollars directly into his own pocket!
Some in the Free Software community do not believe that making it harder to collaborate will reduce the amount of software created. For them, the software is going to get written either way, and the only question is whether it is “free” or not. And they imagine that permissively licensed code bases get “taken”, and so copyleft licenses result in more code for “the community”.
I happen to believe that barriers to collaboration result in less software for everybody. I look at Clang and GCC and come away thinking that Clang is the better model because it results in more innovation and more software that I can enjoy. Others wonder why I am so naive and say that collaborating on Clang is only for corporate shills and apologists.
You can have whatever opinion you want. I do not care about the politics. I just want more Open Source software. I mean, so do the other guys, I imagine, but they don’t always seem to fact-check their theories. We disagree about which model results in more software I can use.
I am not so much on the “there is no lack of software supply” bandwagon.
I think more software is good and the more software there is, the more good software there will be. At least, big picture.
I am ok with there being a lot of bad software I do not use just like I am ok with companies building products with Open Source. I just want more software I can use. And, if I create Open Source myself, I just want it to get used.
The default should be the device's native blocksize, but some devices misreport it. You also lose performance if you use a larger blocksize than necessary.
If we can, I'd like to get a quirks list in place, but there have been higher priorities.
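As an illustration of what “the device's native blocksize” means here, this is a small sketch (not anything from bcachefs-tools) that reads the logical and physical block sizes the kernel reports for a disk via sysfs; the device name is a hypothetical example.

```python
from pathlib import Path

def reported_block_sizes(disk: str) -> tuple[int, int]:
    """Read the kernel-reported logical and physical block sizes for a disk
    (e.g. "nvme0n1") from sysfs. A misreporting drive is wrong right here,
    which is what a quirks list would have to override."""
    q = Path("/sys/block") / disk / "queue"
    logical = int((q / "logical_block_size").read_text())
    physical = int((q / "physical_block_size").read_text())
    return logical, physical

if __name__ == "__main__":
    lbs, pbs = reported_block_sizes("nvme0n1")  # hypothetical device name
    print(f"logical={lbs} physical={pbs}")
```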
Does each of the other filesystems have its own quirks list? That seems suboptimal. Oh, I guess it's because it lives in the userspace mkfs tool of each, not in the kernel.
Still changing the on-disk format as required, but we're at the point now where the end-user impact should be negligible, and we aren't doing big changes.
Just after reconcile, I landed a patch series to automatically run recovery passes in the background if they (and all dependents) can be run online; this allows the 1.33 upgrade to run in the background.
And with DKMS, users aren't stuck running old versions (effectively a forced downgrade) if they have to boot into an old kernel. That was a big support issue in the past: users would have to run old, unsupported versions because of other kernel bugs (amdgpu being the most common offender).
bcachefs was always a module. You don’t want it in your kernel if you are not using it. The difference is that it used to ship in the mainline source tree, so the module was already built and sitting on your drive.
If you build bcachefs as a module yourself (via DKMS or directly), it works the same as if you got it with your distro.
If you use bcachefs as your root filesystem, the danger is booting a kernel that lacks the module.
I hate that bcachefs is not in the kernel, and my primary distro does not use DKMS. But, if you can get a module built, there is no loss of functionality or performance.
There is freedom to and freedom from as they say in The Handmaid’s Tale.