On mobile it took me three tries to find how to get from "Dungeon Crawler" to "Books Like Dungeon Crawler Carl" - you might consider surfacing those lists outside the "what is this book about" collapsible.
I can't speak to the rest of the text or the laptops themselves, but as someone who works with color reproducibility in video and print, those photos comparing colors of two different screens are worse than useless.
Uncalibrated screens photographed at different angles in different lighting conditions are not a valid basis for comparison. If you want properly calibrated displays, you need to purchase hardware (datacolor makes one such device) and calibrate them.
Even "factory-calibrated" monitors will benefit from this, because the quality of that calibration varies widely and your color reproduction is going to vary based on ambient lighting conditions etc.
The photos are just meant to illustrate the difference to the reader, not to be anything scientific. Of course manual calibration is ideal, but having a somewhat sensible default calibration isn't much to ask for and is in fact something many other laptops do just fine.
Problem is, display profile support on Wayland has been spotty at best until recently - and any good display panel should offer multiple accurate calibration targets.
My factory-seconds F13 (using 11th-gen Intel, still the best in terms of power savings) shipped with the older glossy display, which had a known LUT issue at lower brightness settings (disclosed up front, hence the cheaper price). After a couple of calibration rounds, it is spot-on and my go-to PC laptop.
Decent keyboard, too.
Of course, things are often more expensive in Europe (compared to the US) for zero good reason, so the F16 will always be at a proportional disadvantage compared to the F13. You may find that a much better fit.
I am aware that most people don't have any idea of what "display calibration" is actually about (which is primarily about display profiling), but the observation that "The colors of the display are overly saturated, with reds in particular looking more intense than they should" seems to me to be a fundamental misunderstanding of what is happening here.
The framework 16 has a screen that is more capable of displaying reds than either of your two comparison screens (X1 Carbon 2019 seems to have a sub-sRGB gamut, while the Eizo CS2740 seems to be designed to match AdobeRGB [which has a red primary that matches sRGB]).
This is by design, as framework claims 100% DCI-P3 gamut coverage (which has a more saturated red primary than sRGB/AdobeRGB).
In terms of red saturation, the Framework panel is literally demonstrating superiority over the other two by showing that it can display more colours, yet it is being framed negatively here as something that is "over saturated".
The responsibility for dictating how much of the display's capabilities (i.e. red saturation) to use lies squarely with the software (and its associated colour-management system), which requires a display profile (ICC) that accurately models the display's capabilities (profiling) and thus allows colour-managed applications to appropriately scale their source colourspace values into the target display colourspace values. These display profiles are generated via colorimeters or spectrophotometers using specialized software.
Once an appropriate profile is loaded (for each screen), the output image should look identical on all screens that are capable of displaying the colours in the image (e.g. in an sRGB case, all three screens should show the same image, save for maybe the X1 Carbon being slightly desaturated).
Correspondingly, attempting to display an image with a DCI-P3 space (that fully utilizes that space) will cause undersaturation on both your X1 Carbon and the Eizo CS2740 (i.e. the ability to show more red saturation is strictly a plus).
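To make the mechanism concrete, here's a rough sketch of what a colour-managed application effectively does, using Pillow's ImageCms bindings to LittleCMS (the image and the path to the panel's ICC profile are placeholders, not anything Framework ships):

```python
from PIL import Image, ImageCms

# Source image authored in sRGB (placeholder filename).
img = Image.open("photo.jpg")

srgb = ImageCms.createProfile("sRGB")
# Profile produced by profiling this particular panel with a colorimeter;
# the path is a placeholder.
display = ImageCms.getOpenProfile("/usr/share/color/icc/my-panel.icc")

# Map sRGB code values into this display's actual gamut. Skipping this step
# drives the panel's wider red primary directly with sRGB values, which is
# exactly the "oversaturated" look being described.
converted = ImageCms.profileToProfile(
    img, srgb, display,
    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC,
)
converted.show()
```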
If your critique lies in the fact that framework laptop does not ship an appropriate ICC profile for their monitor, then fair enough.
But I don't agree with the statement that "somewhat sensible default calibration isn't much to ask for and is in fact something many other laptops do just fine."
I don't believe many laptop manufacturers ship reasonable ICC profiles at all; they mostly rely either on the consumer liking the oversaturated look or on their panels only being rated for roughly sRGB, where implicit colour management (i.e. doing absolutely no colour management and having it work merely because the source and the target are the same space) is good enough.
It is entirely possible that you do understand all of this and I'm making assumptions about potential misconceptions where none exist.
However, you seem to have alluded to using Firefox as your main browser (which is not colour-managed by default) and to your Eizo CS2740 being "properly calibrated (at the hardware level at least)," which to me suggests that you might be susceptible to the misconception that I have pointed out.
If this is not the case, then I deeply apologize.
Thank you for mansplaining what color calibration and accuracy mean, but I'm well aware of how it works due to my background in photography and having spent plenty of time calibrating displays in the past.
In particular, there's a big difference between "can show more colors" and "shows the same colors but overly saturated".
The Framework 16 suffers from this by default, which is quite obvious when you compare it using photos for which you know what the actual colors should look like - something I did do but didn't cover in the article.
Whether this is because the display operates in a different colorspace by default (e.g. AdobeRGB) or not I don't know, but there's at least no option for it anywhere in the BIOS that I could find.
Claiming the Framework is superior to a monitor literally meant for color grading and photography is laughable to be honest, and seems to suggest you interpret display quality as "more intense is better".
I can add anecdata for the factory profile being very over-red - it's quite obvious out of the box. Not as bad as many Samsung OLED phones you see in stores (typically set to some crazy "enhanced" mode), but it's certainly closer to them than a calibrated screen.
One thing that has bugged me for a while though: why isn't it possible to make my own color profile by hand? Everything seems to imply that you can only get a profile definition file from a calibration device, and I don't have one... but I can eyeball it significantly better than the default profile. Is there some software out there that will let me adjust my curves, like the OS already does with night-mode color balance changes?
Really cool project! I like the twelve color palette the author presents, and the grayscales, but I'd love to see more about the choice to use twelve bits to encode it. Presumably enough of the rest of the possibility space is needed to justify writing a custom encoding. Or maybe it was done because custom color encodings are cool, which they definitely are.
The palette is so pretty, I wonder how the whole LCH color space quantized down to 4096 colors looks. I find limited-bit-depth color spaces fascinating to look at; there are so many choices about how to represent color that they can look wildly different.
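Out of curiosity I sketched it: plain Python, D65 white point, sampling a 16x16x16 grid over L, C and h (4096 entries) and keeping only the ones that land inside sRGB. The grid spacing and the C range of 0-100 are arbitrary choices on my part, so treat it as a rough illustration rather than anything principled:

```python
import math

def lch_to_srgb(L, C, h_deg):
    """Convert an LCh(ab) colour to 8-bit sRGB; return None if it falls outside the gamut."""
    # LCh -> Lab
    h = math.radians(h_deg)
    a, b = C * math.cos(h), C * math.sin(h)
    # Lab -> XYZ (D65 reference white)
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    def f_inv(t):
        return t ** 3 if t ** 3 > 0.008856 else (t - 16 / 116) / 7.787
    X, Y, Z = 0.95047 * f_inv(fx), 1.0 * f_inv(fy), 1.08883 * f_inv(fz)
    # XYZ -> linear sRGB
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    bl = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    def encode(c):
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
    rgb = tuple(encode(c) for c in (r, g, bl))
    if any(c < 0 or c > 1 for c in rgb):
        return None  # out of gamut on an sRGB display
    return tuple(round(c * 255) for c in rgb)

# 16 steps per axis = 4096 candidate colours; C capped at 100 arbitrarily.
palette = []
for i in range(16):
    for j in range(16):
        for k in range(16):
            rgb = lch_to_srgb(i * 100 / 15, j * 100 / 15, k * 360 / 16)
            if rgb is not None:
                palette.append(rgb)
print(f"{len(palette)} of 4096 grid points are displayable in sRGB")
```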
Yea, I too was not expecting a list of past benchmarks. If not the aforementioned actual human deaths, I had expected either a list of companies whose pivot to AI/LLMs led to their downfall (but I guess we're going to need to wait a year or two for that) or a list of industries (such as audio transcription) that are being killed by AI as we speak.
We really do live in interesting times. Usually I feel pretty confident about predicting how a trend will continue, but as it is the only prediction I can make with confidence for this latest AI research is that it is and will be used by militaries to kill a lot of people. Oh, hey, that's another thing this article could have listed!
Outside of that, all bets are open. Possible wagers include: "Turns out to be mostly useful in specific niche applications and only seemingly useful anywhere else", "Extremely useful for businesses looking to offset responsibility for unpopular decisions", "Ushers in an end to work and a golden age for all mankind", "Ushers in an end to work and a dark age for most of the world", "Combines with profit motives to damage all art, culture, and community", etc etc.
I know many folk have strong opinions one way or the other, but I think it's literally anyone's game at this point, though I will say I'm not leaning optimistic.
As much as I'd love a long-term solution to dental cavities, I'm leery of any treatment using silver nanoparticles, which can cross the blood brain barrier and accumulate in the brain, where they've been shown (in mice and in human models) to contribute to neurodegenerative diseases.
Absolutely. The minimal mesh is just 3 triangles rendering a 3D texture that contains depth information that gets raytraced in a shader. Or you could send a single point vertex over and have a geometry shader generate the geometry vertices for you.
Or you could reduce the texture complexity to a 2D texture without raytracing if you defined the coin mathematically in the shader. Heck, you could get rid of the texture entirely and just do everything with math in the shader.
3D graphics used to require rigid hardware distinctions between vertices and textures and etc, but in modern pipelines it's pretty much all data and the programmer decides what to do with it.
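To show what "just math" looks like in practice (in plain Python rather than a fragment shader, purely as a toy sketch - camera distance, tilt, and the character ramp are all arbitrary): the coin is a signed distance function for a capped cylinder, and each "pixel" sphere-traces a ray against it, which is the same per-pixel logic a shader would run.

```python
import math

def sd_coin(x, y, z, radius=1.0, half_thickness=0.15):
    """Signed distance from (x, y, z) to a coin: a capped cylinder on the y axis."""
    dr = math.hypot(x, z) - radius      # distance outside the rim
    dh = abs(y) - half_thickness        # distance outside the flat faces
    outside = math.hypot(max(dr, 0.0), max(dh, 0.0))
    inside = min(max(dr, dh), 0.0)
    return outside + inside

def shade(px, py, width=60, height=30):
    """Sphere-trace one ray (one 'pixel') and return an ASCII shade."""
    ox, oy, oz = 0.0, 0.0, -3.0                      # camera position
    dx = (px / width - 0.5) * 2.0
    dy = (0.5 - py / height) * 2.0
    dz = 1.5
    n = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / n, dy / n, dz / n              # ray direction
    c, s = math.cos(1.0), math.sin(1.0)              # tilt the coin ~57 degrees
    t = 0.0
    for _ in range(64):
        x, y, z = ox + dx * t, oy + dy * t, oz + dz * t
        d = sd_coin(x, y * c - z * s, y * s + z * c) # evaluate in the coin's frame
        if d < 1e-3:
            return "@#+-."[min(int(t - 1.5), 4)]     # crude depth-based shading
        t += d                                       # safe step: nothing is closer than d
        if t > 10.0:
            break
    return " "                                       # ray missed the coin

for row in range(30):
    print("".join(shade(col, row) for col in range(60)))
```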
I was going to argue that remembering "CorrectBatteryHorseStaple.com" is leagues easier than remembering a 128-bit number, but then I did the math: encoding 128 bits using the 20k most common English words takes a 9-word phrase, which pretty much solidifies the point you've made.
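The back-of-the-envelope version, in case anyone wants to check my numbers:

```python
import math

wordlist_size = 20_000
bits_per_word = math.log2(wordlist_size)       # ~14.29 bits of entropy per word
words_needed = math.ceil(128 / bits_per_word)  # 9 words to cover 128 bits
print(f"{bits_per_word:.2f} bits/word -> {words_needed} words for 128 bits")
```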
I mean, people can recite poetry far longer, even nonsense poetry ("Twas brillig, and the slithy toves / did gyre and gimble in the wabe") so it's not impossible. But generally speaking, yeah. Nobody wants to navigate to that bit of jabberwocky instead of google.com
It does kind of make me long for the days of the HOSTS file, where anyone might alias a hard-to-remember address as something locally meaningful.
This is true, but LastPass proved that by the time the worst case occurs it's already too late. A security breach means, at minimum, redoing all your passwords, and these sites are a very compelling target.
OTOH I wouldn't want to self-host because I know I'm not going to spend the same amount of time and effort a full security staff would, even if my self-hosted box would make a much less attractive target.
You have security options when self-hosting that a big host does not.
Want to just encrypt everything on a node with no network access? Sure. That doesn't work for a "real" host but that is fine if you mostly use your phone and need to just occasionally sync your passwords back at home.
You don't need the things that make hosting hard. You can have a few hours of downtime. Your password vault is gigabytes, not hundreds of terabytes. You don't need armed guards for your backups; just pass them (encrypted) to a friend with a safe.
Does Bitwarden work if the server is offline? I know the client works without an internet connection, but there was an issue during a server outage earlier last year: https://news.ycombinator.com/item?id=32782386
I self-host Vaultwarden. I'm sure someone will be happy to explain to me how foolish my implementation is, but I'm comfortable with it from a security perspective.
I run it as a Docker instance on my home Synology NAS. This turned out to be pretty easy to do. The only part that was a slight hassle was buying a cert, creating an FQDN and making the DNS entries to get an SSL connection to the NAS. Also, I wish updating to a new version of Vaultwarden was a little more straightforward.
When I am at home, my devices with Bitwarden all sync to the Vaultwarden instance on the NAS without issue.
My router is a Ubiquiti UDMPro. I have an L2TP VPN configured with a shared-secret and user passwords that are ridiculously long and complex. When I'm out and about and need to sync with the NAS from my laptop or mobile device, I activate the VPN and do the sync.
My Ubiquiti account does have 2FA.
I implemented all this when 1Password informed me that in order to continue using their service, my vault would have to be hosted on their server and I would have to pay them every month for the privilege. That was a nonstarter.
I'm sure my router and NAS are not impenetrable, but I don't feel like I'm low-hanging fruit either. And if someone went to the trouble of breaking in, their reward would be one guy's vault and not the vaults of millions of customers. I'm hoping that makes me a less attractive target. Of course the vault itself has a very long and complex password as well.
This is working out quite well for me so far, knock on wood.
I have a very similar self-hosted Vaultwarden set up, for the same reasons.
My other concern, which may be unfounded, is that Vaultwarden [1], an unofficial Rust rewrite of the server, may be developed to different or lesser security standards than the official implementation. However, I don't have any real reason to suspect this.
Agreed. I know I'm taking it on faith that this implementation is robust and secure when it might not be. However, I feel okay about it knowing that it would be very difficult for anyone other than me to access this Docker instance in the first place. And if I'm outside my home network, I'm interacting with it via the VPN.
> Note that Synology DSM has built-in Let's Encrypt support
Yes... I tried going down that route. In my scenario, I'm accessing the NAS via its internal IP, which is in an RFC1918 subnet. Let's Encrypt insists that you use a globally routable IP. If I used the public IP issued to me by my ISP, then I would have to map a port on my router and expose the NAS directly to the Internet. No way am I doing that.
I bought a cert through Namecheap and got 5 years for $29.95. That seemed quite reasonable to me. There was no problem getting it to work when I mapped the hostname to the NAS's internal IP. The only downside is that I have to go through a renewal process every year and install the updated cert on the NAS. Not a huge deal; just one more thing I have to do.
That all makes sense. Wanted to point out to others that there's potentially less of a hassle to set this up (if you're fine with opening port 80, as has been pointed out to me).
Unfortunately, HTTP challenge only. I.e. you have to open port 80 to your Synology, which is handled by the same nginx instance as all the other services on the device.
> A security breach means, at minimum, redoing all your passwords
Not necessarily. I wouldn't have felt compelled to redo all my passwords if 1Password's encrypted vaults were stolen the way LastPass's were, given that 1P's vaults are uncrackable with brute force but LastPass's critically depend on the entropy of the master password. This was discussed recently:
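A toy sketch of the difference (this is not 1Password's actual 2SKD algorithm - just an illustration of mixing a random, device-stored secret into key derivation alongside the master password; iteration counts and sizes are made up):

```python
import hashlib
import os

def derive_vault_key(master_password: bytes, secret_key: bytes, salt: bytes) -> bytes:
    # Stretch the (possibly low-entropy) master password...
    stretched = hashlib.pbkdf2_hmac("sha256", master_password, salt, 600_000)
    # ...then mix in a random 128-bit secret that only ever lives on the user's
    # devices. An attacker holding just the stolen vault and salt now has to
    # guess this too, so brute-forcing the password alone gets them nowhere.
    return hashlib.pbkdf2_hmac("sha256", stretched + secret_key, salt, 1)

secret_key = os.urandom(16)   # generated at enrollment, stored locally, never uploaded
salt = os.urandom(16)
vault_key = derive_vault_key(b"hunter2", secret_key, salt)
print(vault_key.hex())
```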
I'm actually really excited for Lenovo's upcoming T1 wearable display. It's a monitor in glasses form, not VR, but I realized that the appeal of VR desktop environments for me is having a large screen, not the whole VR shtick.
Couple that display with output from my phone running JupyterLab, and I'll have a desktop-like development environment that fits in my pocket. Perfect for travel and long backpacking trips.
Before Creative Cloud became subscription only, you could budget the software as a one-time fee; $2500 for their entire suite of software, $600 or so for just Photoshop, iirc.
With the subscriptions, you're paying $600+ a year, every year, in perpetuity, and you're subjected to invasive DRM that'll do shit like delete fonts you haven't used recently.
Before subscriptions, upgrades were often cheaper than buying new, and you might only buy an upgrade every four or five years (the core requirements of a graphics app don't change majorly, so new versions are more about stability, optimizations, and the occasional niche feature).
... thankfully, there's enough competition in this space that I've managed to completely remove Adobe from my toolset, but it still sucks that I can't even activate my 2013 version of Adobe's Creative Suite (CS6) because the licensing servers no longer function.
Sounds like you're saying graphic designers no longer need Adobe, and that if they do need it, they probably gain a material benefit from it. I never suggested Adobe products were cheap, or a good deal.