
> given that we already had a working GUI. (Maybe that was the intention.)

Neither X11 nor Wayland provides a GUI. Your GUI is provided by GTK or Qt or Tcl/Tk or whatever. X11 had primitive rendering instructions that allowed those GUIs to delegate drawing to a central system service, but very few things do that anymore anyway. Meaning X11 is already just a dumb compositor in practice, except it's badly designed for being a dumb compositor because that wasn't its original purpose. As such, Wayland is really just aligning the protocol with what clients actually want & do.


> Most systems have a way to mostly override the compositor for fullscreen windows and for games

No, they don't. I don't think Wayland ever supported exclusive fullscreen, macOS doesn't, and Windows killed it a while back as well (in a Windows 10 update, like 5-ish years ago?).

Jitter is a non-issue for things you want vsync'd (like every UI), and for games the modern solution is G-Sync/FreeSync, which is significantly better than tearing.


> I don't think Wayland ever supported

Isn't that true for even the most basic features you expect from a windowing system? X11 may have come with everything including the kitchen sink, but Wayland drops all that fun on the implementations.

GNOME has done unredirect on Wayland since 2019: https://www.reddit.com/r/linux/comments/g2g99z/wayland_surfa...

> Windows killed it

They replaced it with "Fullscreen Optimisations", which is mostly the same but more flexible, as it leaves detection of fullscreen-exclusive windows to the window manager.

https://devblogs.microsoft.com/directx/demystifying-full-scr...

As far as I can find, the update removed the option to turn this off.


If you forget to handle a C++ exception you get a clean crash. If you forget to handle a C error return you get undefined behavior and probably an exploit.

Exceptions are more robust, not less.


Yeap. Forgetting to propagate or handle an error provided in a return value is very, very easy. If you fail to handle an exception, you halt.

For what it's worth, C++17 added [[nodiscard]] to address this issue.

> If you forget to handle a C++ exception you get a clean crash

So clean that there's no stack trace information to go with it, making the exception postmortem damn near useless.


You should compare exceptions to Result-style tagged unions in a language with exhaustiveness checks, like Rust. Not to return codes in C, lmao.

Everyone (except Go devs) knows that those are the worst. Exceptions are better, but still less reliable than Result.

https://home.expurple.me/posts/rust-solves-the-issues-with-e...


Rust is better here (by a lot), but you can still ignore the return value. Doing so is just a warning, and warnings are easily ignored or disabled. It also litters your code with branches, which isn't ideal for the I-cache or for performance.
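To illustrate the "just a warning" part, a minimal sketch (save_config is a made-up function; Result is already #[must_use] in std):

    fn save_config() -> Result<(), std::io::Error> {
        Ok(()) // pretend this writes to disk
    }

    fn main() {
        save_config();         // compiles; only an `unused_must_use` warning
        let _ = save_config(); // warning silenced, error silently dropped
    }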

The ultimate ideal for rare errors is almost certainly some form of exception system, but I don't think any language has quite perfected it.


> you can still ignore the return value

Only when you don't need the Ok value from the Result (in other words, only when you have Result<(), E>). You can't get any other Ok(T) out of thin air in the Err case. You must handle (exclude) the Err case in order to unwrap the T and proceed with it.
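A minimal sketch of that distinction, with a made-up read_port function:

    fn read_port() -> Result<u16, std::num::ParseIntError> {
        "8080".parse()
    }

    fn main() {
        // The only way to get at the u16 is to deal with the Err case first.
        let port = match read_port() {
            Ok(p) => p,
            Err(e) => {
                eprintln!("bad port: {e}");
                return;
            }
        };
        println!("listening on {port}");
    }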

> It also litters your code with branches, so not ideal for either I-cache or performance.

That's simply an implementation/ABI issue. See https://github.com/iex-rs/iex/

Language semantics-wise, Result and `?` are superior to automatically propagated exceptions.
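For what it's worth, `?` is roughly sugar for an explicit match with an early return, which is also where the visible branches come from. A simplified sketch (the real desugaring goes through the Try trait):

    fn parse_pair(a: &str, b: &str) -> Result<(i32, i32), std::num::ParseIntError> {
        // each `?` is roughly `match ... { Ok(v) => v, Err(e) => return Err(e.into()) }`
        let x = a.parse::<i32>()?;
        let y = b.parse::<i32>()?;
        Ok((x, y))
    }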


> like Rust

Where people use things like anyhow.[0]

[0] https://docs.rs/anyhow/latest/anyhow/


Anyhow erases the type of the error, but still indicates the possibility of some error and forces you to handle it. Functionality-wise, it's very similar to `throws Exception` in Java. Read my post.
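A minimal sketch of that parallel (load_config is a made-up function):

    use anyhow::Context;

    // The signature only promises "this can fail with some error": the concrete
    // error type is erased, much like `throws Exception`, but the caller still
    // gets a Result it has to do something with.
    fn load_config(path: &str) -> anyhow::Result<String> {
        let text = std::fs::read_to_string(path)
            .with_context(|| format!("failed to read {path}"))?;
        Ok(text)
    }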

As a matter of fact, I did, when it appeared on HN.

>forces you to handle it.

By writing `?`. And we get poor man's exceptions.


Poor man's checked exceptions. That's important. From the `?` you always see which functions can fail and cause an early return. You can confidently refactor and use local reasoning based on the function signature. The compiler catches your mistakes when you call a fallible function from a supposedly infallible function, and so on. Unchecked exceptions don't give you any of that. Java's checked exceptions get close and you can use `throws Exception` very similarly to `anyhow::Result`. But Java doesn't allow you to be generic over checked exceptions (as discussed in the post). This is a big hurdle that makes Result superior.

>Poor man's checked exceptions.

No, it's not quite the same. Checked exceptions force you to deal with them one way or another. When you use `?` and `anyhow`, you just mark a call to a fallible function as such (which is a plus, but it's the only plus) and don't think even for a second about handling it.


Checked exceptions don't force you to catch them on every level. You can mark the caller as `throws Exception` just like you can mark the caller as returning `anyhow::Result`. There is no difference in this regard.

If anything, `?` is better for actual "handling". It's explicit and can be questioned in a code review, while checked exceptions auto-propagate quietly; you don't see where it happens or where a local `catch` would be more appropriate. See the "Can you guess" section of the post. It discusses this.


> But how is it different from other tools like doing it manually with photoshop?

Last I checked, Photoshop doesn't have an "undress this person" button? "A person could do a bad thing at a very low rate, so what's wrong with automating it so that bad things can be done millions of times faster?" Like, seriously? Is that a real question?

But also, I don't get what your argument is anyway. A person doing it manually still typically runs into CSAM or revenge porn laws or other similar harassment issues, all of which should be leveled directly at these AI tools, particularly those that lack even an attempt at safeguards.


RGB stripe isn't really better; it's just what ClearType happens to understand. A lot of these OLED developments came from either TV or mobile, neither of which had legacy subpixel hinting to deal with. So the subpixel layouts were optimized both for manufacturing and for human perception. Humans do not perceive all colors equally; we are much more sensitive to green than to blue, for example. Since OLED is emissive, it needs to balance how bright the emitted color is with how sensitive human wetware is to it.


> A lot of these OLED developments came from either TV or mobile

I remember getting one of the early Samsung OLED PenTile displays, and despite it having a higher resolution on paper than the LCD phone it replaced, the fuzzy, fringey text made it far less readable in practice. There were other issues with that phone, so I was happy to resell it and go back to my previous one.


PenTile typically omits subpixels to achieve the resolution, so yes, if you have an LCD and an AMOLED with the exact same resolution and the AMOLED is PenTile, it won't be as sharp because it literally has fewer subpixels. But that's been rapidly outpaced by modern PenTile AMOLEDs simply having a pixel density that vastly exceeds nearly any LCD (at least on mobile).

There are RGB-subpixel AMOLEDs as well (such as on the Nintendo Switch OLED), even though they aren't necessarily RGB stripe. As in, just because it's not RGB stripe doesn't mean it's PenTile; there are other arrangements. Those other arrangements are, for example, the ones complained about on current desktop OLED monitors like the one in the article. It's not PenTile causing the problems, since it's not PenTile at all.


The article shows a Mac, so it's not just ClearType...

PenTile for example (as another commenter pointed out) was woeful with text, and made things look fuzzy.

I'm not a fan of ClearType, but even on Linux, OLED text rendering just isn't as good in my experience (at normal desktop monitor DPI).

Perhaps it's down to the algorithms most OSes use instead of ClearType, but why hasn't it been solved by this point even outside Windows?


iPhones all use PenTile and nobody complains about fuzzy text on them. Early generations of PenTile weren't that great, but modern ones look fantastic at basically everything. See also how everyone considers the iPad Pro to have probably the best display available at any price point - and it's not an RGB stripe, either.


> and it's not an RGB stripe, either.

The PPI difference matters, though (and I think that's why my Nokia N9's PenTile OLED looked rough). Desktop displays simply aren't at the same PPI/resolution density, which is why they're moving to this new technology.

If it didn't matter, I highly doubt they'd spend the huge money to develop it.


I dunno, my phone's OLED (OnePlus 5T) looks perfectly fine even with small fonts...


The author is clearly aware of `errors.Is`, as they use it in the snippet they complain about. The problem is that Go's errors are not exhaustive; the equivalent of ENOTDIR does not exist, so you can't `errors.Is` it. And while Stat's documentation does tell you what specific error type it'll be, that error type also doesn't carry the error code. Just more strings!

Is this a problem with Go the language or Go the standard library or Go the community as a whole? Hard to say. But if the standard library uses errors badly, it does provide rather compelling evidence that the language design around it wasn't that great.


> the equivalent to ENOTDIR does not exist

https://pkg.go.dev/syscall#ENOTDIR


When things like this (or Vello or piet-gpu, etc.) talk about "vector graphics on GPU", they are almost exclusively talking about a full, general solution: one that handles fonts and SVGs and arbitrarily complex paths with strokes and fills and the whole shebang.

These are great goals, but also largely inconsequential for nearly all UI designs. The majority of systems today (like Skia) are hybrids: simple shapes (e.g., round rects) get analytic shaders on the GPU, while complex paths (like fonts) are just done on the CPU once and cached on the GPU in a texture. It's a very robust, fast approach to the holistic problem, at the cost of not being as "clean" a solution as a pure GPU renderer would be.
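To make "analytic shaders" a bit more concrete: for something like a round rect, per-pixel coverage can be computed directly from a signed-distance formula, with no path data at all. A minimal sketch of the idea written as plain Rust (not Skia's actual code; in a real renderer this runs per fragment on the GPU):

    // Signed distance from point (px, py) to a rounded rect centered at the
    // origin, with half-extents (half_w, half_h) and corner radius `radius`.
    fn rounded_rect_sdf(px: f32, py: f32, half_w: f32, half_h: f32, radius: f32) -> f32 {
        let qx = px.abs() - half_w + radius;
        let qy = py.abs() - half_h + radius;
        let outside = qx.max(0.0).hypot(qy.max(0.0));
        let inside = qx.max(qy).min(0.0);
        outside + inside - radius
    }

    // Turn distance into antialiased coverage (~1px soft edge), then shade.
    fn coverage(distance: f32) -> f32 {
        (0.5 - distance).clamp(0.0, 1.0)
    }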


> Why any technical person would want a PC that explicitly can't run Linux I'll never know.

Huh? https://www.phoronix.com/review/snapdragon-x1e-september


More recent revisit: https://www.phoronix.com/review/snapdragon-x-elite-linux-eoy...

TL;DR: It runs, but not well, and performance has regressed since the last published benchmark.


Tuxedo is a German company that has so far been relabeling Clevo laptops, which work pretty well out of the box on Linux (I might say perfectly in some cases). They have done ZILCH, NADA, absolutely nothing for Linux besides promoting it as a brand. So now they took a Snapdragon laptop, installed Linux, and are disappointed by the performance... Great test, tremendous work! Asahi Linux showed that if you put in the work you can have awesome performance.


Yes, but having to reverse engineer an entire platform from scratch is a big ask, and even with Asahi it's taken many years and still isn't up to snuff. That's not a knock on the team; they're truly miracle workers considering what they've been given to work with.

But it's been the same story with ARM on Windows now for at least a decade. The manufacturers just... do not give a single fuck. ARM is not comparable to x86 and never will be if ARM manufacturers continue to sabotage their own platform. It's not just Linux, either: these things are barely supported on Windows, run a fraction of the software, and don't run for very long. Ask anyone burned by ARM-on-Windows attempts 1 through 100.


> if you put in the work you can have awesome performance.

Then why would I pay money for a Qualcomm device just for more suffering? Unless I personally like tinkering or I am contributing to an open source project specifically for this, there is no way I would purchase a Qualcomm PC.

Which is what the original comment is about.


The original comment was "explicitly can't run Linux", which is explicitly not true. Not "it's not fully baked" or "it's not good", but a categorically, unambiguously false claim of "explicitly can't run Linux", as if it were somehow firmware-banned from doing so.


If you want to split hairs, sure. It does not help anyone who is considering buying a laptop.


I'm open to being wrong.

If someone wants to provide a link to a Linux ISO that works with the Snapdragon Plus laptops (these are cheaper, but the experimental Ubuntu ISO is only for the Elites), I'll go buy a Snapdragon Plus laptop next month. This would be awesome if the support were there.


The Snapdragon Dev Kit is canceled. Snapdragon as a whole sure as hell isn't canceled, and neither is Windows on Snapdragon. There are loads of Windows laptops using Snapdragon, with more continuing to release.


Saying it requires or uses AI seems to be just... modern marketing bullshit. But the technique itself seems sound enough. NASA studied the phenomenon already: https://ntrs.nasa.gov/api/citations/20160007348/downloads/20...


But if you actually, you know, read that NASA study, it mentions that the maximum practical speed (from theory) for “boomless” flights is less than Mach 1.3, and they only demonstrated “boomless” flights at Mach 1.1.

That would result in far, far less time savings than what is posited by the commentary on HN. Compared to a Cessna Citation X, for example, that would reduce time in the air by just 15%.
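Rough arithmetic behind that figure, assuming cruise speed dominates airborne time and a Citation X cruising somewhere around Mach 0.92-0.95:

    0.92 / 1.1 ≈ 0.84    and    0.95 / 1.1 ≈ 0.86

i.e. roughly 14-16% less time at cruise.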

Total travel time savings would be even less… so a private Citation X at M0.95 would still beat a commercial M1.1 flight in door-to-door travel time.


Right, but Mach 1.0-1.3 is all that "Boom Supersonic" is claiming to hit, so the paper is in line with the marketing pitch. The speed advantage of "up to Mach 1.3" might not be worthwhile, no, but that's orthogonal to the claim of "boomless" supersonic.

Now the article pulls Mach 1.7 out of seemingly nowhere, and I have no idea where that came from or how it's justified. But the company isn't making that claim as far as I can tell (https://boomsupersonic.com/boomless-cruise; the "FAQ" section even specifically says: "Boomless Cruise is possible at speeds up to Mach 1.3, with typical speed between Mach 1.1 and 1.2.").

