Per-display DPI settings. No snooping on input without permission. Awareness of the lock screen (the compositor can know that the lock screen is active and provide alternate keybindings instead of having to configure the lock application as well). Locking is not blocked by context menus being open.
I ran XMonad for 15 years, but recently switched to river and am loving it.
fwiw, Xorg already had this, since you can set the DPI for each display through RandR/xrandr. In both X11 and Wayland it's up to the toolkit to actually detect the setting and rasterise accordingly.
Wayland actually went backwards in this respect by using integer scales (e.g. 1, 2, 3) instead of fine-grained DPIs (e.g. 96, 192, 288): a scale of 1.5 would result in downscale blur (the toolkit sees the scale as 2, then the compositor scales it down to 75%), whereas in Xorg you could just set the DPI to 144 and the toolkit could theoretically render at the correct resolution. As far as I know Qt was the only toolkit to actually do this automatically, but that's not X11's fault.
Wayland has since fixed this in the form of "fractional scaling" [1], but here's [0] an old thread on HN where I complained about it and provided screenshots of the resulting blur.
[1] Doing some quick searching, it seems like this is still unsupported in Gtk3/Gtk4, maybe planned for Gtk5? Apparently Firefox has only just added support (December 2025), 3 years after the fractional scaling protocol was released. Seems ridiculous to me that Wayland failed to get this right from the start.
These days Xinerama is the only mainstream tool for dual head, but there used to be others. Nvidia TwinView was one. I bought my first dual head box in 1996 with two Matrox Millennium cards (although it mainly ran NT4) and those cards later went into my dual Athlon XP machine. That ran SUSE until Ubuntu came out.
Xinerama isn't a sine qua non. It's just easy so it became ubiquitous. Maybe it's time to replace it.
It's the same on Wayland. The client (usually part of a toolkit like Gtk/Qt) needs to subscribe to notifications [0] from the server so it can decide the raster size of the surface it wants to draw to. Qt does this on X11 by detecting when your window moves to a screen with another DPI and resizing/rescaling.
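For what that subscription looks like in practice, here's a minimal C sketch using libwayland-client, binding wl_output at version 2 (where the scale event lives); error handling and the actual re-rasterisation are omitted:

    /* a minimal sketch, assuming libwayland-client and a compositor that
       advertises wl_output >= 2; cc scale.c -lwayland-client */
    #include <wayland-client.h>
    #include <stdio.h>
    #include <string.h>

    static void geometry(void *data, struct wl_output *out, int32_t x, int32_t y,
                         int32_t phys_w, int32_t phys_h, int32_t subpixel,
                         const char *make, const char *model, int32_t transform) {}
    static void mode(void *data, struct wl_output *out, uint32_t flags,
                     int32_t w, int32_t h, int32_t refresh) {}
    static void done(void *data, struct wl_output *out) {}
    static void scale(void *data, struct wl_output *out, int32_t factor) {
        /* this is where a toolkit would pick the raster size for its surfaces */
        printf("output reports integer scale %d\n", factor);
    }
    static const struct wl_output_listener output_listener = {
        .geometry = geometry, .mode = mode, .done = done, .scale = scale,
    };

    static void global(void *data, struct wl_registry *reg, uint32_t name,
                       const char *iface, uint32_t version) {
        if (strcmp(iface, "wl_output") == 0) {
            struct wl_output *out =
                wl_registry_bind(reg, name, &wl_output_interface, 2);
            wl_output_add_listener(out, &output_listener, NULL);
        }
    }
    static void global_remove(void *data, struct wl_registry *reg, uint32_t name) {}
    static const struct wl_registry_listener registry_listener = {
        .global = global, .global_remove = global_remove,
    };

    int main(void) {
        struct wl_display *dpy = wl_display_connect(NULL);
        if (!dpy) return 1;
        wl_registry_add_listener(wl_display_get_registry(dpy),
                                 &registry_listener, NULL);
        wl_display_roundtrip(dpy);  /* learn about globals */
        wl_display_roundtrip(dpy);  /* receive the wl_output events */
        return 0;
    }

As I understand it, the newer fractional-scaling protocol follows the same listener pattern, just delivering a preferred scale in 120ths instead of an integer factor.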
I guess the "third" program would be something like xrandr, so the Wayland analogue to that would be wlr-randr (for wlroots compositors), or some other DE-specific tool for configuring screen sizes. Again there's no fundamental difference here.
You can do per-display DPI just fine on X11 (through xrandr), it's just the major toolkits don't support it. GTK, for example, reads a single global DPI value from XSETTINGS; there's no reason why it has to be that way.
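To make that concrete: the per-output data a toolkit would need is already in RandR. A minimal sketch in C that derives a DPI for each connected output from its pixel and millimetre sizes (cc dpi.c -lX11 -lXrandr):

    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>
    #include <stdio.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;
        XRRScreenResources *res =
            XRRGetScreenResources(dpy, DefaultRootWindow(dpy));
        for (int i = 0; i < res->noutput; i++) {
            XRROutputInfo *out = XRRGetOutputInfo(dpy, res, res->outputs[i]);
            /* only connected outputs with a physical size and an active CRTC */
            if (out->connection == RR_Connected && out->crtc && out->mm_width) {
                XRRCrtcInfo *crtc = XRRGetCrtcInfo(dpy, res, out->crtc);
                double dpi = crtc->width / (out->mm_width / 25.4);
                printf("%s: %.0f dpi\n", out->name, dpi);
                XRRFreeCrtcInfo(crtc);
            }
            XRRFreeOutputInfo(out);
        }
        XRRFreeScreenResources(res);
        return 0;
    }

Nothing stops a toolkit from doing this per window, the way Qt does; GTK just chose a single global value instead.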
The annoying thing about the other things you mention is that they honestly are not that difficult to fix.
The X server can throw an error (or just silently ignore the request) when one client passes another client's window with button/key events in the mask to XSelectInput(). And the XInput2 bits that allow for receiving all key and button events can be changed to only send events destined for windows of the same client. There: input snooping is fixed.
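To make the hole concrete: today any client can do something like the following sketch (the window ID here is a placeholder; a real snooper would find another client's window by walking the tree with XQueryTree()):

    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;

        /* placeholder ID standing in for some other client's window */
        Window victim = 0x2a00001;

        /* The server accepts this even though we don't own the window;
           the fix proposed above would have it throw an error instead. */
        XSelectInput(dpy, victim, KeyPressMask);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == KeyPress)
                printf("snooped keycode %u\n", ev.xkey.keycode);
        }
    }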
Lock screen awareness can be fixed with new requests/events in the MIT-SCREEN-SAVER extension (or, if that's fraught, a new extension) that allow an app to create a "special" lock-screen window, which the X server will always stack on top and send all events to. (That new functionality should probably allow for child windows and popups for input methods as well.) This is honestly not hard!
And yes, some applications will break when you do this. But I cannot see how that's not significantly better than creating an entirely new display protocol that everyone has to port to.
There are other issues with X11, of course, mainly in the graphics pipeline (e.g. the compositor should really be in the X server), but it's hard to believe these things couldn't be fixed. It feels like no one really wanted to do that: building something new from scratch that (in theory) didn't have all of the mistakes of X11 would be more fun, and more rewarding. And I get that, I really do. But Wayland has created so much work, so many thousands (tens of thousands? hundreds of thousands? million+?) of developer-hours that maybe could have been better spent.
So I think Phoenix is a great idea. It's basically "X12"[0]: removing the old cruft and making breaking changes to fix otherwise-unfixable problems. I imagine most modern, toolkit-using X11 applications would work just fine with it, without modification. Some -- perhaps many -- won't... but that's ok. Run a nested, rootless X11 server inside "X12" if they can't be fixed, or until they're fixed.
[0] Yes, I know that an X12-type thing was considered and rejected (https://www.x.org/wiki/Development/X12/), but I still think it's a better idea, after a decade and a half of Wayland still not being able to support everything we need to port Xfce's components and maintain all of their features.
>You can do per-display DPI just fine on X11 (through xrandr), it's just the major toolkits don't support it. GTK, for example, reads a single global DPI value from XSETTINGS; there's no reason why it has to be that way.
I remember people complaining about the GTK file picker not having a preview for more than a decade, and at some point it sort of became a meme.
When it finally got added, the PR was like 200-300 lines.
And it was added only after they rewrote everything for the new GTK version, even though there were functional patches adding thumbnails to previous versions. (Which were rejected/ignored because they didn't feel good.) A situation very much parallel to Xorg/Wayland, if you consider: https://news.ycombinator.com/item?id=46382940.
> It feels like no one really wanted to do that: building something new from scratch that (in theory) didn't have all of the mistakes of X11 would be more fun, and more rewarding.
My understanding from the outside is that this didn't happen, that Wayland is a spec without a reference implementation - that they didn't actually build anything and are leaving the difficult part up to everyone else.
They do have a reference implementation: Weston and libweston. But as far as I know, third parties don't use it; they implement all their own functionality. Weston serves more as a prototype.
If the issues are so trivially resolved, why did the authors of X decide to abandon it? If the issues could be resolved, why were they not resolved?
I have been using Wayland for more than 5 years now; it just works. X did not. Xscreensaver/lock screens on Qubes are still broken.
What features is Wayland the protocol missing to allow supporting Xfce?
Even when you are a nation-state-level target, there are easier ways to grab the screen.
For the local state, it's easier to just install a wireless camera and watch your screen from behind: it leaves no trace on your computer (you may spot its wireless connection, if you're lucky). Moreover, they are more interested in your communication devices (your smartphone) than in your desktop.
Foreign states may exploit your notebook's built-in "anti-theft" system, the Intel Management Engine ("Intel" is a very good name for a CPU ;-), bugs in Nvidia firmware (fonts, OpenGL, etc.), bugs in hardware (creating a second display to mirror the image from the primary display to, even when no physical display is attached, for example), etc.
However, I did see my Firefox window being spied on by a Chromium window a few years ago (I recorded it on YouTube), so this problem in X11 is real.
I am not sure what you saw, but on regular Linux, processes of the same user can spy on each other anyway. In any case, X has had the concept of untrusted clients basically forever, but nobody cared to invest even the small amount of work necessary to make it work well, because nobody thought it would make a difference. That this was later used as a major argument against X convinced me that this is not at all about technology.
Yeah, but with how we’re moving towards running each (desktop) application in its own cgroup, thus restricting what syscalls any given application can do, soon any old user process will no longer be able to read any other process’s memory. I don’t believe that the argument about how we need not patch a hole because another one exists right besides it is sound.
> I don’t believe that the argument about how we need not patch a hole because another one exists right besides it is sound.
It is when you are essentially putting bars in front of your windows while leaving the front door unlocked, i.e. you are making things worse in the name of security while not actually providing any additional security.
> Yeah, but with how we’re moving towards running each (desktop) application in its own cgroup, thus restricting what syscalls any given application can do
Who is we? I don't want or need any of that on my free software system.
I agree. My point was only that this hole can easily be patched in X as well. So the argument was essentially "we do not bother to patch it with X, so we must rewrite X".
I care about being able to use the same password between the display manager, tty and lock screen auth. Yet, I cannot.
I think the original maintainers and developers of Xorg would be the best people to choose if it is worthwhile to continue working around X or do something else. Yes, X provided functionality that now WMs get to implement themselves - since the developers of Xorg worked closer to Gnome and Qt people, and Gnome and Qt people were OK with this, this didn’t feel like a horrible trade off. And given the diversity of Wayland window managers today, I don’t think it mattered all too much.
What? My screensaver password is the same as my login.
> I think the original maintainers and developers of Xorg would be the best people to choose if it is worthwhile to continue working around X or do something else.
"I think the owners of the Internet infrastructure would be the best people to choose what websites I'm allowed to visit"
No, the users have spoken and continue to speak up that Wayland doesn't serve their use cases.
> What? My screensaver password is the same as my login.
It is the same, yet some uppercase characters are not supported when entered via a YubiKey. This has been marked as a WONTFIX. This is rather sad, because I can enter the same password in a TTY with no issues.
Kristian Høgsberg, for example, was a Red Hat employee. Then he worked at Intel, where it appears he continued work on Wayland? So Red Hat and Intel at least? People are being paid full-time to work on Wayland, so those companies.
By now I am not sure if these posts can still be given the benefit of the doubt or are just dishonest. Who were the developers pushing Wayland because of their employers? Kristian Høgsberg (who was a significant Xorg developer, because people always deny that Wayland was written by Xorg guys) originally developed Wayland in his free time; it then became a freedesktop project (I would argue not a group run by corporates).
The most active implementation (particularly in the early days) is probably wlroots, started by Drew DeVault (again in his free time), who is often quite vocal against corporate control.
In fact the large desktop environments, which are much more under "corporate control", were comparatively slow to adopt Wayland, IIRC.
So instead of repeating this accusation, maybe actually give some evidence?
I didn't think my explanation implied how you interpreted it.
I thought everybody knew Wayland was started by some people working on Xorg already; I did not mean to imply otherwise. Many or all were paid for their work. They believed Wayland was a better approach, and, AFAIK, at some point switched to be paid full-time to work on Wayland instead of X. Which, sounds a lot like they convinced their employer (or a new employer) to pay them to work on Wayland instead of X. Do you believe this is a fair summary of the situation?
> I didn't think my explanation implied how you interpreted it.
>
> I thought everybody knew Wayland was started by some people working on Xorg already; I did not mean to imply otherwise. Many or all were paid for their work. They believed Wayland was a better approach, and, AFAIK, at some point switched to be paid full-time to work on Wayland instead of X. Which, sounds a lot like they convinced their employer (or a new employer) to pay them to work on Wayland instead of X. Do you believe this is a fair summary of the situation?
Sorry for being combative before. I definitely interpreted your previous post differently, and I think your clarification is a fairer assessment of the situation. I would still argue that the majority of people implementing the Wayland protocol are not paid by their employers to do so (this might now have changed a bit with Smithay, which is sponsored by System76, I believe).
Look into river. It lets window management and keybindings be provided by other tools (I have an idea to implement one using XMonad's layouts).
It also vastly improved battery on my Dell Pro laptop. 58% battery used in 7h45m (light compilation day, but no suspend).
That sounds cool, but TBQH the last thing I want to do is make myself dependent on some obscure piece of tech I've only heard of once before (just now.) My plan is to keep running X as long as I can manage to make it run. If river finds traction and is well known to me in 10 years then I'll consider it then.
This is one of my big problems with Wayland; the fragmentation of Wayland imposes an unacceptable cost to picking the wrong DE, whereas with X all my tools for X still work regardless of my DE.
As someone who probably has a similar setup on Linux to the author: why do you have 10 windows for an app open?
For me, grouping by app is terrible. Yes, they may all be "Terminal" or "Firefox" windows, but they are for very different things. I'd rather see things grouped by project regardless of "app". But that is what tagging window managers are for :) .
Given that macOS forces that (IMO) braindead tunnel vision paradigm, I think the response should be "Wù".
For example, because the app restores its state and you have a few "projects" within the app.
> I'd rather see things grouped by project
Ok, and what if that project is encapsulated in an app window? Why introduce an extra level of indirection for no reason and spend time configuring it? If you frequently need a set of "5 firefox tabs, 2 terminal tabs, 4 text editor tabs, each in a separate app window", sure, spend time tagging it, set it as a WM project and launch/activate it with a key, but not everything is like that.
If you have projects that fit within an app, sure. I do not. The only "apps" seen on my machines are terminal windows hosting tmux processes and Firefox. Everything else is ephemeral (mpv, dialogs, rofi, dunst). App-centric behavior is just the wrong axis for this setup.
I'm saying that given what details are there, I think the author is closer to "my" end of the spectrum than one where the question makes sense at all.
Ok, how does that address my initial question (which was not about you), then? Not everyone's setup is so primitive as to only be centered around two apps.
Though in this case I don't get what is so terrible about app groups if you don't group anything else anyway, since it's ephemeral; wouldn't any grouping work (except for the 2 apps)?
The Sapir-Whorf is strong in this thread. MacOS' app-centric model makes it hard to even imagine other people's workflows. Stop thinking in apps, think about a task. I have multiple tasks (workspaces). Each task has multiple aspects (windows). Apps are a distraction, an accidental complexity. I want to switch between tasks and then subparts of those tasks.
I know I am weird, but I detest using a MacBook trackpad. However, having recently used Asahi on one, I've found that it is the Apple software that makes it so. I find it really difficult to drag and drop (I would rather open Terminal and use standard Unix tools than keep trying), and gestures are way too greedy IMO. Under Linux it is bearable for me (though I still slightly prefer other trackpads, for a better texture than the glassy feel).
I wonder if the author is like me in that respect? Not sure I would spend time like this, but I also spent months building my Linux environment from a tty in 2009-2010 (landed on XMonad, finally on River this year after 5 months in GNOME purgatory to force myself to move to Wayland). Last macOS machine I set up, I turned off a bunch of stuff in Settings and was instantly bored because I just didn't want to deal with the window manager at all. It is now my video chat machine because of Dell's "wise" decision to use IPU7 hardware…but I really don't like using it for much else (Asahi reboots are tedious).
They are. If you rewrite history, you get a different hash. You can do some shenanigans with git-replace, but those are usually for stitching history across gaps (like hooking pre-publish history to the public release, for internal archaeology at least).
What you actually want is a ban on rewriting tags or accepting branch updates to commits that do not have the current commit as an ancestor. These hooks do exist, but are not on by default (beyond the toilet paper protection of needing --force).
You also have to ban deleting branches because otherwise you just delete and push a new one with the same name. Maybe we should store topic branches in repos under refs/topics/ to distinguish integration branches from development/review branches?
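As a sketch of what those bans look like wired together: git runs an executable at hooks/update with <refname> <oldrev> <newrev> for each ref being pushed, and rejects the update if it exits non-zero. Assuming SHA-1 object names and shelling out to git for the ancestry test:

    /* hedged sketch of a server-side "update" hook: no tag rewrites,
       no branch deletion, no non-fast-forward branch moves */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv) {
        if (argc != 4) return 1;
        const char *ref = argv[1], *oldrev = argv[2], *newrev = argv[3];
        const char *zero = "0000000000000000000000000000000000000000";

        if (strncmp(ref, "refs/tags/", 10) == 0) {
            /* oldrev is all zeros only when the tag is being created */
            if (strcmp(oldrev, zero) != 0) {
                fprintf(stderr, "rewriting/deleting tag %s is not allowed\n", ref);
                return 1;
            }
            return 0;
        }
        if (strncmp(ref, "refs/heads/", 11) != 0)
            return 0; /* only police branches and tags */

        if (strcmp(newrev, zero) == 0) {
            fprintf(stderr, "deleting branch %s is not allowed\n", ref);
            return 1;
        }
        if (strcmp(oldrev, zero) == 0)
            return 0; /* brand-new branch: nothing to fast-forward from */

        /* fast-forward iff the old tip is an ancestor of the new tip */
        char cmd[128];
        snprintf(cmd, sizeof cmd, "git merge-base --is-ancestor %s %s",
                 oldrev, newrev);
        if (system(cmd) != 0) {
            fprintf(stderr, "non-fast-forward update of %s rejected\n", ref);
            return 1;
        }
        return 0;
    }

A compiled binary is an unusual choice for a hook (a shell script is more common), but git only cares that hooks/update is executable.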
That works great for those that only use those tools. I, at least, do not and would appreciate something that doesn't care what editor I use or what forge happens to host the project.
This is truer now that `git bisect --first-parent` exists. But it didn't always. And even then, there are times you find out that there is "prep work" to land your feature. And a PR just to do some deck chair moving that makes a follow-up commit easier is kind of useless. I have done prep work as a separate PR, but this is usually when it is more extensive than the feature and it is worthwhile on its own.
Another instance is a build system rewrite. There was a (short) story of the new system itself and then a commit per module on top of that. It landed as 300+ commits in a single PR. And it got rebased 2-3 times a week to try and keep up as more bits were migrated (and new infra added for things other bits needed). Partial landing would have been useless and "rewrite the build system" would have been utter hell for both me developing and anyone that tries to blame across it if it hadn't been split up at least that much.
Basically, as with many things in software development, there are no black-and-white answers here.
While I also find git-annex more elegant, its cross-platform story is weaker. Note that LFS was originally a collaboration between GitHub and Bitbucket (maybe? Some forge vendor I think). One had the implementation and the other had the name. They met at a Git conference and we have what we have today. My main gripes these days are the woefully inadequate limits GitHub has in place for larger projects. Coupled with the "must have all objects locally to satisfy an arbitrary push", any decently sized developer community will blow the limit fairly quickly.
I also find editing on an iPhone to be an exercise in futility. Is it no longer possible to place a cursor in the middle of a word? I end up having to go to a word boundary and erase from there and retype everything.
The keyboard touch areas also seem offset from Android and I end up one row off too much of the time.
Yes, the UI is so overloaded you can never tell what it's going to do. It might do two or three totally different things. Obviously you want to have the magnifying glass with a cursor. But then the cursor might just decide to jump to the end of the word. Sometimes it's impossible to get the cursor in front of the first letter if the UI is cramped. Maybe it will copy the text into a floating clipboard if your finger drifts a few pixels south. Maybe it will bring up a context menu? If you're using Safari, maybe it won't even let you select any text at all. Then you can take a screenshot and select text from an image to work around that.
Yes but sometimes it doesn't work, weirdly. The cursor just doesn't go where you put it, jumping to the end of the line or next line entirely, where it gets lost in limbo because it's a single line text box. It's ridiculously broken sometimes.
Now that's not a big deal until it happens 3 times in a row randomly, and something that would take less than half a second on a keyboard is taking over 20 seconds. Not only that, the random behavior is extremely frustrating, which just makes you avoid it in the future.
I use that on Android all the time. But I feel I've only gotten it to work once or twice on iPhone. And even then the word boundaries were very "sticky" (IIRC) and precision placement still very difficult.
I don't know! I remember where I was standing when I realized you could do it, by accident! So I know it's been there since at least 2018, because the building whose smoking area I was standing in got knocked down the next year ^^
I do more writing on my iPhone (it's the one with the largest screen) than I do on a computer. I can do about 40wpm. To move the cursor you just hold down on the space bar. These complaints kind of sound like someone from the 90's saying that the close window button is on the wrong side
Just yesterday I had to edit something on an iPhone. I finally managed to put the cursor at the front to add a word before what was there already. But when I started typing, auto correct (or whatever) put the cursor back at the end of the word. I ended up just removing everything and typing from scratch because figuring out Apple logic behind such a behavior just wasn't on the agenda.
40wpm is 33% less than what a bad typist can do. Repeating "just hold down the space bar" doesn't make it behave any less erratically. We had Palm Pilots in the 90s and they ran on AAA batteries and editing text on them was certainly more consistent than the current state of iOS.
It's not "just", because you have to switch from the more natural "tap where you want to edit" to a separate gesture, which also takes longer and is less precise. You might also use a different keyboard with better layout/symbol visibility that doesn't support this gesture
Coming from Android I have to agree, it's terrible. The only help I can offer is that if you press and hold the space bar you can drag through to where you need to be, but it's still painful. I can only bear iOS because I am using SwiftKey; the default keyboard genuinely stopped me from switching to an iPhone, I found it that bad. And some apps force you to use the default iOS one, which is even worse!
One big benefit of symlinks (really, "not storing it in the deployment") is that my Git repo doesn't have a bunch of hidden files in it because they can appear in the link's path rather than the repo's path. I can also split up files based on "why it exists" rather than "where it lives". For example, I can have the "enable Rust support" group of configurations:
- add Rust things to the list of packages to install
- add any Rust-specific configurations for Neovim and `zsh`
- bring in any Rust-oriented Neovim plugins
These files all then live next to each other rather than being scatter-shot between Neovim, zsh, and some giant list of packages. Additionally, if I later decide to disable "Rust support" on a machine, the broken symlinks in `$HOME` let me clean up easily without having to actually excise things from the repository.
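As a sketch of the layout (all names here are hypothetical, not my actual repo), the "enable Rust support" group might look like:

    dotfiles/
      rust/                   the whole "enable Rust support" group
        packages.txt          merged into the machine's install list
        nvim/rust.lua         linked to ~/.config/nvim/lua/rust.lua
        zsh/rust.zsh          linked to ~/.zshrc.d/rust.zsh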
That said, I have my own system that I built up years ago and it's never been abstracted out for anyone else to use so of course it's going to fit my needs better than anything else.