If visual perfection means nothing to you, if you love screen tearing, slow window redraw when moving, Windows 95-esque window trails, etc., then keep using Xorg.

Wayland is a protocol fully designed around composition. Clients have their own buffers, the server composites them. There is no way to draw directly to the screen, because we're not in the 90s with 640K of RAM and there's no reason whatsoever to implement the crappy way of rendering windows.
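
For the curious, here is roughly what that model looks like from the client's side, as a minimal sketch against libwayland-client on Linux: the client allocates and fills its own shared-memory buffer, attaches it to a surface, and commits; everything past that point is the compositor's business. (Role assignment via xdg-shell is omitted, so the surface is never actually mapped; the point is only who owns the pixels.)

  /* Minimal sketch of the Wayland client model: the client owns its buffer,
   * the compositor decides when and how it reaches the screen.
   * Build: cc wl-sketch.c -o wl-sketch -lwayland-client */
  #define _GNU_SOURCE             /* for memfd_create() */
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>
  #include <wayland-client.h>

  static struct wl_compositor *compositor;
  static struct wl_shm *shm;

  static void handle_global(void *data, struct wl_registry *reg,
                            uint32_t name, const char *iface, uint32_t version)
  {
      if (strcmp(iface, "wl_compositor") == 0)
          compositor = wl_registry_bind(reg, name, &wl_compositor_interface, 4);
      else if (strcmp(iface, "wl_shm") == 0)
          shm = wl_registry_bind(reg, name, &wl_shm_interface, 1);
  }

  static void handle_global_remove(void *data, struct wl_registry *reg,
                                   uint32_t name) { }

  static const struct wl_registry_listener registry_listener = {
      handle_global, handle_global_remove
  };

  int main(void)
  {
      struct wl_display *display = wl_display_connect(NULL);
      if (!display) { fprintf(stderr, "no Wayland display\n"); return 1; }

      struct wl_registry *registry = wl_display_get_registry(display);
      wl_registry_add_listener(registry, &registry_listener, NULL);
      wl_display_roundtrip(display);          /* wait for the globals */
      if (!compositor || !shm) { fprintf(stderr, "missing globals\n"); return 1; }

      /* The client allocates and owns its pixel storage... */
      const int width = 256, height = 256, stride = width * 4;
      const int size = stride * height;
      int fd = memfd_create("pixels", 0);
      ftruncate(fd, size);
      uint32_t *pixels = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      for (int i = 0; i < width * height; i++)
          pixels[i] = 0xff2266aa;             /* solid colour fill */

      struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
      struct wl_buffer *buffer = wl_shm_pool_create_buffer(
          pool, 0, width, height, stride, WL_SHM_FORMAT_ARGB8888);

      /* ...and only tells the compositor "here is my new content". */
      struct wl_surface *surface = wl_compositor_create_surface(compositor);
      wl_surface_attach(surface, buffer, 0, 0);
      wl_surface_damage(surface, 0, 0, width, height);
      wl_surface_commit(surface);
      wl_display_roundtrip(display);

      printf("buffer committed; compositing it is the server's job now\n");
      wl_display_disconnect(display);
      return 0;
  }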



Forced composition and vsync is a mistake for gaming.

The added output latency is unacceptable, especially for first-person shooters. A little tearing is nothing compared to the vsync lag.

If Wayland wants to replace X.org, then it should also support this use case. But mandatory full composition isn't very encouraging in that regard.


Untrue in my experience.

Games can still render at a higher FPS than the vsync rate.

I tried playing sauerbraten (with SDL2) on sway the other day. It was butter smooth (no tearing) with vsync off in game, and I felt no input lag, unlike when I switch the in-game vsync flag on, which caps the FPS.

It probably does triple buffering, but somehow on sway it worked better than the triple buffering of Intel's Xorg driver did back when I tried that.
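
For reference, in an SDL2/OpenGL game the in-game vsync flag usually just maps to the GL swap interval; a minimal sketch of the difference (assuming SDL2 and a working GL driver; what gets drawn doesn't matter here):

  /* The in-game "vsync" toggle in an SDL2/GL game typically boils down to the
   * swap interval: 0 = SwapWindow returns immediately, 1 = it blocks until
   * the next vblank. Build: cc swap.c -o swap $(pkg-config --cflags --libs sdl2) */
  #include <SDL.h>

  int main(void)
  {
      if (SDL_Init(SDL_INIT_VIDEO) != 0) {
          SDL_Log("init failed: %s", SDL_GetError());
          return 1;
      }
      SDL_Window *win = SDL_CreateWindow("swap interval demo",
          SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
          640, 480, SDL_WINDOW_OPENGL);
      SDL_GLContext ctx = SDL_GL_CreateContext(win);
      if (!ctx) { SDL_Log("no GL context: %s", SDL_GetError()); return 1; }

      /* 0 = "vsync off", 1 = "vsync on", -1 = adaptive (if supported). */
      if (SDL_GL_SetSwapInterval(0) != 0)
          SDL_Log("swap interval not supported: %s", SDL_GetError());

      Uint32 start = SDL_GetTicks();
      int frames = 0;
      while (SDL_GetTicks() - start < 2000) {  /* run for ~2 seconds */
          SDL_GL_SwapWindow(win);              /* the game would draw here */
          frames++;
      }
      SDL_Log("%d swaps in ~2s (uncapped if the interval really is 0)", frames);

      SDL_GL_DeleteContext(ctx);
      SDL_DestroyWindow(win);
      SDL_Quit();
      return 0;
  }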


VSync fundamentally introduces a frame or more of delay by design; it does not matter what your FPS is.
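
A quick back-of-the-envelope, assuming plain double buffering and ignoring render and scanout time (illustrative numbers only):

  /* With double buffering and vsync, a finished frame that just misses a
   * vblank waits for the next one, so the wait alone can approach a full
   * refresh interval regardless of how fast the game renders. */
  #include <stdio.h>

  int main(void)
  {
      const double refresh_hz[] = { 60.0, 144.0, 240.0 };
      for (int i = 0; i < 3; i++) {
          double interval_ms = 1000.0 / refresh_hz[i];
          printf("%6.1f Hz: refresh interval %5.2f ms, worst-case wait for vblank ~%5.2f ms\n",
                 refresh_hz[i], interval_ms, interval_ms);
      }
      return 0;
  }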


In wlroots, when the only thing on your screen is a fullscreen app (e.g. a game), direct scanout is used instead of composition.


Composition forces latency tied to the refresh rate of the composited output, which is the only reason I dislike it and disable it where possible. For me this latency makes Wayland imperfect.

I do not love screen tearing; I just do not mind it at all unless I am watching a movie (where I can enable vsync in the player).

Slow window redraw when moving is something I haven't seen since I had a 386. Even my Pentium could blit windows around.

Windows 95-esque window trails are only a thing if the process associated with the window is stuck. Note, btw, that there is nothing that forbids the X server from caching such windows if it detects that the client hasn't responded to messages for a while - which, incidentally, is what Windows has done since XP. It is just that nobody implemented it.

> Wayland is a protocol fully designed around composition.

Which was a mistake.

> Clients have their own buffers

Which was a mistake.

> the server composites them

At some other point after the client has marked its window as updated, which means you end up with around two frames of latency: the first frame because your input is sent to the application while the application is already drawing itself, so the input is only processed in the next pass and the response to it is a frame late; the second frame because the application notifies the window server that the window is outdated while the window server is already drawing its output, so the new contents are only used in the frame after that.
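
To put rough numbers on that worst case (assuming a 60 Hz output and that both the input and the commit just miss their respective deadlines; the figures are illustrative, not measured):

  /* Rough model of the worst case described above: input arrives just after
   * the application sampled input for frame N, so its effect is drawn in
   * frame N+1, and the compositor only picks that commit up one refresh
   * later, so it reaches the screen in frame N+2. */
  #include <stdio.h>

  int main(void)
  {
      const double hz = 60.0;
      const double frame_ms = 1000.0 / hz;

      double missed_input_ms  = frame_ms; /* waiting for the app's next frame */
      double missed_commit_ms = frame_ms; /* waiting for the compositor's next repaint */

      printf("refresh interval: %.1f ms\n", frame_ms);
      printf("worst-case input-to-screen with compositing: ~%.1f ms (about 2 frames)\n",
             missed_input_ms + missed_commit_ms);
      return 0;
  }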

> There is no way to draw directly to the screen

Which was a mistake.

> because we're not in the 90s with 640K of RAM

If 640KB of RAM didn't prevent direct access to the screen and fast response times, 640GB of RAM shouldn't either. The new design is just misguided garbage that has nothing to do with available resources and everything to do with developers not giving two thoughts about uses outside of their own. (Note: X11 allows you to have both composited and non-composited output, so people who like composition can use it, as can people who dislike it; Wayland forces composited output, so people who dislike composition have no way to avoid it.)

> there's no reason whatsoever to implement the crappy way of rendering windows

Yes, wasting resources by making every application maintain its own buffer for each window's contents, even though for most windows those contents will not change for the vast majority of the window's lifetime, is crappy. Though that is only a minor reason why Wayland sucks.


> At some other point after the client has marked its window as updated, which means you end up with around two frames of latency: the first frame because your input is sent to the application while the application is already drawing itself, so the input is only processed in the next pass and the response to it is a frame late; the second frame because the application notifies the window server that the window is outdated while the window server is already drawing its output, so the new contents are only used in the frame after that.

Not every action takes an entire frame. The timeline can easily be like this:

  0ms: old frame starts to output
  4ms: input happens, application wakes up
  9ms: application finishes rendering
  15ms: compositing happens
  16ms: new frame starts to output
There's nothing about compositing that requires it to add any significant lag on top of rendering time plus transmit time. If input shows up while you're already drawing? That could happen without compositing. Just draw again or do whatever you'd do without compositing.

> Yes, wasting resources with every application having to maintain their own buffer for each window's contents even though those contents will not change for the vast majority of the window's lifetime for most windows is crappy.

Why waste time redrawing it if it's not changing? And a full-screen window on 1080p is only using 6MB for that buffer. Compared to the processes I'm running, the frame buffers are quite lightweight.
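
For the curious, the arithmetic behind that figure (the ~6MB assumes 24 bits per pixel; a 32-bit ARGB buffer is closer to 8MB):

  /* Per-window buffer cost: bytes per pixel times pixel count. */
  #include <stdio.h>

  int main(void)
  {
      struct { const char *name; long w, h; } modes[] = {
          { "1920x1080", 1920, 1080 },
          { "2560x1440", 2560, 1440 },
          { "3840x2160", 3840, 2160 },
      };
      for (int i = 0; i < 3; i++) {
          long pixels = modes[i].w * modes[i].h;
          printf("%s: %5.1f MB at 24bpp, %5.1f MB at 32bpp\n",
                 modes[i].name, pixels * 3 / 1e6, pixels * 4 / 1e6);
      }
      return 0;
  }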


I thought freesync support in modern GPUs meant that they weren't restricted to a fixed refresh rate anymore, and that pixels would then just be "composited" and appear on screen as quickly as possible regardless of their on-screen positioning. Then, by using a single fixed buffer that covers the whole screen, this essentially gives you the equivalent of "compositing being disabled".


The only way to avoid the composition latency is to have the entire composition done while the GPU is sending the final image to the monitor (note that performing composition on the GPU via OpenGL or whatever is not the same thing, even if both are done using the GPU), pretty much like what a "hardware mouse cursor" gives you. This would require GPUs to support an arbitrary number of transformable overlapping surfaces (where a single surface = a single toplevel window) and applications being able to draw to these surfaces directly, without any form of intermediate buffering (which is important to avoid the initial late-frame latency).

Now, it isn't impossible to make a GPU with this sort of functionality, since GPUs already do some form of composition, but AFAIK there isn't any GPU currently on the market that can do all of the above. At best you get a few hardcoded planes, so you can implement overlays for fullscreen content.

And of course none of the above means you have to use Wayland; the X server could perform toplevel window composition itself just fine.


Some GPUs do support multiple layers which are composited together to form the final image.

This is one reason why the FBDev driver is deprecated: it only supports one framebuffer per output.
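
You can see what your own hardware exposes through KMS with a few libdrm calls; a minimal sketch, assuming the GPU is at /dev/dri/card0:

  /* What the scanout hardware actually exposes: KMS planes per device.
   * Build: cc planes.c -o planes $(pkg-config --cflags --libs libdrm) */
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <unistd.h>
  #include <xf86drm.h>
  #include <xf86drmMode.h>

  int main(void)
  {
      int fd = open("/dev/dri/card0", O_RDWR);
      if (fd < 0) { perror("open /dev/dri/card0"); return 1; }

      /* Ask for the full plane list, not just the legacy overlay planes. */
      drmSetClientCap(fd, DRM_CLIENT_CAP_UNIVERSAL_PLANES, 1);

      drmModePlaneRes *res = drmModeGetPlaneResources(fd);
      if (!res) { perror("drmModeGetPlaneResources"); close(fd); return 1; }

      printf("%u scanout planes:\n", res->count_planes);
      for (uint32_t i = 0; i < res->count_planes; i++) {
          drmModePlane *p = drmModeGetPlane(fd, res->planes[i]);
          if (!p)
              continue;
          printf("  plane %u: %u formats, possible CRTCs mask 0x%x\n",
                 p->plane_id, p->count_formats, p->possible_crtcs);
          drmModeFreePlane(p);
      }
      drmModeFreePlaneResources(res);
      close(fd);
      return 0;
  }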


Who even moves windows around on a desktop? Surely everyone is using tiling window managers these days, instead of relying on that sort of nonsense? No? Oh well, I'll keep hoping for tiling to become mainstream.


Can you make xterm windows exactly 80 characters wide with a tiling window manager? How is it even going to tile that way?

Or when making, say, an icon, you want to see it at 1:1 size while working on a zoomed-in view; how does tiling solve that?

If all of this is problematic, I don't see how tiling window managers can go mainstream.


> if you love screen tearing, slow window redraw when moving

Really? Do you even experience them these days? The hardware nowadays is powerful enough to make them negligible.


This "let's make things slower, heavier, and more brittle in exchange of this small improvement" mantra is the scourge of software today.


I personally don't consider it just a small improvement. Tearing drives me crazy and I'm comfortable sacrificing a frame to make sure it never happens.

However, I will admit that Crinus makes some compelling points.


I should note (I already mentioned it in my other post, but it is a bit lost in a sea of words) that disliking tearing and wanting to run a desktop environment free of it is perfectly fine, and X11 does allow for that: there is nothing in it (extensions included) that prevents such use, and if there are bugs, those are bugs that can be fixed.

My issue with Wayland when it comes to composition is that it forces it, whereas X11 merely enables it but doesn't force it.
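
The opt-in nature shows up directly in the protocol: on X11, a compositing manager has to explicitly ask the server for redirection through the Composite extension, and if nothing asks, nothing is redirected. A minimal sketch, assuming libXcomposite is installed (run it against a plain X session; the names here are just illustrative):

  /* Composition on X11 is opt-in: a compositing manager asks the server to
   * redirect top-level windows into offscreen pixmaps; if nothing asks,
   * windows keep rendering straight to the front buffer.
   * Build: cc redirect.c -o redirect -lX11 -lXcomposite */
  #include <stdio.h>
  #include <X11/Xlib.h>
  #include <X11/extensions/Xcomposite.h>

  int main(void)
  {
      Display *dpy = XOpenDisplay(NULL);
      if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

      int event_base, error_base;
      if (!XCompositeQueryExtension(dpy, &event_base, &error_base)) {
          fprintf(stderr, "Composite extension not available\n");
          return 1;
      }

      /* "Automatic" keeps the server painting the windows for us; a real
       * compositing manager would use CompositeRedirectManual and paint
       * the offscreen pixmaps itself. */
      Window root = DefaultRootWindow(dpy);
      XCompositeRedirectSubwindows(dpy, root, CompositeRedirectAutomatic);
      XSync(dpy, False);
      printf("top-level windows redirected offscreen; undoing it again\n");

      XCompositeUnredirectSubwindows(dpy, root, CompositeRedirectAutomatic);
      XCloseDisplay(dpy);
      return 0;
  }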

It is the good old "having options" argument which, for some reason, always comes up when GNOME and Red Hat projects are involved.

(Just as a note, composition isn't my only issue with Wayland; I have other issues with it being short-sighted and needlessly limited, but those are outside the current discussion.)


The wording of the parent comment, specifically "Wayland is a protocol fully designed around composition [...] There is no way to draw directly to the screen", made me think this is a case of idealism that, once seen to be naive, presses on into diseased dogmatism. Some people decide that, for example, 'composition' is the be-all and end-all, and all other aspects are secondary to it or less. Therefore, if useful functionality is dropped, so be it.


"if useful functionality is dropped, so be it." Seems to be the mantra of wayland.



