What irritates me with regard to Wayland is the assumption that all rendering is local. The great thing about X was that on a LAN you could bang up an ssh session somewhere, carry across your $DISPLAY and have the exact same application running somewhere else but rendered to your screen. It doesn't sound like much but there were times when it was incredibly useful.
Thin terminals brought this to the logical conclusion of not needing a desktop at all, but unfortunately X's protocol is too chatty and sensitive to latency to really shine across a WAN. These days you can work around all of that by opening a session via Guacamole or the like but that's still one tab in a browser per graphical session. There's no just using SSH in a for loop and opening up a raft of xterms to a bunch of machines in Wayland, AFAIK.
From the Wayland FAQ
> Is Wayland network transparent / does it support remote rendering?
> No, that is outside the scope of Wayland. To support remote rendering you need to define a rendering API, which is something I've been very careful to avoid doing. The reason Wayland is so simple and feasible at all is that I'm sidestepping this big task and pushing it to the clients. It's an interesting challenge, a very big task and it's hard to get right, but essentially orthogonal to what Wayland tries to achieve.
To be honest, I think remote rendering is unnecessary and is mismatched with actual application workloads. The amount of data and IO needed by the renderer in many applications outstrips that needed to send the rendered result to a display: the textures, models, or font curves loaded into GPU-local RAM and then resampled or reinterpreted to draw frames. This stuff is the heart of a graphical application, and doesn't really make sense to split across a network.
When the application host is really resource-starved and does not need to supply much content to a renderer, we are better off using more abstract network interfaces. On one hand, HTML+CSS+js are the remote rendering protocol to the browser which becomes the remote renderer. On the other hand, sensor/data interfaces become the remote data source for a GUI application that can be hosted somewhere more appropriate than on the resource-starved embedded device.
What I will miss from SSH X-forwarding is not remote rendering. It is per-window remote display. I would be perfectly happy with remote applications doing their rendering within their own host, and just sending their window contents to my display server, where my local compositor puts the window content onto my screen. Protocols more like VNC or RDP could be used for each of these windows, adaptively switching between damage-based blitting, lossless and lossy image codecs, or even streaming video codecs. What is needed is the remote window-manager and display protocol to securely negotiate and multiplex multiple windows opening and closing rather than one whole desktop.
What I won't miss from SSH X-forwarding is window and application lifecycle tied to the network connection. Let me disconnect and reconnect to my remote application and its windows. Let me share such a connection to have the same windows displayed on my desktop and my laptop, or to window-share with a coworker down the hall...
That's pretty typical. "I don't need it, so therefore nobody should need it, so therefore I will remove it".
Unix is different things to different people, and to remove the network capability from X is a step back, not a step forward. There is a whole pile of neat things that you can do with networked graphics; so much, in fact, that Plan 9, in many ways a successor of Unix, used that theme as the core of many of its system services. All of them work locally as well as remotely.
I am suggesting that people need to distinguish remote rendering and remote display. I do not advocate removing the network capability. In fact, I personally have avoided using Wayland for this very reason. But, I am also not volunteering to maintain Xorg myself, so I try to understand where things are going.
Hardly any of the use cases I have heard for networked graphics actually require or even care where the rendering occurs. They are just concerned with application software installed and running in one place causing pixels to appear for a user elsewhere. These could work just fine with remote display streams, with all rendering happening on the same host where the application executes.
I ponder what it is that I really dislike about VNC or RDP, and I find it is the nested desktop and lack of integrated window management. I think I'd be fine with some kind of "rootless" variant to let me SSH-forward a set of windows to a Wayland compositor. I would prefer it to have more flavor of tmux/screen to let me connect and disconnect, and easily define named sessions or groups of windows that I would want to grab together.
The thing is, you don't need explicit support in the display server for remote. RDP works just fine on Windows, yes even single-window forwarding, without it.
I thought with the later versions of RDP it somewhat works like remote X, where it can pass through a lot of the context and the rendering is done on the client's GPU instead of being rasterized and sent over the network? I've found RDP very snappy in later versions, with very crisp rendering even over relatively low bandwidth connections (orders of magnitude better than VNC).
There's also RemoteApp, which allows you to just run one program remotely, but I haven't got much experience with it.
RDP got a lot faster by tapping in at different levels of the rendering engine, but those are mostly optimizations and not how the whole thing is meant to be viewed: it is a 'remote display protocol' that transports what is already visible on some remote screen or memory buffer to another computer for viewing. Shortcuts and optimizations don't really change that, whereas 'X' is at heart a client-server solution.
What does an optimization have to do with anything? So what if X does not have a native GPGPU driver; that's really more a function of hardware manufacturers' support than anything else. And if you are not too picky about all your stuff being 'open', then NVidia's X driver and CUDA on Linux co-exist just fine, with accelerated graphics and GPGPU support.
In a nutshell: RDP is a good way to access remote systems that are running a window manager of sorts, X is a good way to have remote clients connect to a local X server (the display).
RDP is more closely related to VNC and something like 'screen' than X, which is more like a networked resource that you can access through remote clients.
RDP is a screen sharing protocol, X is an application sharing protocol; the two are radically different from each other, which gives each advantages and disadvantages that the other lacks, plus a bunch of overlap. Most notably, with X security was bolted on as an afterthought. RDP, originally reading the screen contents and sending those over (compressed), does not allow for much interaction with what gets sent over; it is rather low level, whereas X sends over display primitives. Again, X's initial - rather naive - implementation made a ton of assumptions, because the original implementors had nice fat workstations and fast networks to work with, and so the real-world utility of these features was rather limited.
Anyway, you could write books about this and still not cover all the details, never mind the various implementations of both. All there is to it is that, due to their history and intended applications, the two are very different beasts, and X offers a versatility that not a whole lot of people need, but when you need that versatility you need it badly.
X11 did that. It has clients, doing their client thing, and a server, managing the display details.
Yes, this means that unlike many client-server architectures, the server is the part nearest the user and the client (may be) further away. But each did its respective job and stuck to its knitting.
Keep in mind that when X11 was first created, and for much of its early life through the 1990s, it was used for remotely accessing another (or multiple other) systems. The networking capability was baked in. Never mind that security was bolted on later; here, have a MAGIC_COOKIE...
> The Unix philosophy: do one thing, and do it well
You're trying to argue that there should be two distinct pieces of software, a local and a remote renderer, for simplicity.
Rendering does not fit in that space well. The physical resource of a display and graphics card really only allows one renderer to run, so it's not really useful to say "just write two renderers, one local and one remote".
The unix philosophy is not a good argument. It's a principle by which you can make decisions, but it doesn't always lead to good decisions and it's quite open to interpretation as well.
> I think remote rendering is unnecessary and is mismatched with actual application workloads
Nowadays, perhaps, but that's software bloat in action, software getting slower faster than computers gaining in speed. In the last millennium if one had a login one could connect to the computer in Daresbury, UK that had a copy of the Cambridge Structural Database and search crystal structures. Many people at British universities did that. It would render graphics on an X server. Once I tried it on holiday at the East Coast of the US, and it wasn't a painful experience at all. Then again, it was the old version that still had PLUTO, which was a program from the computing stone ages.
This is true, but VNC doesn't exactly win any awards in the performance department in general. Sending bitmap diffs is pretty much the least efficient, most naive way to do remote desktop. What VNC has going for it is that it's dead simple to implement because of that fact.
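To illustrate just how simple: the core loop is nothing more than comparing the current framebuffer against the previous one and shipping the tiles that changed. This is only a sketch (the tile size and the send_tile() transport hook are made up, and real RFB framing/encodings are omitted); the point is that a VNC-style server only compares and pushes pixels, knowing nothing about what drew them:

    #include <stdint.h>
    #include <string.h>
    #include <stddef.h>

    #define TILE 16  /* compare the framebuffer in 16-pixel-wide tiles */

    /* Hypothetical transport hook: ship one run of changed pixels to the viewer. */
    void send_tile(int x, int y, const uint32_t *pixels, size_t npixels);

    /* Walk the current and previous framebuffers and send only the tiles that
     * changed since the last pass -- pure damage-based pixel pushing, with no
     * knowledge of windows, fonts or drawing commands. */
    void send_dirty_tiles(const uint32_t *cur, uint32_t *prev, int width, int height)
    {
        for (int ty = 0; ty < height; ty += TILE) {
            for (int tx = 0; tx < width; tx += TILE) {
                int h = (ty + TILE <= height) ? TILE : height - ty;
                size_t w = (tx + TILE <= width) ? TILE : (size_t)(width - tx);
                int dirty = 0;
                for (int y = ty; y < ty + h && !dirty; y++) {
                    size_t off = (size_t)y * width + tx;
                    if (memcmp(cur + off, prev + off, w * sizeof(uint32_t)) != 0)
                        dirty = 1;
                }
                if (!dirty)
                    continue;
                for (int y = ty; y < ty + h; y++) {
                    size_t off = (size_t)y * width + tx;
                    send_tile(tx, y, cur + off, w);   /* ship the changed rows */
                    memcpy(prev + off, cur + off, w * sizeof(uint32_t));
                }
            }
        }
    }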
X forwarding goes in the complete opposite direction and pipes the whole X protocol over the network. Convenient -- yes, also dead simple to implement -- yes, actually works -- well, no. You see, the X ecosystem also assumes all rendering is local; that fact is just hidden in edge cases. For example, Nvidia graphics work on X with an extension that has both a client and a server component -- as soon as you forward X traffic that's looking for the server-side extension... crash.
RDP by contrast does the right thing(tm) but also the hard thing by defining an intermediate object model which is semantic enough to kill the need to send pixels over the wire most of the time but flexible enough to allow for it when necessary.
RDP, or something like RDP, is the correct answer - I thought it worked by effectively streaming GDI calls over the network - well, that and whatever they used to replace GDI for 3D enhancements.
I'd agree with you for modern applications that are not well written. I've been running X applications over the network for 20 years or so, however, and outside of things like... well, 3D applications, networked X works pretty well - my only real complaint was not including sound too.
NCD (Network Computing Devices), who made X Terminals, came up with a sound server called "NetAudio", based on the X11 server, protocol and client library, with the graphics ripped out and sound stuck in.
I used it in the early 90's to support the Unix/X11/TCL/Tk multi player version of SimCity, with a scriptable TCL/Tk audio server that could drive either /dev/audio on Sun/SGI/etc or NetAudio on NCD X Terminals. Other TCL/Tk clients like SimCity (but possibly others) communicated with it via the TCL/Tk "send" command, which bounced messages off the X server.
NetAudio had its problems, but basically worked. Except that it insanely mixed audio by AVERAGING (add waveforms then divide by the number of sounds) instead of adding and clipping, and I couldn't convince the NCD engineer that this was the wrong way to mix sound, and to just add and clip instead, like sounds mix in the real world: when you talk over music, the music volume and the volume of your voice don't magically drop by half. Hopefully they've fixed that problem by now.
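In code, the difference I was arguing about is basically this (a sketch over 16-bit PCM samples; the names and framing are illustrative, not NetAudio's actual code):

    #include <stdint.h>

    /* The averaging approach: every extra source makes all the others quieter. */
    int16_t mix_by_averaging(const int16_t *sources, int nsources)
    {
        int32_t sum = 0;
        for (int i = 0; i < nsources; i++)
            sum += sources[i];
        return (int16_t)(sum / nsources);
    }

    /* Add-and-clip: sum the sources and saturate, which is closer to how sound
     * pressure actually combines in the air. */
    int16_t mix_by_add_and_clip(const int16_t *sources, int nsources)
    {
        int32_t sum = 0;
        for (int i = 0; i < nsources; i++)
            sum += sources[i];
        if (sum > INT16_MAX) sum = INT16_MAX;
        if (sum < INT16_MIN) sum = INT16_MIN;
        return (int16_t)sum;
    }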
I’m fine with that tradeoff. For people who need network rendering, they can pay a performance penalty when they use it. For the 99% of people who 99% of the time don’t use this feature, we aren’t paying the performance (and architectural complexity) cost of this.
Electron is panned by many these days for requiring an expensive network system for a local process. Why doesn’t X11 get the same scorn? Have we forgotten the days when a Mac could run a GUI in 128K while a Unix system struggled to run x11 (with no clients) in 4MB? X is heavy.
In the late 80's, context switching was so slow and local area networking was fast enough that it was sometimes actually snappier to run your xterm or emacs on another system over the local network, than both on the same workstation.
That's because running over the network didn't require ping-pong context switching back and forth between the X11 server and the client at every keystroke, so both client and server could run smoothly without getting switched out.
The X protocol is so chatty and ping-pongy that it could require several context switches per keystroke to handle the event and update the screen, when running both server and client locally!
> Electron is panned by many these days for requiring an expensive network system for a local process. Why doesn’t X11 get the same scorn? Have we forgotten the days when a Mac could run a GUI in 128K while a Unix system struggled to run x11 (with no clients) in 4MB? X is heavy.
If you actually properly understood both the argument you're making and the reason why most people dislike Electron and friends, then you'd not be making that argument. I'll give you the benefit of the doubt and assume you're joking.
Actually, Electron (and most other web browsers) on macOS and iOS uses IOSurface to share zero-copy textures in GPU memory between the renderer and browser processes. Android and Windows have similar techniques (I presume, but I don't know the name of the Windows API; probably part of DirectX).
It's like shared memory, but for texture memory in the GPU, shared between separate heavyweight processes. Simply sharing main memory between processes wouldn't be nearly as efficient, since it would require frequently uploading and downloading textures to and from the GPU.
Because when attacking your parent poster, you neglected to explain your argument, and the reasons why you believe most people dislike Electron, and why X11 doesn't get the same scorn, which caused you to be downvoted.
Care to elaborate, please?
Do you know if X11, like Electron, supports shared memory textures in the GPU like IOSurface and GL_TEXTURE_EXTERNAL_OES are for?
> Because when attacking your parent poster, you neglected to explain your argument
I can't force someone to understand something that they do not want to. More than enough people have written, much more eloquently than I, about why electron is bad. None of the critiques I've seen are 'because it relies on an expensive network system'.
EDIT: Hell, even if we assume that the critiques are about "expensive networked system"s, Electron is still incomparable to Xorg so the root-parent's post is making a false comparison.
X performs better than VNC for stuff like xscreensaver, but not usually for video or applications that do dumb stuff like cute animations of internal windows opening. VNC just drops frames and catches up. X tends to render each graphics-context call to perfection.
Well, I haven't paid much attention to these gaming products, but this is a really clear example of the difference between remote display and remote rendering.
This Google Stadia is using a remote display protocol. All the rendering is happening in the machine in the datacenter, where the CPU and GPU lives and the game executes. It's a bit like someone sitting in the datacenter playing the game while you live-stream it at home, except they also pass through input controls so you can steer the game from home instead of just watching it.
A remote rendering protocol, which is no longer really sensible, would try to keep the GPU in your living room while running the game in the datacenter. Instead of streaming the screen updates at a nice steady pace based on your display resolution, it would have to stream the drawing commands from the datacenter to your GPU, adjusting the geometry as the game simulation updates the world model and stalling with nice loading screens every time it needs to shuffle in different texture and model data from the game computer to your GPU via the internet.
If X were maintainable then this would be an extremely reasonable complaint. However, the reason that Wayland is about to get rammed down our throats is that the only developers willing to work on X.Org are those who are paid to do so, and they are all working as hard as they can to deprecate it for something that fits their needs.
The idea that a desktop compositor should be network aware by default is not really a defensible design decision. Why should the compositor be responsible for sending a compressed stream of OpenGL instructions over a network? This is an activity that is Someone Else's responsibility; making a graphics developer responsible for decisions about latency and compression is just going to reduce the number of people who can understand the codebase.
The reason for X was that in 2008 the driver manufacturers wouldn't support anything else. AMD and Intel have open source drivers now, so the reason isn't good enough to justify X as a near universal standard.
At 18m30, he mentions that everyone uses SHM and DRI2, which don't work over the network. He mentions X isn't "network transparent" anymore, it's "network capable". I'm not 100% sure what the difference is tbh.
At 40m10, he mentions that rendering on the "server" then compressing it to transfer over the network to display locally is basically VNC.
That's interesting. I didn't realize the SHM extension was used that widely. Do most GUI toolkits use it under the hood somehow, in order to render more efficiently?
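From what I understand, the dance looks roughly like this: the client puts its pixel buffer in a SysV shared-memory segment and the server blits straight from it, instead of copying the pixels through the socket with XPutImage. A condensed sketch (error handling omitted; this is illustrative, not any particular toolkit's code):

    #include <X11/Xlib.h>
    #include <X11/extensions/XShm.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    /* Create an XImage whose pixel buffer lives in shared memory, so the X
     * server can read it directly instead of receiving it over the socket.
     * This only works when client and server share a machine, which is why
     * it falls apart for forwarded/remote displays. */
    XImage *create_shm_image(Display *dpy, XShmSegmentInfo *shminfo,
                             int width, int height)
    {
        if (!XShmQueryExtension(dpy))
            return NULL;  /* e.g. a remote display: fall back to XPutImage */

        int scr = DefaultScreen(dpy);
        XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                      DefaultDepth(dpy, scr), ZPixmap,
                                      NULL, shminfo, width, height);

        shminfo->shmid = shmget(IPC_PRIVATE,
                                img->bytes_per_line * img->height,
                                IPC_CREAT | 0600);
        shminfo->shmaddr = img->data = shmat(shminfo->shmid, NULL, 0);
        shminfo->readOnly = False;
        XShmAttach(dpy, shminfo);  /* the server maps the same segment */

        /* Later: draw into img->data, then
         *   XShmPutImage(dpy, window, gc, img, 0, 0, 0, 0, width, height, False);
         * which tells the server to blit from the shared buffer. */
        return img;
    }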
When I first realized you could do this I was blown away. Just set `DISPLAY` in your environment to an X server (any X server, anywhere) and any applications you run will appear there.
Of course, there are immense problems with this. For one thing, it's usually crippled by latency. Even if the client X application is in a Docker container on the same machine, applications are only halfway usable. Use a local Ethernet network, and things start to fall apart. Going over the Internet isn't really worth trying.
The other major problem is that there's no security. If you have an open X server, anyone can connect to it and run random applications on your display. But in addition to opening you up to annoyance from cyber-hooligans, allowing an X application to run gives it access to all your keystrokes, which may not be something you want. Beyond that, all the X data is going out over the network unencrypted. There's just no security model here.
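To make that concrete: any client that can reach an open X server can, for example, read back the entire screen with a handful of lines, no special privileges involved (a hedged sketch; the display it opens is whatever $DISPLAY points at):

    #include <X11/Xlib.h>
    #include <X11/Xutil.h>
    #include <stdio.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);   /* any server that accepts you */
        if (!dpy) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }

        /* Grab the root window: everything currently on that screen. */
        Screen *scr = DefaultScreenOfDisplay(dpy);
        XImage *shot = XGetImage(dpy, DefaultRootWindow(dpy),
                                 0, 0, scr->width, scr->height,
                                 AllPlanes, ZPixmap);
        if (shot) {
            printf("grabbed %dx%d root window\n", shot->width, shot->height);
            XDestroyImage(shot);
        }
        XCloseDisplay(dpy);
        return 0;
    }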
That said, it would be cool if there were some kind of plugin for Wayland that would let you use this feature (or even something better, if someone wanted to implement that). I've wondered---can you set up XWayland to be an open X server and just let random X applications connect to it? It would be interesting to hear from Wayland experts on this point.
- X is burdened by this abstraction layer for networking
- Since people are using ssh as an app to do it, a case could be made that the networking is not really necessary and a dedicated graphics specific app might work just as well.
This would, for instance, remove the possibility of running an interactive program on a CUDA server mounted in a rack in the server room while displaying the results on a workstation, without the likes of VNC to screw up your display and/or add huge latency.
But, X doesn't really do that, assuming you mean OpenGL running on the GPU. If you did forward an X app on that server to your desktop and managed to get OpenGL to work, it would use GLX to forward OpenGL commands to your GPU on your workstation and completely ignore the server GPGPU hardware.
What I think you are describing is closer to what Wayland plus a remote display protocol might enable. Let the app use its beefy GPGPU hardware on the server and just blast the 2D frames to the workstation on each OpenGL buffer-swap, to be composited into the local framebuffer that drives the actual display. All the expensive texture-mapping, geometry calculation, and pixel shading would execute on the remote server.
I think this discussion is probably petering out, and unfortunately we're talking past each other somehow. This is probably my last time revisiting this article and its comments, but I thought I'd give one last effort to expose the assumptions that may be at the heart of our disagreement...
You have mentioned terrible performance and latency multiple times, but I feel this is a strawman argument. People are streaming HD and 4K video over the internet all the time now and watching live-streamed video games, where someone is playing a GPU-intensive application and having the output encoded to an efficient video compression stream in real time. These compression algorithms are also being used in practice for video teleconferencing solutions all over the place, by many vendors. The latencies are low enough to allow normal conversation, movement and speech. They work reasonably well over consumer-level, low-performance WAN connections and can work flawlessly over faster WAN or LAN paths. I don't understand how this contemporary scene suggests remote display is impractical.
Also, please don't get caught up in the VNC-style remote desktop metaphor as the only way to achieve remote display. An X-over-SSH styled remote protocol could just as easily be designed to launch applications on a remote host, where it is able to allocate server-side GPU hardware resources and run a renderer off-screen which buffer-swaps directly into a video codec as the sink for new frames, pushing those compressed frames to the user-facing system where they are displayed as the content of one application window. There is no need to conflate remote-display with having an actual screen or desktop session active on the host running the application. This is much like SSH itself giving us any number of pseudo TTY shell sessions without requiring any real serial port or console activity on the remote server.
If you didn't mean that the "CUDA" hardware in the server is doing rendering (like in Google Stadia) then I am guessing you mean that the application is doing CUDA data processing as part of its application logic before generating separate display commands to a remote X server to actually visualize the results. I am having a really hard time understanding why anybody would want to split and distribute an application like this, because every such data-intensive visualization I am aware of has much tighter coupling between those application logic bits and the renderer than it does between the renderer's output buffers and the photon-emitting display screen. This is why there are extensions, e.g. in OpenCL and OpenGL, to share GPGPU buffers between compute and rendering worlds. The connection between the application data-processing and the rendering can be kept local to the GPGPU hardware and never even traverse the PCIe bus in the host. Massive data-reduction usually happens during rendering (projection, cropping, occlusion, and resampling) and the output imagery is much smaller than the renderer state updates and draw-commands which contribute to a frame.
For what it's worth, I do know of cases where remote display is impractical due to latency, but these are also cases where remote rendering is impractical. For example, an immersive head-mounted display may require a local renderer to have extremely low latency for head-tracking sensors to feed back into the renderer as camera geometry so rendered imagery matches head movement. A remote rendering protocol would be just as impractical here, because the rendering command stream that embodies this updated camera and frame refresh would also have too much latency to accomplish the task. The only practical way to distribute such an application would be to introduce an application-specific split between the local immersive rendering application, placed near the user, and other less latency-sensitive simulation and world-modeling which can be offloaded to the remote server. A rendering protocol like X doesn't provide the right semantics for the asynchronous communications that have to happen between these two stages. Instead, you need many of the features reflected in a distributed, multiplayer game: synchronizing protocols to exchange agent/object behavior updates, local predictive methods to update the local renderer's model in the absence of timely global updates, and a data-library interface to discover and load portions of the remote simulation/model into the renderer on demand based on user activity and resource constraints in the local rendering application.
It irks me too, but I can't really argue with the decision. X11's model of remote rendering is flawed; try starting up Firefox over a tunneled SSH connection sometime.
In order to justify network rendering at the windowing system level, I think you need a model that could reasonably give good performance to both local and remote applications without application developers having to care.
Sun's early windowing system was NeWS, which had a PostScript interpreter running on the display terminal. I think it was the better idea in the long run, but at the time X was easier to work with and wasn't patent encumbered...
That's what is generally bothering me about this whole Wayland thing: it does a (small) fraction of the things that Xorg does and calls itself a better alternative.
It doesn't work the same. On Ubuntu I have found that you have to log in first on the machine, so that the VNC server starts, before you can use a VNC client to connect. It kind of sucks to have to go to the machine to log in each time it is restarted, and to leave the machine logged in.
We have a better remote terminal client today than anything X ever achieved: the web browser.
It's not as good as a local display for resource intensive stuff, e.g. games, but X never was either (it was abysmal, in fact). It's a hard problem, but given that price/performance of computational power has mostly run ahead of network speed for most of computing history, and latency is a (mostly) insoluble problem for long distance networking, Web Assembly is often a better solution for pushing even resource intensive applications out to the edges.
I have a (IMO) neat little thing that runs a Docker image of my "ideal" workstation and then X-forwards the GUI to the host machine - this way I can have the very same immutable workstation on different machines.
It totally depends on X. I don't even know how to run wayland.
I am surprised that X was / is so underfunded and undersupported.
FYI, there are two ways of using X across the network. With the X forwarding via an ssh tunnel that you refer to, rendering technically happens on your machine, the one you forward the DISPLAY from. The X client on the remote side is a 'client' and connects to your forwarded (local) display. The rendering happens in that display server (local), which dumps the result into your graphics device framebuffer / onto your screen. The instructions, though, come from your client on that remote machine, including any drawing primitives like drawing a line etc. But still, the actual resources that client uses are local to your X server.
The other approach (the one from the older days) is to avoid any tunneling and let X clients connect to open X servers across the network, e.g. your X display server listening on all NICs for clients. This setup is still technically possible but pretty much discouraged for security reasons. But with this setup you could also run a local xterm or whatever X client against a remote X server (like your neighbour's ;), assuming it would accept your connection.
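Either way, the client side looks the same: every X program just opens whatever display it's told to and sends its requests there. A bare-bones sketch of what's happening under the hood (the display names in the comment are only examples):

    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        /* NULL means "use $DISPLAY". With ssh -X that is typically something
         * like localhost:10.0 (a tunnel back to your local X server); in the
         * old open-server setup it could be e.g. "yourworkstation:0". */
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }

        /* Everything below is just requests on that connection: the window
         * and its pixels exist on whichever server $DISPLAY points at, not
         * on the machine this program runs on. */
        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                         0, 0, 300, 100, 1,
                                         BlackPixel(dpy, scr),
                                         WhitePixel(dpy, scr));
        XMapWindow(dpy, win);
        XFlush(dpy);

        /* A real client would run an event loop here. */
        XCloseDisplay(dpy);
        return 0;
    }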
Uggh. RedHat has a long history now of replacing working stuff with half baked, regressed-by-design replacements.
I’d really like to have a working Unix environment, and Linux stopped being that for me years ago.
More conservative alternative distros keep bit rotting due to upstream (usually RedHat induced) breakage. (SystemD, Wayland, DBUS, PulseAudio, Gnome 3, kernel API brain damage, etc)
Does anyone know of any alternatives? Is there a distro/foundation that will actually support open source Unix moving forward (BSD, maybe?)
I use OpenBSD on my Thinkpad T410i and Void on an older AMD box. You can use vmm to run Alpine images to run Docker apps or Linux stuff. My only gripe is that Firefox runs slowish on OpenBSD, though Chrome doesn't suffer as such. I use both, and only Chrome if FF can't handle the site I'm on.
9front's vmx is mature enough to run an Alpine VM and then a native X client, so the VM runs without an X server and the Linux applications run in individual windows, giving you a seamless 9front/Linux desktop that can run a full browser. This route is only for the truly enlightened ones.
Unix philosophy isn't about keeping the cruft around but about building useful systems using simple tools. The idea is: do not duplicate functionality if it already exists. I see so many programs attempt to sidestep dependency hell by baking in functionality. This is the overall community's fault, and the fault of modern software development, which somehow lost the art of pragmatism. The simple part has been long lost in a sea of complexity driven by ignorance and fueled by greed.
OpenBSD is a modern, innovative Operating System that has retained classic Unix sensibilities. I also think that, even as a lover of Linux, a Linux monoculture is bad for the world.
The simplicity of OpenBSD makes it much easier to learn deeply than Linux. Where Linux has tended to create magical veneers over complexity, OpenBSD has tended to simplify the underlying systems.
Also, the OpenBSD project is shouldering the weight of the world and deserves more credit for this. So much of OpenBSD has made it into other parts of the Internet ecosystem.
I don’t know enough about these domains to know whether there is anything intrinsic to the design of OpenBSD that makes it better or worse in this regard, but I suspect that if you are correct, it’s more a function of mindshare and number of people doing the work.
I think some of us feel that portions of the Linux world have made some wrong or at least undesirable turns for some things. It seems the choice is to either go along with those choices or improve other paths.
Also, not sure what issues you see, but there appear to be a bunch of the standard open source games in OpenBSD as you’d see in Linux. Obviously not the commercial games that have Linux ports, but that strikes me more as an issue of popularity than technical limitations.
Design and implementation are two different things. I can imagine an OS perfect for game programming, but I can't use it, because it's not implemented.
I tried OpenBSD for 2 months, and went back to MacOSX. First, package repositories were meh (either pay, or recompile from source, or never update).
Second, OpenBSD graphic driver support was just very bad - no drivers for GPGPUs.
Third, lack of drivers, applications / services for interfacing with the real world and external devices.
I come home, open my laptop, click on select monitor, and I see 3 monitors available in the WiFi network (two TVs, and a desktop monitor). I can just click one, and my display is mirrored in real time to them - I can play a video, and it is mirrored perfectly, and with audio. Same at work, want to show a video in the presentation room? Connect to the beamer per wifi with a single click, display mirrored, great. Phone detects wifi, laptop backs it up to network storage, synchronizes media, updates software, Smart watch synchronized, etc.
None of that worked with OpenBSD. I had to find a HDMI cable in the basement to connect the laptop to the TV, monitor, beamer,... I had to use a USB cable to connect the phone to the laptop, which I then couldn't back up or synchronize without a gazillion brittle workarounds, etc.
Losing support for all modern devices felt like going back to the stone age. Linux isn't really much better, but while I like OpenBSD's design, philosophy, and parts of the implementation, the current implementation doesn't work for me.
Yeah, that was my point: of course the more popular platform is going to have broader support. So the question is whether or not it’s possible to improve the less popular thing. My guess is that, with investment in time, there is nothing that prevents straightforward improvements in support in OpenBSD.
Why would OpenBSD be worthy of that expenditure of effort? Because it does some other things very well—perhaps better than Linux (Unix simplicity) and macOS (fully open). Not to mention security.
It sounds like you want a feature set that isn’t in OpenBSD. Fine, use macOS if those features are must-haves for you. Or, if OpenBSD’s feature set is compelling to you, start hacking! As I said elsewhere, I think it’s an easier path to make OpenBSD into what I want than it is to make macOS / Windows into what I want.
As for hardware support, I selected hardware that is well supported by OpenBSD, and it works great. Is there hardware I wish was better supported? You bet. But I’ve felt that way since I started using Linux and BSDs (and OS/2 and BeOS) in the 90s.
In today's world of "throw-away" electronics, there are a lot of devices that I just want to "just work". By the time these are supported on OpenBSD (if I were to implement it), I'll probably be owning a different device.
This is sad, because while I can select the hardware for my laptop, I can't really select the hardware and devices I need to interface with.
That would be quite interesting, since it uses the exact same graphics rendering code Linux uses, both in the kernel via DRM, and in userspace via Mesa and Gallium.
Linux graphics support is quite good as long as your driver is open source and in-tree. I was just talking the other day with the graphics team about how we haven't really had any problems on Linux/Intel because everything just works.
Mac, on the other hand... oh boy. It's pretty much a disaster. There is no end of bugs in Apple's graphics stack, and no, Metal doesn't solve them all. The first time I ran the Metal port of my library, my MacBook Pro instantly kernel panicked. I have never had that happen on Linux.
Even the NVIDIA drivers on Linux are superior to macOS' drivers. The problem with the NVIDIA drivers is that they don't play well with the rest of the ecosystem, but they do work. The only area where Linux is behind in graphics is on mobile, where the GPU vendors are perpetually unable to write working drivers.
I'm a professional graphics developer and I prefer the tooling on Linux to that of macOS.
Like most software developers, graphics developers develop where the users are. That's why they predominantly use Windows. Some use macOS, primarily to develop iOS apps, which is, again, where the most valuable users are, particularly in North America.
Linux desktop market share, or lack thereof, has nothing to do with graphics tooling.
Sure it does, why bother with a market that not only lacks customers, it makes it a major hurdle to develop for?
While anyone on Apple and Windows platforms can enjoy Metal Frameworks, DirectTK, Unreal, Unity, CryEngine, PIX, Poser3D, Photoshop, Houdini, Cinema 4D, AfterEffects,...
On Linux, I guess having the freedom to use half baked copies of them, or hunting for libraries that should come out of the box with any graphics stack like 3D math, mesh handling and loading materials, is more important.
I suspect there are plenty of people in this thread who’d love to see top tier games and development AND run on an open platform. In that sense, OpenBSD and Linux are both closer to supporting high-end games development than Windows and Mac are to being fully open platforms.
Your initial analogy is wrong, then, since—if modern Linux is also bad for those things—then OpenBSD being bad for those things wouldn't thereby make it an example of the OS "retaining classic Unix sensibilities" that modern Linux does not.
So other than SGI and NeXT, which although UNIX based had their focus on other kind of development stacks, which UNIX sensibilities are so great examples of graphical applications and game development tooling?
I think you misread my statement? I wasn't disagreeing with you. Rather, I was pointing out—in way of explaining why everyone was jumping to argue with you or downvote you—that the most obvious reading of your comments are an incoherent argument, and you should probably clarify what you mean.
In your first comment, when you said "Like being quite bad for graphics programming and game development", you were implicitly forming the larger sentence: "[Retaining] classic Unix sensibilities ... like being quite bad for graphics programming and game development."
And—since the topic of the thread was "OSes that are better because they avoid going down the road of RedHat-like 'modern' Unix sensibilities"—you were implicitly forming a larger assertion: "[OpenBSD, because it retains] classic Unix sensibilities [unlike RedHat] ... [is] quite bad for graphics programming and game development[, unlike RedHat, which is okay at those.]"
And so that's what people tried to argue with you about, which I hope makes people's rebuttals to your first comment make more sense to you.
But then, in your second comment, you went ahead and said that modern Linux is equally bad at doing these things. So clearly the expanded form of your assertion isn't what you meant.
RenderDoc works great on Linux. I prefer it to the Metal debugger for the simple reason that it actually works without regularly crashing and popping up a "please file a radar" box.
Either you're talking about high-end gaming, in which case Apple isn't in the game at all - the only players are Microsoft and Sony, though Google is making a bid now as well with Stadia.
Or you're talking about low-end gaming, in which case Google probably matters most because of Android.
OpenBSD isn't performant enough to be a daily driver on a laptop, unless your laptop spends all its time plugged in.
The battery life on my X220 was a solid hour less under OpenBSD than under Debian, and it ran some 10 degrees hotter. Yes, this is post-apmd improvements.
I want to like OpenBSD but code correctness just isn't enough for me.
OpenBSD (and the other BSDs too) are also pretty far behind on things like wireless hardware. There just aren't enough active developers to get a lot of the 802.11ac stack implemented (and ax or Wi-Fi 6 is coming around the corner too).
If there was a current port of Docker for FreeBSD and Wi-Fi ac support, it would totally be my daily driver right now. Currently I still use Gentoo on my laptops.
It has been a long time since I used a BSD in anger, but aren't jails pretty much a take-it-or-leave-it system? Docker containers are an aggregate of Linux namespaces and cgroups and can be manipulated in a pretty granular way. I can, for example, share a networking namespace between two containers to have one inspect the traffic of the other (and do so easily).
Not saying you'd want to do that, but my understanding is that this is part of a lot of the clever networking you see in orchestration systems.
FreeBSD jails have some flexibility. You can choose to pass through the host networking as-is, or pass a limited selection of IPs; I think there's a way to have a more separated stack than that too, but I haven't used it. You can allow raw sockets or not, you can shield the processes or not, same with IPC.
I haven't had a need to have a jail inspect the traffic of another; I suspect that might be tricky. However, I've used them successfully as a lightweight alternative to a VM for QA/dev environments -- use hard links for the base OS to save space, give each jail its own IP, and you get fairly cheap multiple boxes. I've also used them to contain statically compiled binaries -- a TLS terminator runs without access to much of anything, so if the next vulnerability after Heartbleed were worse, it would be a lot harder to escalate vs. the common deployment with OpenSSL linked into a webserver; similarly with an environment running ffmpeg.
There are already so many Dockerfiles out there and guides based on them. Plus you get filesystem layering, the cgroups, process limits ... I have a feeling it would be possible to build a container system that's Docker API compatible, but uses zfs and jails under the surface. I think that's what the old "unmaintained" port did a few years back.
I love Gentoo. It's a great developer distribution. I've been using it since 2003 I think, and the current Gentoo install I use I have continually updated since 2012. When I get a new machine, I just copy the existing install over and build a new kernel.
Same, been using it since 2002, my current image has been rolling since 2008, that's when I switched to amd64. Never cared about the performance tuning aspect, it's all about flexibility and control.
Gentoo doesn't use systemd by default; it has its own init system called OpenRC that's kinda sorta like a framework for sysv. Despite being shell-based, it's very fast: my system can get from GRUB to Firefox in less than ten seconds if you enable autologin and add Firefox to your .Xsession.
Gentoo maintains forks of udev and logind, imaginatively named eudev and elogind. I use eudev, but elogind is unnecessary for me because I don't use Gnome3. But if that's your thing I'm not stopping you.
Pulseaudio is easy to remove by virtue of Gentoo's source based package model. Dbus less so; many applications have a hard dependency on it.
I think it's possible to use a BSD kernel, but you will very likely deal with a lot of breakage.
Gentoo is just so much easier to administrate than other distros, it's unreal.
Unless you are trying to run software that explicitly depends on some of the brokenness from upstream (Gnome3), you can just set up a classic system that will behave the same way you're used to, and honestly, I didn't encounter problems with software in that configuration.
No. There are the xBSD derivatives which continue to aspire to being "unix" like rather than "windows" like.
This is probably controversial but my guess here is this; The bulk of the developer cohort doing most of the work in Linux userland these days cut their teeth on Windows, it is their internal standard of a "good developer OS experience" so when they build new things, they have that model as "good" in their mind.
The challenge continues to be software. If you are using the packages in FreeBSD (or other xBSD derivatives) it is not uncommon to see "package xyz does not have an active maintainer so it may break, if you're interested in becoming the maintainer go here ..." Not enough people to cover all the things that are pumped into the Linux ecosystem everyday.
And it is a bit too much work to re-create the Sun Microsystems of old, where the kernel and the core UNIX userland tools from BSD were combined with a bespoke window system, compiler suite, and hardware-specific system libraries to make a product. Granted, it was fewer than 1200 software people all told when SunOS 4.x came out, but they were all working for market salaries on the project full time. That's like 10 - 20 million dollars a year, not something you're going to do "for free in your spare time."
systemd was modeled after launchd in MacOS, rather than Windows. And the need for replacing SysV or BSD style init in Unix land has been acknowledged for a long time (see Solaris, for example).
I don't see it making much of a difference for developer OS experience though? Unless you are specifically developing for example a Linux distribution.
Sure, FreeBSD. The whole "braindamage" is still available, of course, but you can use anything else if you wish. Everything is conveniently available as both ports and packages, so you don't need to mess with installing things by hand.
I've been experimenting with OpenBSD, and it's been pretty nice. The code is beautiful, and a lot of stuff just works. Not sure how it will behave on a laptop though.
I bought a ThinkPad specifically to install OpenBSD. It's a great combination. I find OpenBSD much more straightforward and coherent than Linux. There are some bells and whistles that I wish it had, but in terms of the foundation it is wonderful.
How is the graphics support in OpenBSD for the ThinkPad, and which model?
I've found the recent P and X models underwhelming as far as hardware goes, and support in Linux hasn't been great in my experience. While I'd like to give BSD a try, I fear I'll have to deal with even more driver issues.
For example, all video out ports on my X1 Extreme are wired to the dedicated NVIDIA card, which forces me to either use Nouveau which causes kernel panics, or use huge binary blobs from NVIDIA, which I'm reluctant to do for several reasons, one of which is breaking my initramfs image and making the system unbootable.
I'm quite tired of the amount of these types of issues in the Linux ecosystem, so I'm very interested in trying something simpler and better thought out, as long as the hardware support is decent.
The only reason hybrid kernels and Linux are popular is the stable driver API; otherwise recompilation is necessary (which was new to me when I discovered it).
OpenBSD is suited for simple workstations or as a tablet substitute, but I don't think the OpenBSD desktop will quite take off, unless driver developers return to releasing obfuscated code.
That is for in-kernel drivers, as stated. If something breaks in-kernel, it just won't compile with the kernel, but user-space drivers have great ABI compatibility, as stated in the link you provided.
Not an expert though, might be wrong, but the text makes it pretty clear.
I think it would be great on the right laptop. Most of the OpenBSD developers use thinkpads, from what I can tell.
It wouldn’t fly at work for me at the moment, and I don’t do much computing at home, so I played around with it a bit on a laptop, but didn’t switch. The results were encouraging, though.
My biggest beef with it right now is the upgrade story, but that’s because I need to update my router (through a serial console!) and am afraid of what happens if I brick it.
The thing is, it takes a lot of resources to do this well and there is not a lot of financial incentive to invest in this area. Canonical gave it a really good shot with Unity and Mir, and received lots of criticism for doing so. If we're going to criticize anyone who does something differently, then we need to accept that the project sponsored by Red Hat is the one that everyone will end up using.
Strangely that was not clear to me when I first started using arch. Probably because when I got to setting up networking it gave me the choice of the systemd way, or a variety of other ways.
But looking at it in relation to init systems it clearly says systemd was chosen.
That said, arch still lets you make decisions, unlike most distributions, where you become a consumer of other people's decisions.
If you want the most BSD-like distro, it's probably gotta be Slackware. It changes little between releases (just newer versions of the software it ships with mostly).
I’m worried that, with core dependencies like X11 being abandoned, the writing is on the wall for the hold-out distributions (I run Devuan and OpenBSD mostly).
I get that it doesn’t make business sense for RedHat to maintain parts of the open source ecosystem that will never drive consulting or support revenue, but that doesn’t change the fact that I want a computing environment that just works (including software that’s been stable for a decade, and isn’t constantly ported to the new shiny).
Consider: what would it mean for an OS to be "stable for a decade" in the face of generational hardware changes?
For example: we have 4K monitors now, and 8K monitors soon. OSes before ~2010 didn't really support DPI scaling (with mixed-DPI display layouts, etc.), because we didn't have those monitors; but now we do. OSes had to add DPI scaling support throughout the whole stack to support these monitors in the way people expected.
This required a nontrivial rearchitecting of the components of some OSes, because they had been built in a world where you rendered e.g. fonts by caching fixed bitmap tile-handles with the (non-DPI-aware) pixel size of the font as the key. In the case where you have two monitors of differing DPI plugged in, that cache spits out wrong tiles for at least one of your monitors.
So, what would you expect this "stable" OS to do when you plug an 8K monitor into it?
Or, another example: touchscreen tablet support, implemented by pen and gesture input events becoming the lower-level input-event stream, and pointer events being reimplemented on top of them. What would you expect your stable OS to do if you installed it on a tablet?
Sometimes, OSes have to rearchitect things. The reason is not always "FEATURE: now written in a cool new language!"; sometimes it's "BUG: it just doesn't got-dang work to do things this way any more."
Out of curiosity, why would you associate it with atheism? Since many (most?) Christian denominations believe it's forbidden to curse in God's name, it's actually more likely to have theistic origins.
I don't think it's a signal either way as to belief or nonbelief, though. It's more likely to just be a part of their vernacular.
Finally, as an atheist I can tell you that I have no need to avoid using the word "god". What's it going to do, cause a god to pop into existence if I say the name thrice? :) That would certainly be interesting.
Slackware's release cycle is way too slow and it gets slower with each release. The current stable version (14.2) was released three years ago and it doesn't support modern hardware. Additionally, due to a bug in the installer, it's impossible to install it on NVMe drives.
Also, I don't understand why it ships with KDE4 instead of Plasma 5. Even in -current this is still the case.
I applaud the design and ideas in GuixSD, but the lack of support for non-free software is a deal-breaker for me. I wish they'd relax this requirement and added an optional official non-free repository with a limited selection of software. A fork with support for non-free repositories would also be great.
So far I've seen individual projects that have done this, but seem abandoned[1] or overwhelming[2] for a Lisp newcomer.
Just to clarify, these complaints are primarily about Fedora specifically -- at least in my experience RHEL (and CentOS) have been very solid. I just wish that Fedora had a model similar to Ubuntu, where periodic releases are targeted for longer term support (i.e., get patches for 2 - 3 years), and all the potentially breaking items go into the inbetween releases.
The biggest difference is probably that they seem to take a 'older' Fedora, and then stabilize it a bit before blessing it as the new RHEL, where Ubuntu just forks off the LTS from their regular release (and where a lot of people seem to wait for the .1 release of the LTS before actually using it)
This is true, but the vast majority does (if you use EPEL and RPM Fusion). I've also had pretty good luck using Fedora rpms from the same era. Gotta stay on top of CVEs tho so you don't end up running a vulnerable application.
> I’d really like to have a working Unix environment, and Linux stopped being that for me years ago.
That ship sailed very, very long ago. The moment the GNU/Linux community decided to implement their own desktops and not copy the standard Unix desktop - CDE, the die was cast. From then on we lost any possibility of having a unified standard desktop and instead we have two competing GUI toolkits neither of which can truly be called native.
I am still optimistic that perhaps Wayland will raise the bar for making and maintaining your own DE so high that due to practical considerations we will end up with just one de-facto desktop API.
It's about time. A 10+ year transition from X.org to Wayland is long enough.
X.org is open source; it does not belong to Red Hat. If it's a truly desirable technology, anyone else is welcome to contribute to its maintenance or to fork it.
When no one else wants to maintain it in favor of newer technologies, that's a good sign it's ready for retirement.
In those ten years, clusters in remote datacenters went mainstream, but Wayland has never delivered network transparency except by embedding X. Where are these users who are still running everything on one computer under their desk?
I think the demand for remote GUI apps went down, for several reasons:
- Web apps got better and more common
- Windows (and MS SQL Server etc), which are some of the big OS-level reason for using remote GUI management tools, has improved its command-line management story and added UI-less server editions
- Network improvements (more bandwidth, more clouds and POPs) made existing 'remote desktop' protocols more tolerable on the Internet, all the way back to the venerable VNC and RDP
What kind of remote UI app/workload do you see as common today?
Do you commonly encounter people who use networked X for scenarios that would not be better solved by other protocols? Cause I have basically never seen that in the last decade.
Not true. Chrome OS uses Wayland and has 40% of the Linux desktop market. (Source: https://www.statista.com/statistics/218089/global-market-sha... 1.1% Chrome OS vs 1.6% Linux ). Given trends, Chrome OS alone could make Wayland the dominant Linux display server in a couple years.
Chrome OS using Wayland is about as relevant as the fact that Intel uses MINIX in its chipsets is to MINIX's market share: it's almost entirely Google-internal, with very tight constraints on its environment both in terms of hardware and software. It does not generalize to the general Linux use case, or have much relevance to developers and development outside of Google.
Wayland could just as easily be HypotheticalDisplayFooServ and the result would be the same for everyone that isn't Google.
Sad. It's probably unpopular, but Wayland is not ready yet IMHO and is lacking on a conceptual level. Yet another few years/months until most bugs are fixed and more broken functionality...
This just means that development of _new_ features and research should take place on Wayland, where it belongs. From what I've read, X has been a long evolution into a hodgepodge of historical but effectively dead interfaces with newer ones crammed alongside... it doesn't make sense to continue the tradition of cramming more features into X, but that doesn't stop people developing new things on top of it while Wayland matures.
You mean _parity_ features? SSHing into a server and launching a small GUI tool is still dead. It was killed by Wayland intentionally and willfully as a fundamental design idea. In fact, the feature of doing that is considered "not part of wayland" by Wayland designers.
The root problem is that wayland replaced half of x.org and left the rest for someone else to figure out.
There have been a few attempts to patch the core library, most notably [0], but those have fizzled out, probably because they'd break backwards compatibility. (Another victim of "We'll figure it out later".) The most recent work appears to be a work in progress tool for this at [1], but it's also the sort of thing that could have, and should have, been done 10 years ago.
PipeWire is video streaming, not remote rendering. They're assuming a rack of GPUs in the datacenter and a virtual circuit dedicated to me, but in reality I have a congested link and one GPU attached to my display.
The question is which is more data: the rendered result, or the data needed to do the rendering.
If the rendered result is lighter, then a video stream makes perfect sense.
If sending the data over to render is lighter, there's still one more issue: that GPUs are still not interchangeable, and the server hosting the application needs to have the necessary information about all the possible GPUs, their quirks and limitations. And you might still get inconsistent results at various clients.
My personal guess is that the data to render is a lot more than the data required to do video streaming. So much still depends on doing CPU rendering directly to textures. Although more and more is moving to the GPU, so...
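A rough back-of-the-envelope comparison, just to put numbers on that intuition (the figures are illustrative, not measurements):

    #include <stdio.h>

    int main(void)
    {
        /* Shipping the rendered result raw: 1080p, 32-bit pixels, 60 Hz. */
        double raw_bps = 1920.0 * 1080 * 4 * 8 * 60;   /* ~4 Gbit/s uncompressed */

        /* The same content as a typical compressed video stream. */
        double encoded_bps = 10e6;                      /* ~10 Mbit/s, ballpark */

        printf("raw framebuffer stream: %.1f Gbit/s\n", raw_bps / 1e9);
        printf("encoded video stream:   %.0f Mbit/s\n", encoded_bps / 1e6);
        printf("ratio:                  ~%.0fx\n", raw_bps / encoded_bps);

        /* The point: the rendered result compresses down to something tiny,
         * while the inputs to rendering (textures, models, intermediate
         * buffers) can easily be gigabytes that would otherwise have to
         * cross the wire. */
        return 0;
    }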
X forwarding is pretty awful over congested links and high-latency links in my experience.
And from what I've seen, if a program has even a medium-low level of graphical intensity it's not going to interact well with X forwarding. So anything that previously worked should still work without a GPU.
Synergy (https://symless.com/synergy) is something that I've been searching for a good, Wayland-compatible, replacement for. It's simple, but I've gotten very accustomed to the ability to plop down a laptop (Mac, Linux, whatever) next to my desktop and instantly have another screen-worth of real estate to work with.
Wayland, by all accounts I've seen to date, just doesn't have the hooks to allow a tool like Synergy to exist. :(
It's sad that maintenance mode != dead. They should have put X-Windows out of its misery decades ago!
What's actually sad is that its replacement, Wayland, didn't learn any of the lessons of NeWS (or what we now call AJAX) and Emacs.
They could at least throw a JavaScript engine in as an afterthought. But it's WAY too late to actually design the entire thing AROUND an extension language, like NeWS and Emacs and TCL/Tk, which is how it should have been in the first place.
NeWS was architecturally similar to what is now called AJAX, except that NeWS coherently:
+ used PostScript code instead of JavaScript for programming.
+ used PostScript graphics instead of DHTML and CSS for rendering.
+ used PostScript data instead of XML and JSON for data representation.
Designing a system around an extension language from day one is a WHOLE lot better than nailing an extension language onto the side of something that was designed without one (and thus suffers from Greenspun's tenth rule). This isn't rocket surgery, people.
>If the designers of X-Windows built cars, there would be no fewer than five steering wheels hidden about the cockpit, none of which followed the same principles — but you’d be able to shift gears with your car stereo. Useful feature, that. - Marcus J. Ranum, Digital Equipment Corporation
That's the Wikipedia summary, but the details matter heavily. NeWS was developed in the time before "personal computing", for workstations where all software was trusted. The threat model was "what threat?"
It also baked in assumptions about the graphics model and where the expensive parts are. It has the same fatal flaw as the X11 graphics model: that rendering will be far slower than I/O traffic, so socket bandwidth will never be the bottleneck.
Embedded programmability has ended up being a security disaster. It was perhaps ahead of its time, but we understand its flaws and principles now.
You seem to be under the misapprehension that I'm advocating that X-Windows be replaced by NeWS in 2019. That's not at all what I'm saying.
I guess you're one of those people who always uses the noscript extension in your browser, and drives to the bank, pays for a parking place, and waits in line instead of using online banking.
Have you ever used Google Maps? Are you arguing that everybody should turn off JavaScript and use online maps that you scroll by clicking and waiting for another page to load?
You must really hate it when WebGL shaders download code into your GPU!
Considering I'm the author of one of the more technically advanced WebGL apps out there ( https://noclip.website/), I understand the power of the web as an application delivery platform.
NeWS-style programmability does not have the same advantages. I cannot host a NeWS application in one place and send a single link to let others run it.
The NeWS architecture put graphics rendering responsibility on the server with Display PostScript (also a mistake X11 made), and the scripting was so you could design a button in one process and instance it in another. It was a workaround for the lack of shared libraries, not a way of moving computation to a data center (the reason you use AJAX).
No, NeWS didn't use Display PostScript. A common misconception. Adobe's Display PostScript extension to X11 didn't have any support for input, event handling, threading, synchronization, networking, object oriented programming, user interface toolkits, window management, arbitrarily shaped windows, shared libraries and modules, colormaps and visuals, X11 integration, or any of the other important features of NeWS.
We actually wrote an X11 window manager in NeWS, with tabbed windows, pie menus, rooms, scrolling virtual desktops, etc. Try doing that with Display PostScript.
And it actually performed much better than was possible for an X11 window manager, since X11 window managers MUST run in a separate process and communicate via an asynchronous network protocol, incurring lots of overhead like context switching, queuing, marshaling and unmarshaling, server grabbing, etc.
You seem to have a lot of misconceptions about NeWS, and how and why it was designed and implemented. I suggest you read the X-Windows disaster article I wrote in 1993, and the original paper about SunDew by James Gosling, "SunDew - A Distributed and Extensible Window System", which he published in Methodology of Window Management in 1985.
Again: I'm not advocating that X-Windows be replaced by NeWS in 2019. I'm saying that Wayland didn't learn from the lessons of NeWS. And that a much better solution would be to push Electron down the stack to the bare metal, to become the window system itself.
How do you reconcile your use of WebGL and JavaScript with your distaste for embedded programmability? Or do you just hold your nose with one hand and type with the other, the way I program X11? ;)
No response? I'd still like to know how you reconcile this:
>Embedded programmability has ended up being a security disaster.
With this:
>I'm the author of one of the more technically advanced WebGL apps out there
Do you believe your "more technically advanced WebGL app" is a "security disaster"?
If so, perhaps you should take it down, instead of linking to it! I'm afraid to click on such an ominous link to what you describe as a security disaster.
I had a response and even posted it for a few seconds but deleted it because the back-and-forth would continue so I just wanted you to have the last word peacefully.
But since you posted a second time I'll at least tell you why I didn't give you an in depth response.
> I guess you're one of those people who always uses the noscript extension in your browser
Imagine that not only your browser, but the entire system is crippled by poorly written trackers, ad spots with animated sprites floating over mp4 videos, and 0day exploits like the one that was recently found in the wild (and there were plenty of vulnerabilities in PostScript implementations as well, so in this sense your analogy is pretty much spot-on). Sure, that's the system of the future that everyone should've switched to decades ago.
Back in the day I worked on a system built on NeWS. Hand coding PostScript was an adventure. At the time NeWS certainly looked like the future. Oh, and the server-side application (written in C) had an ad-hoc version of about 3/4 of Common Lisp. Every function call passed an array of void star star.
Wayland is literally never going to be ready unless it’s forced to be. How long has Nvidia promised to support it now, and still unless you’re using Nouveau it’s basically not usable on Nvidia cards.
That's been the argument for every Linux problem for at least as long as I've been using Linux. WiFi drivers used to suck. To the point that you had to be careful which WiFi card (yes, card) you bought, because only a couple of models worked on Linux at all. This situation took an extremely long time to get better. The same thing happened (and is still happening) with graphics cards. But at least most of them sort of work, instead of outright failing.
As long as enough consumers don't actively demand decent Linux (or BSD, whatever) support, it will never come. I'd suggest voting with your wallets, but with the laptop market being the steaming pile of shit that it is, that's virtually impossible.
The year of the Linux desktop...just like fusion power, is always just around the corner.
Sometimes manufacturers lie to us, though. Like the Thinkpad x1 Carbon was supposed to be "linux-ready," and then it ships with sleep functionality broken for a couple months until they managed to get a BIOS fix through.
I tried to vote with my wallet, but this is what Linux is like /shrug
It's actually easy to explain - ditch Nvidia, they aren't supporting Linux properly. Pretty clear message; users either get it and ditch Nvidia, or they don't, and then there's nothing you can do to help them. It's totally Nvidia's fault.
My Nvidia GPU was fine until GNOME 3 happened. See, it's not my fault. It had good drivers and decent support.
Now I use only Intel because... well, it works. But I don't bother with 3D gaming, of course.
My point is that you can blame the manufacturers because their support is broken, but you can't blame the users because they will use something else (I know I did: XFCE works great).
EDIT: Gnome shell was initially released in 2011. This may or may not have changed, for better or worse. I moved on.
I agree, you can't blame users for it, but you can totally tell them to try to switch. DE and Wayland compositor developers can't spend resources cleaning up the mess that Nvidia created by refusing to upstream their driver and preventing Nouveau from working properly as well.
Users not knowing or caring about these details is precisely why users should be kept away from the management of these projects. The details do matter, even if the user lacks the context or domain specific knowledge to make heads from tails of those details.
Sure, but they already know this. You or anyone else on Hacker News is not clarifying this for them.
Meanwhile, telling users who to blame does nothing for them. They don’t care. To them, the best entity to blame is the distribution, that’s pretty much their job.
But while blame doesn’t fix problems, neither does doing nothing. What I’m suggesting is, the move to Wayland needs to be pushed harder if anything is going to be fixed. Nvidia can’t hold it back, that’s not sustainable.
But again, users can’t do that. Distributions can do that. Distributions can tell people, sorry, we can’t support Nvidia, contact Nvidia, they already promised to work towards this, and what they produced is incomplete. If they continue to support X11, there's no urgency to fixing the issues with Wayland.
This isn’t really about blame though, it’s about responsibility. Somebody has to fix the problem. Open source projects like Nouveau can help, but Nvidia has blocked progress by requiring signed blobs and continually not providing access to important information (like technical bits for reclocking.) I think that is pretty much reason enough to suggest Nvidia should be shouldering the pressure and responsibility to deliver a good Linux experience, and we should absolutely hold them to it, until or if they stop blocking open source efforts on purpose.
Establishing blame would tell you whose fault it is that something is broken. That answer is nearly useless imo. The only thing that matters is how do we fix it, and possibly what can be done to prevent it from happening again. Saying Nvidia should fix it is different from worrying about who is to blame; if you frame it that way, they could fire back about how kernel licensing restrictions make their life harder, and nobody cares about that issue either.
> "Meanwhile, telling users who to blame does nothing for them. They don’t care. To them, the best entity to blame is the distribution, that’s pretty much their job."
I think there is value in telling users to vent their frustration in a productive direction; namely at Nvidia who has the power to change the situation, rather than at Wayland developers who are likely just as frustrated. I think we both generally agree on that.
Yeah, Nvidia is the right entity to bother no doubt, I just have a different idea of the tone and attitude. The approach I’ve seen before feels more like guiding an angry mob. I’m not really an expert but I feel what we need is firm decisions (drop support, it’s time) and clear messaging (vote with your wallet, and tell Nvidia you need this.)
Apple dropped support for Nvidia graphics. Linux will be next at this rate.
Still, I want to shy away from blame. Blame is complicated, can be deflected, and often leads to hatred. I prefer to say, Nvidia is responsible, nobody else can fix this.
As far as Linux is concerned, the only customers that matter to NVidia are CUDA users (who don't care about their UI on cloud instances) and Hollywood studios (who have their own in-house distributions).
Drivers are often bad and nvidia's especially so but at this point if it's been years and you can't get a desktop window manager compositing on top of a GPU stack the problem seems increasingly likely to be you and not them. Blitting quads is not THAT hard. We've got vendors like Mozilla out there rasterizing entire webpages on NVIDIA (and AMD now, I think?) GPUs but Wayland can't reliably composite window bitmaps for some reason?
Is that still the case? I'm running FC29 with the nVidia closed-source drivers and I haven't had issues. (I have had issues on a laptop with switchable graphics, though, and Nouveau was the solution there.)
Is this new? Last time I tried, Nvidia on Wayland had performance issues and did not support Optimus at all (Edit: after rereading what you said I am guessing that is still an issue.)
If it is working though I retract what I said, though it’s absurd how long it took to happen.
Even without wayland, in recent Fedora releases (I forget if it started with 28 or 29), I've found optimus switchable graphics to be impossible or at least impractical to configure. What's become easier is having the entire X display use the nvidia gpu, with the intel gpu transparently slaved to relay pixels to its connected laptop display.
But, it does return the annoying, periodic driver breakage, where a dnf update replaces the xorg-x11-drv-nvidia RPMs and suddenly the userspace is incompatible with the still-running kernel module's slightly older API, so new processes cannot use opengl until I reboot. This was one benefit of optimus via the optirun infrastructure. The main desktop was on the intel gpu and I could actually unload and upgrade the nvidia module if desired, without forcing a disruptive reboot.
Do you have any documentation, or even random notes, on how you set that up?
I have an optimus laptop and couldn't make the proprietary drivers work (tried both rpmfusion and negativo17). I'm happily using nouveau for now, but at some point I'll need to use CUDA again.
I have an older Thinkpad T440p with Geforce GT 730M running Fedora 29. I use the MATE desktop rather than GNOME.
I just installed the akmod-nvidia packages from rpmfusion which also pulls in xorg-x11-drv-nvidia and xorg-x11-drv-nvidia-cuda. Search for "PRIME" discussions in the README.txt under /usr/share/doc/xorg-x11-drv-nvidia. I cannot be sure every bit of my config is still necessary, as I left it alone once I got it working.
First, I have /etc/X11/xorg.conf.d/nvidia-prime.conf with a small amount of config:
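To give a flavour rather than my literal file: it's essentially the stock PRIME output layout from the driver documentation, with the BusID adjusted to whatever lspci reports for the discrete GPU on the machine (so treat the value below as a placeholder, not something to copy verbatim):

    Section "ServerLayout"
        Identifier "layout"
        Screen 0 "nvidia"
        Inactive "intel"
    EndSection

    Section "Device"
        Identifier "nvidia"
        Driver "nvidia"
        # Placeholder: must match the discrete GPU's address from lspci
        BusID "PCI:1:0:0"
    EndSection

    Section "Screen"
        Identifier "nvidia"
        Device "nvidia"
        Option "AllowEmptyInitialConfiguration"
    EndSection

    Section "Device"
        Identifier "intel"
        Driver "modesetting"
    EndSection

    Section "Screen"
        Identifier "intel"
        Device "intel"
    EndSection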
To be honest, my old GPU is too slow and with too small RAM to be worthwhile for OpenCL. I find it more practical to just use the Intel OpenCL runtime for multi-core CPUs on this old quad-core laptop or on newer workstations. I do get some use out of xorg-x11-drv-nvidia-cuda on a Titan X desktop GPU though.
Why is your standard for a Linux desktop renderer whether a pretty Linux-hostile company's niche laptop energy saving hybrid GPU driver works well on it?
Last time I checked, screen sharing applications didn't work with Wayland (Meet, Slack, Skype, etc.). All of them work only with X11. I need them to work with my customers, so I can't use Wayland until this problem is solved.
Hadn't heard of Sway, but the point still stands... Most of the desktop environments don't support Wayland. Cinnamon is the big one for me, but KWin doesn't either, and it likely has a much larger user base than anything other than GNOME.
I'm wondering whether, if all the resources that went into Wayland for the past TEN years had instead been directed at slowly evolving X11 in the same direction, we would be in a worse or better situation.
It seems to me very similar to the Python2/Python3 debacle.
What pisses me off more than anything else is having to replace a lot of tools that are working perfectly with xorg but are not compatible with wayland.
Yes, I know that I'm using an unconventional setup, but the freedom to do that was one of the reasons that attracted me to a unix environment. Redhat is working very hard to change all of that for a "year of the linux Desktop" that (in my opinion) will never come.
>I'm wondering whether, if all the resources that went into Wayland for the past TEN years had instead been directed at slowly evolving X11 in the same direction, we would be in a worse or better situation. It seems to me very similar to the Python2/Python3 debacle.
Are you arguing that a decade of constant breaking changes to a huge number, if not the majority, of desktop linux programs, and the accompanying maintenance challenges (as versions of window managers, Gtk and the like would all be tightly coupled to specific Xorg versions) would have been a better approach than a rewrite + compatibility shim (XWayland) that allows for a fair amount of backwards compatibility?
With the coming IBM transition, I'm expecting Red Hat itself to go into "Hard Maintenance Mode" fairly quickly. Another comment here estimates 10 years of RH support, but I'm confident that X.org will outlast RH.
If wayland gets abandoned it will have to be re-invented. It solves real problems with X11. That doesn't mean Wayland itself doesn't have problems, though. Over time they will get solved. If people or companies need them solved sooner, I'm sure the Wayland devs will welcome any contributions.
> At every step of the way, the underlying development emphasises security, performance and debugability guided by a principle of least surprise in terms of API design.
I'm very happy they started that list with security, which links to a nice security overview:
Oh yeah! Look at IBM's previous acquisitions especially over the last decade. They're one of the worst tech employers too, for anyone remotely senior. I give it a year max before the 1st RH layoff wave (in IBMese, a "Resource Action" or RA).
The Wayland developers have a security model [-1] that is hostile to "power-users" (those who like to use the Unix (or other OS) programming environment to its full potential) and the visually impaired (eg., blind). See [0] [1] [2] to see what features I am talking about.
It is possible to implement some of those features on a per-compositor basis, but the result of that will be graphical API fragmentation, as programs that interact with GUIs will need to have separate code for each compositor. And the work is not done even for Gnome (more precisely Gnome's Wayland compositor and the Gnome applications that use it) yet.
On the other hand one could say, eg. "Why not make a compositor accessibility protocol on top of Wayland?". End result of that, it is easy to guess, would be something worse than X Windows (because of even more layers of abstraction, and possibly even more incompatible standards/APIs/protocols), which the Wayland people were supposedly trying to escape from.
Edit: Another thing that makes Wayland (at least without an extension ...) unsuitable to replace X Windows is forced compositing. This means unavoidable double buffering and thus worse video performance (especially noticeable for interactive stuff like video games).
[-1] I prefer calling it security theater, because it does not bring any real security improvement in practice.
The X Window System has had a great run; a whole generation. And it's still going, even if it's trailing edge tech now. Thanks Bob Scheifler, Jim Gettys, and the whole crew.
Fedora 31 is looking to be the most advanced OS yet. If you haven't tried it yet, I strongly recommend trying it out (for those who don't know, Linux can be tried out from a USB drive, via a "live boot", without affecting your hard disk).
Fedora is a much more polished experience than Ubuntu, and other than the lack of Sketch and Photoshop, I daresay on par with OS X.
What a blanket statement. Ubuntu is the same GNOME with three super popular extensions enabled, the Yaru theme and some minor dconf tweaks. It even provides a simple gnome-session package in case you want an even more vanilla experience. The software selection is quite standard and sensible. The installer is good looking and idiot-proof. It's not that I dislike Fedora, it's just that it's not a more or less polished experience than Ubuntu.
From a normal user's perspective it didn't seem any more advanced than Ubuntu. Exactly the same, but installing software on the command line takes 2 minutes longer.
>The reality is that X.org is basically maintained by us and thus once we stop paying attention to it there is unlikely to be any major new releases coming out and there might even be some bitrot setting in over time. We will keep an eye on it as we will want to ensure X.org stays supportable until the end of the RHEL8 lifecycle at a minimum, but let this be a friendly notice for everyone who rely on the work we do maintaining the Linux graphics stack, get onto Wayland, that is where the future is.
Yes, the constant refreshing of metadata is very annoying. You can disable it for searches, though. GNOME Software sometimes becomes very unresponsive because of this (and because of flatpaks silently downloading a >1GB runtime). Probably a reason why it gets so much FUD.
I wish I could move to Wayland, but I make heavy use of X-only automation tools ( https://github.com/autokey/autokey for keyword expansion and cross-app macros, and wmctrl/xdotool for window switching using https://github.com/ronjouch/marathon ) and nothing similar exists under Wayland.
So far, it doesn't look like similar tools are planned, and neither GNOME nor Sway offer these features I'm looking for :-/ .
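To give a concrete flavour of what breaks (window title and text here are made up for the example), these are the sort of X-only one-liners in question, which have no Wayland counterpart that I know of:

    # focus the window whose title matches, or start the app if it isn't running
    wmctrl -a "Firefox" || firefox &
    # type a snippet into whichever window currently has focus
    xdotool type --delay 20 "some boilerplate text"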
Oh, I was going to mention Sway too, since of all the compositors I've seen it is the only one that seems to care about replicating such functionality. Is it really still that lacking on the automation front?
"Extended Window Manager Hints, a.k.a. NetWM or Net WM,[1] is an X Window System standard for window managers. It defines various interactions between window managers, utilities, and applications, all part of an entire desktop environment. It builds on the functionality of the Inter-Client Communication Conventions Manual (ICCCM)."
Well, then we come to the same conclusion: it's Xorg-only and won't fly under Wayland. Or am I missing something?
Everybody's blaming Red Hat. Why should they maintain something they're no longer using? If a credible new maintainer would step up, I'm sure Red Hat would be happy to hand over the reins.
>There's a lot of non-Red Hat contributors to X.org.
There's maybe 5 people that really "know" Xorg top to bottom and none of them want to work on it any more. The ones that were working on it anyway were being paid to do so and many of them are also working on Wayland on the side.
A lot of people have made small contributions, but that's not the same thing as being able to design and develop large new changes. The people with the knowledge to do that, don't have the will to do so any longer.
Allegedly: screenshots and autokey-like functionality
I don't know who says that Wayland is ready for "prime time", but I also don't know how a major distro would force Wayland without an implementation of those features.
No. The Linux stack is pretty much whatever Red Hat says it is. If Red Hat says X is moribund, that will prompt upstreams to drop X support from their toolkits and the other distros will fall in line and go full Wayland. Maybe Slackware will hang on for a couple releases more. No one's going to maintain that big chungus code base just to buck the direction the wind is very obviously blowing.
Not really. It is well known across the industry how you can get drivers into Linux now. There are many players, big (Dell, Lenovo) and small (System76, Entroware, etc.) that sell Linux-supported devices.
I think OP was thinking about commercial players who do not like to upstream their driver code. This might be for copyright/GPL reasons, or for trade-secret / code obscurity reasons. This is impossible with linux drivers. To quote from your link:
So, if you have a Linux kernel driver that is not in the main kernel tree, what are you, a developer, supposed to do? Releasing a binary driver for every different kernel version for every distribution is a nightmare, and trying to keep up with an ever changing kernel interface is also a rough job.
Simple, get your kernel driver into the main kernel tree (remember we are talking about drivers released under a GPL-compatible license here, if your code doesn’t fall under this category, good luck, you are on your own here, -snip-).
Thing is, this excludes quite a lot of drivers from getting into the kernel. And that kind of sucks.
I'm a noob as regards kernel specifics, so this is probably a stupid question: basically, if you want the kernel to support all possible hardware in the world, you would need to add all those drivers to the main kernel? That seems like a bad idea, because the kernel's source code would not fit on a terabyte-sized disk.
If you want your driver to run in kernel space you need that code to be in the main kernel.
Anything that runs in user-space can be kept outside of the main kernel repo.
There is a bad middle version where you put some interface exposed to user-space into the mainline kernel and then dump in binary blobs to interface with it. The bad part is doing this if you will be the only user of that interface, and if you do it to intentionally keep your driver out of the kernel. I believe there was/is an attempt by Nvidia to essentially get a shim for their Windows drivers into the kernel.
The good version of this is where the userspace interface evolves naturally, and in cooperation between multiple consumers - or at least where there end up being multiple consumers of the interface.
That writeup does a good job of explaining the current situation, but it doesn't completely refute the validity of the demand. The primary people who are asking for stable driver APIs and ABIs are desktop folks who want to be able to run closed source drivers for their GPU and other consumer peripherals. Which means it's a very limited set of architectures, or maybe just one family - x86 and x64. The other thing is that Linux has a non-modular monolithic kernel design, which means that simply having a stable API for GPU drivers isn't enough: you'd probably also need a stable PnP API layer, a stable IO API layer, a stable file system layer, and whatever other OS services a driver would use. The argument about deprecation of interfaces and fixing bugs is valid, but everything is a trade-off in OS design. Linux's implementation of a monolithic design turned out to be very stable, even though, in theory, microkernels have a vastly better design when it comes to stability. Whether theoretical benefits are realized at the ground level depends on a LOT of factors. Personally, I think the effort required to create a stable API layer would be too enormous to undertake at this point.
> Linux on the desktop would be a reality already if Linux had a stable driver API.
Linux on the desktop (and other consumer devices) is a reality, via Android and ChromeOS. What's not a reality is consumer use of the userspace tools and desktop environments lots of people mentally associate with Linux, but I doubt very much that's about driver APIs.
And "Android's adoption shows that Linux doesn't need a stable driver ABI" is a funny position to take, given how many problems that's caused Android, to the point where Google is adding a way for vendor drivers to work across versions of Android.
Incidentally, the Windows Subsystem for Linux was reportedly the result of an effort to run Android userspace on top of the Windows Kernel that was at least somewhat successful, so your assertion is completely true.
This is especially bad with anything based on Chromium/Electron and fractional scaling at the moment. The Ozone/Wayland back end is still a way off it seems.
Why do you believe that? Apparently it's possible to build a somewhat functional version from their git branch. Anyway, chromium is just the first step. Then electron has to work with the wayland version of chromium.
I want to switch to wayland, but it’s impossible until Nvidia get their shit together on linux. I hear AMD has awesome open source drivers now, time to give them a shot!
Not only that, OpenCL is a lower-level alternative to CUDA, somewhat comparable to Vulkan/OpenGL relationship. Trying to actually write something useful with OpenCL is... rough.
Have been using Sway on Wayland since 1.0 came out six months or so ago and the only awkward part has been HiDPI (I have an Intel card so am not affected by the nVidia support). Otherwise good riddance to X, honestly. It felt old 10 years ago! Here's hoping the major desktops etc. can finally sort out the remaining instability/rough edges over the next year or two.
And it felt old 20 years ago when I started first using it! And I was on a college campus with networked Unix workstations in computer labs and departments. The model environment for this vaunted “network transparency” that never quite worked right, was never quite dependable enough to really put it into your collaboration workflow. Hell, I think only the CS department was able to get home directories fully implemented and useful.
Here's hoping that the wlroots protocols become more widespread so that one can actually make cross-DE tools like taskbars. I'm using tint2 on Xorg at the moment and there just isn't any alternative on wayland - you have to use what your DE gives you or suck it.
That means a plethora of taskbar extensions on GNOME, all of which suck (poor multi monitor support, poor workspace support). Having to have a separate software project for each DE is the way to get bit rotting projects nobody cares about.
I use Plasma, and test the Wayland compositor every couple of months. Last time I checked, many features were still missing, performance was bad, and I still got crashes occasionally. If it's really gotten into a usable state since then, that's great.
Resume from sleep causes a black screen. Crashes do happen occasionally, where the Plasma desktop will die and restart itself... this happens more frequently with the latest releases. In recent releases, with multiple panels, sometimes the kicker menu doesn't open on one of them. On multi-monitor setups, it doesn't remember the monitor positions after reboot.
On the plus side, I think the copy-paste situation with XWayland is finally improved.
I really hope they consider that Linux's graphics stack isn't an end in itself, but a driver for running the relative wealth of graphical apps (GIMP, Inkscape, Krita, Blender, Firefox, and more) many people spent the last 30 years or so developing. Those aren't going to be rewritten from scratch into Electron-based apps or whatever, and losing them, as well as BSD and Mac OS compat, is no option IMHO.
Why do you think those apps would need to be rewritten, and what does Electron have to do with that? Most of these apps already run natively on top of Wayland, as they are written using a toolkit like GTK or Qt, and those toolkits support Wayland.
Also, there is XWayland, which implements the X protocol on top of Wayland, so any X application should continue to run. The only thing they talked about stopping development on is the standalone X server X.org.
I have been using Wayland on my desktop for a while and am quite happy about it.
It's just that there have been hardly any new GUI apps for about ten years or longer, neither on Linux nor elsewhere (with few exceptions). In that situation, it makes me nervous to hear about grand plans to refactor/make obsolete the graphics stack. To what end, if there aren't any new apps coming anyway? But OTOH, if Wayland benefits or motivates the developers and saves resources, more power to them.
Even if there might not be that many completely new GUI apps - I do think there are - any app that gets updated, and most of them do, is automatically based on Wayland if it links against a current Gtk/Qt, and consequently makes use of Wayland.
I actually think it is the other way around: Wayland gives Linux on the desktop a whole new life as it is creating the technological foundation for a modern UI.
GTK is now intentionally breaking API compatibility every release (but putting out a “supported until the end of time” release every two years).
Unless they are porting GTK2 (which supported GTK1 apps as well) to wayland, then a decade of desktop apps will have to be rewritten (and will probably just bit rot to oblivion).
Also, what about window managers? Those have to be rewritten for wayland, but they peaked in usability (for me) long ago.
Red Hat is going to keep maintaining X11 for at least the next 10 years (RHEL8 life cycle), I will be surprised if GTK2 hasn't already bitrotted into oblivion from other causes by then.
Electron needs to be pushed down the stack to the bare metal, to become the window system itself. Then there will only be one instance of it running, that every other application can share, and the window manager and user interface and networking can all use standard web technologies, that everything else uses.
Isn't this exactly what Chromebooks do? Please correct me if I'm wrong, but they don't run X-Windows or Wayland, do they? And if they did, then what possible benefit would there be over just running the browser directly on the hardware, with the thinnest possible window management and graphics driver layer?
If you disagree, then tell me what X-Windows or Wayland can do that a modern web browser with bare metal drivers couldn't do much better?
You've got JavaScript and WebAssembly for programming, WebGL and <canvas> and <video> and DHTML and CSS for drawing (including low latency rendering with desynchronized canvases), JSON and XML and ArrayBuffer for data representation, HTTP and WebSockets and WebRTC for networking. What is it missing?
You're catching downvotes (have some goddamned respect people) but I'm facing this choice right now.
I have a project that involves a compiler written in Prolog targeting Prof. Wirth's RISC chip (for Project Oberon[1]), and I want to make it easily accessible.
I can use e.g. TCL/Tk/Tkinter/Python+SWI-Prolog, and make a native app that the user has to install... or...
There is a Prolog in JS[2], and an emulator for the chip in JS[3], and rich widget frameworks (I like Knockout[4], but there are literally dozens in JS), so it's pretty easy to make a SPA that shows off the code (literate programming style) along with live compilation and emulation, and you can even let users save their work[5]. "Installation" is just visiting the page.
I keep trying to come up with reasons NOT to go that route (out of some perhaps-misplaced JS prejudice) and I can't.
> There's really no need for X-Windows or Wayland any more.
Yeah, you're looking (in typical nerd fashion, I might add; no pun) for the "optimal" stack for you and your future development workflow. But, you know, there aren't many GUI apps worth running on the Electron/browser stack. I need capable pixel and vector graphic editors, 3D modelling apps, audio editing apps, and tons of other apps for media production which are extremely time-consuming and expensive to create. I don't want to lose what we've achieved in F/OSS, because I don't believe we're going to get replacements for the likes of Inkscape, GIMP, G'Mic, Krita, Blender, etc. in these times of short-lived stacks and SaaSes, ever. And why would we want replacements? These apps have just begun to mature, and are working extremely well for me after 10-20 years of development and a bit of getting used to them.
Maybe not, but the toolkits they're based on will drop X11 support within the next few years. X11 is a technological dead end, NetCraft^WRed Hat confirms it.
Not sure if I understand correctly, but the absence of this is what has prevented Nvidia's proprietary drivers from supporting GPU switching without having to restart the X server session.
It's more an admission of the current state of things. Adam Jackson, the previous release manager, stepped down, decided to switch gears to doing other graphics-related stuff, and nobody has volunteered to make a new X server release.
xorg-devel has always been a relatively slow mailing list, and very little new R&D has been done on the X server for the past 5 years or so. Latest I can think of is the DRM lease work, which is still ongoing.
x.org has always been that way. Slow, hard to get started developing, and not really "sexy". it is critical infrastructure, but nobody wants to maintain it.
Wayland was at least started by people frustrated with the real problems in X (unlike most other replace-X projects I've seen over the last 20 years, which were someone who had no idea what was really wrong proposing a solution that didn't solve the real problems).
If it's working, why break it by adding new stuff?
I'm very fond of old software that has been battle tested, rather than jumping on the bandwagon of the newest thing which is invariably rough around the edges.
Just look at the terminal latency of the Wayland default compositor vs X, or 3D performance with NVIDIA, or the weird configuration - it doesn't work as well as the old thing.
Because if "working" was the only thing people ever wanted, we would be using Windows 95 or something. People want "not crap", and everything related to X11/Xorg is a gigantic pile of crap.
> terminal latency of Wayland default compositor vs X
I'm not sure what you mean by "default compositor" — Weston? — and what's "X" — Xorg without any compositor, with the Windows 95-esque situation of everything drawing into one buffer?
Yeah, you can decrease latency by a tiny bit by not compositing at all, but the result is totally unacceptable visually. People generally don't want to look at Windows 95 anymore.
And this is the issue with Wayland vs X11 when it comes to compositing: Wayland forces it down your throat, whether you like it or not, whereas X11 enables it but doesn't mandate it, so if you dislike it you just don't use it.
(though this isn't the only problem Wayland has... and FWIW Windows 95 is in many fronts superior to any desktop you can find on Linux)
Most users want a UI from $CURRENT_YEAR, not one that makes them think of Spice Girls songs and AOL Instant Messenger.
> Wayland forces it down your throat, whether you like it or not, whereas X11 enables it but doesn't mandate it, so if you dislike it you just don't use it.
X11 is architecturally ill suited to a compositing environment. Inasmuch as compositing solutions exist, they are janky hacks that introduce more processes, more context switches on the hot path, and more latency than Wayland, which was based on $CURRENT_DECADE graphical principles from the ground up.
Sometimes you can't square the circle and engineer a general solution. Sometimes you have to engineer for the common case only. For desktop usage, the common case is "user who is used to Windows or macOS and does not want to regress backward in terms of UI". For such users, a noncomposited Windows 9x-like desktop is unacceptable. A broken, laggy, hard-to-maintain pile of hacks is also unacceptable. Wayland solves both those problems, which is why virtually ALL of the hackers working on the Linux graphics stack have jumped ship from X to Wayland. Like it or not, you will eventually be using Wayland too.
> Most users want a UI from $CURRENT_YEAR, not one that makes them think of Spice Girls songs and AOL Instant Messenger.
This is a rather incorrect assumption. UIs are not something people want; they are something people want to not get in their way, i.e. the opposite of wanting. And as for your example, modern GNOME is actually worse than AOL-era UIs: it breaks a lot of the expectations users coming from Mac or Windows have, which had proper menus that weren't hidden away, and better workflows.
Your priorities are wrong. Latency is the most important metric for doing actual work. This means that the windowing system should allow clients to blurt out pixels whenever they want no matter where the current scanline is positioned.
Apparently there are patches on the way [1] that try to mitigate the severe design flaws of wayland in that regard. If wayland will ever reach X11-like performance remains to be seen. Right now this is not the case.
Partial update is not an improvement for latency in the way you're thinking about it. It's mostly an optimization for tilers, so they don't have to re-render some parts.
You seem to broadly misunderstand several parts of the modern display architecture -- the goal you insist any sane window system should provide hasn't been provided by any, including the X Window System.
I think it's been a long, long time coming. X11 was effectively left in "hard maintenance mode" in the early 1990s when the Unix vendors abandoned the workstation market. Since then every new system based on *nix has used some other display server, while Linux desktops used X11 mainly for legacy reasons. At some point 10-15 years ago, Linux vendors (Red Hat) revived X.org. But something better has been wanted for a long time.
"X11 is the Iran-Contra of graphical user interfaces" as the old unix hater book said decades ago.
It feels like it to me, but my understanding of how things work in this area is pretty limited. I've been using Xubuntu for a long time now, does it currently depend heavily on X.org? I'm having a hard time figuring that out searching the past 20 minutes.
Pretty much every Linux desktop GUI has been on top of the X Window System—which has been synonymous with the Xorg implementation thereof for, oh, 15ish years now—until very recently. Red Hat's behind a totally different replacement for X (and so, Xorg) called Wayland, but desktop environments that run on top of Xorg will require major work to move them over to it, and most haven't done that yet, if they ever will. Last I checked, Wayland's also a bit half-baked at this point, from a "does it work with all my stuff and do all the things I'm used to" perspective, so news that it's the only system receiving new feature development means some risk of a fairly awkward transition period.
Notably, I think you still can't get hardware acceleration under Wayland for nvidia cards, making it unusable. Yeah that's nvidia's "fault" but to the end user it makes no difference if they upgrade their distro, the distro switches from Xorg to Wayland, and suddenly their computer sucks.
> you still can't get hardware acceleration under Wayland for nvidia cards, making it unusable
Nvidia wrote patches for GNOME and KDE to support their crappy custom EGLStreams thing alongside the usual GBM. I think GNOME and KDE have accepted them.
Porting XFCE applications (Thunar etc.) to GTK3 indeed gives automatic (or near-automatic) Wayland support. This means that XFCE applications will run natively (without XWayland) in Wayland compositors (such as sway, GNOME or KDE).
However, xfwm (the window manager) will also need to be ported to Wayland — i.e. become a Wayland compositor — which will probably be a more complicated undertaking, as xfwm-wayland will need to fulfil the role of both X and the window manager. (Yes, using libraries like wlroots or libmir will make this not completely impossible, but it will still be non-trivial.) Plausibly, XFCE might join forces and use the same Wayland compositor as, say, LXQt or MATE.
I believe so - Xubuntu is based on XFCE, which is an X based environment. I've seen some mutterings that they may eventually write a Wayland backend, but I don't think that it's in the near future
RIP, X11. You had the correct architecture. I still believe that the goals of Wayland would have been better achieved as a collection of X extensions than as an entirely new system that's lost many of the capabilities of X, including network transparency, input configuration, and window manager decoupling. Wayland is great if you want to build a Windows clone that works exactly the way Red Hat wants it to work --- but it represents a decline from the Unix ideal of configurability, experimentation, and simplicity.
The sad reality is that as bad as X is, Wayland is too much of a half-baked replacement to say that it is definitely better. Case in point: it's about 10 years old now and people still say it isn't ready yet!
One thing I'm reminded of is the huge role of Red Hat in maintaining loads of open-source software. They get to choose what direction "linux" is taking by simply having paid developers put in a lot of time writing and maintaining software that everyone then adopts. So Wayland will get adopted by most distributions, just like SystemD because Red Hat will invest in it.
Still waiting for KWin to become functional on Wayland. Some serious bugs prevent it. And where is RedHat rushing exactly? Something like adaptive sync doesn't even work with Wayland compositors yet. So good luck using your fancy adaptive sync high refresh monitor with games in Wayland session.
Even VideoCore 4 with the Mesa vc4 driver supports Wayland — since it's Mesa, of course it implements GBM and whatnot. I've used Weston on an RPi3 with Arch Linux ARM.
For VideoCore 5/6, IIUC the only driver is Mesa v3d, even better situation, of course everything should work, and it will always work since no proprietary crap driver would ever get into people's hands.
If there is something like NX, it was added as an afterthought (last time a Remote Desktop protocol for wayland came up, it was explicitly outside the scope of what wayland was designed to support).
The Unix philosophy has been abandoned at this point. What advantages have we really gained from that? More gaping security flaws? Unnecessary centralization is pushing the old Unix admins who laid the groundwork for Linux's success into the BSDs and other alternatives, and the Linux kernel will likely rot as a result. Could we consider this moment the peak of Linux development?
>Not everyone likes writing shell scripts to start services
Sure, but that isn't a good reason to remove the option to do so for everyone else.
I've been using Linux for 20+ years but the apparent shift from "stable and reliable" to "new and shiny" concerns me. The feeling of being an involuntary beta tester is not a good one.
> that isn't a good reason to remove the option to do so for everyone else
It really is. Accepting the status quo and abandoning progress is how software systems become stagnant and die. X is a bloated protocol, designed for a different age of computers. Sys V is an archaic system that makes the task of having a standard daemon run in a standard way surprisingly difficult (and also prevents parallelising startup). I'm not saying the replacements are perfect, but at least we're trying.
Maintaining perfect backwards compatibility would have an effect on progress somewhere between 'making it harder' and 'making it impossible'. Not that backwards compatibility should be completely abandoned. See XWayland, and also the fact that SystemD does still execute init.d shell scripts.
Progress can be a lack of change, if that change was regression in functionality, q.v. the large and growing number of shitty interfaces.
And by interface I mean everything from Boeing's mistake right down to the controls on a fan heater of mine.
I guess I need to explain the latter. I've had a period of joint pain in my hands due to a developing food intolerance (now under control). The fan heater control was a dial, partly flanged for grip. But just a fucking little flange, so as not to stick out too much. Trying to operate that with your digits hurting was a bitch. It would have been literally unusable for someone with strength loss from advanced age and some decent arthritis on top.
Interfaces are a very broad category in my view, and we have far too many bad ones. Anyway, sorry for the rant, but don't confuse change with better.
i want to say this as kindly as possible, but every time i have heard statements to this effect (think "paradigm shift", "new age", etc.), the presumably "better" technology (which more often than not just means newer) is always oversold.
i would reconsider your line of argument, because as someone who has heard this several times throughout my career, this is almost certainly a bellwether of disappointment.
But computers were different in the 80s when X was designed (to talk about one example). We don't use networked terminals to a mainframe, the graphics stack (hardware and software) works completely differently now, user expectation on compositon and visual fidelity has increased, and the security threat model has changed (in that it exists now and never used to).
There has been no one paradigm shift, merely 40 years of incremental advance leading to a different landscape with different requirements.
The mainframe is now a rack of servers, still separated from the framebuffer by a long network connection. The decline of X has us so desperate that we're abusing web browsers as remote displays.
> The decline of X has us so desperate that we're abusing web browsers as remote displays.
There's a fundamental difference between an X11 remote display and an application running within a web browser. This difference becomes obvious when the network connection drops: the X11 application will immediately freeze, since all of its code was running on the remote computer, while the application within the web browser can retain some of its functionality.
Every new piece of software is not progress, very little of it is actually innovative, none is better in every way and not that much of it is even usable. Having options at least accepts that, respects people's time and allows people to avoid suffering through it until it eventually dies anyway.
Speaking of stagnation, reinventing anything "not invented here" is also stagnation, not progress.
Distro maintainers fall into the following categories (with the accompanying fictitious percentages):
* 50% - maintainers who don't care about systemd
* 35% - maintainers who do care, but who feel powerless to stop the Domino Effect (and don't want to echo the familiar criticisms of it, or be associated with the "haters")
* 10% - maintainers who actually like systemd, because it is optimised for their use cases (at the expense of others)
* 5% - maintainers who can't stand systemd, and leave their distro/OS over it
> Sure, but that isn't a good reason to remove the option to do so for everyone else.
It didn't. You can still write your init scripts.
I'm not a fan of the monolith here, I dislike the whole concept of binary logs, of tight coupling with udev and journald, but seriously - once you get into how the various systemd service and target files hang together it's a (comparative) delight. And they can still trigger your scripts.
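A minimal sketch of what I mean, with made-up names and paths: a unit that does nothing but wrap an existing script, while systemd takes care of supervision and logging.

    # /etc/systemd/system/mydaemon.service  (hypothetical example)
    [Unit]
    Description=Legacy daemon wrapped in a systemd unit
    After=network.target

    [Service]
    # The script itself stays exactly as it was; systemd just supervises it.
    Type=simple
    ExecStart=/usr/local/bin/mydaemon.sh
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target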
It is like coming home one day, and the house has been demolished. There’s a joker in a bulldozer proudly proclaiming there was some dog crap on the sidewalk, and that he took care of it for me.
Solid analogy. In the case of Systemd, it was fixing problems for me that I never had, and creating problems that I'd also never had: It broke laptop sleep. It broke /etc/rc.local. It broke domainless local DNS lookups. It broke boot on one of my servers (fstab's new nofail argument? really?) And it (soft) broke a keyboard of mine. A keyboard! Astonishing.
I really wish Debian had not drunk that particular kool-aid. :'(
I'd go somewhere else but Debian (and by extension Ubuntu) have been doing binary package management the longest, and for me it really shows. I want these boxes to just sit there for years, be extremely low maintenance and only pick up security updates, and I've never gotten a better experience anywhere else.
Out of curiosity what do you prefer in it when compared to Win7?
For me Windows 10 has been a huge step backwards. Settings are spread out in multiple UWP style apps and the classic settings programs. The update scheme is annoying, and Microsoft is pushing out more buggy updates than ever before. Clicking “check for updates” surprise enrolls you into their beta channel. The telemetry is forced and they regularly reset your preferences.
I absolutely hate Win10, and this is coming from a guy who didn’t mind Win8.
I'm not OP, but I much prefer the window management in 10 to 7. You can tile windows by keyboard (you could do this in 7, but it's improved in 10), and resize tiled windows together to keep their relationship to each other. There are virtual desktops without third-party utilities now.
The console is better than it was, but still not good enough to be called adequate.
I wouldn't use it at home, but I'm happier with it at work than with 7.
To be honest, I didn't seriously use Win7 much, so I probably agree with everything you said. I've used 10 a lot more (switched to a Windows-centric company and finally built a gaming rig at home). So maybe I would have liked 7 as well. I do find the settings to be confusing—I assume they're mid-transition on those.
I have also yet to be bitten by a bad update. I'm sure my time will come.
In my experience Windows 7 was clearly the pinnacle so far. The only real new feature since then that I liked was the ability to scroll in windows that aren't in the foreground. In exchange the privacy and update issues basically exploded with Windows 10, to a point where the frequency of my system reinstallations went from "maybe once in the lifetime of the PC" to "about every 6 months". It's the main cause for my full switch to Linux and the number of reasons to not go back is only growing over time.
There's 1 app called Settings. In the current Windows 10 it does nearly everything (the most recent thing they added was battery levels for Bluetooth devices and fonts).
Yes File Explorer needs an update but that's about it.
And for good reason! It's about time, too. The "Unix Philosophy" is intellectually bankrupt. You shouldn't have to fork a process from the shell to multiply two floating point numbers (especially when the CPU can do that in one instruction), or do simple string manipulation. That's ridiculous.
> The "Unix Philosophy" is intellectually bankrupt.
The rest of your post appears to be about shell (presumably Bourne) script, leaving this claim unsupported. "UNIX philosophy" != "writing shell scripts".
> You shouldn't have to fork a process from the shell to multiply two floating point numbers
I can't remember a single time in the >25 years I've been writing Bourne shell (or bash) scripts where I've wanted to "multiply two floating point numbers". I've sent large lists of floating point numbers to various programs for processing, but other languages are far more appropriate for problems that requires any amount of actual calculation (especially floating point calculations). If I did need to multiply two floats for some reason, it would be such a rare event that the few hundred milliseconds "wasted" in spawning bc is utterly irrelevant.
You seem to be complaining about the shell by selecting a feature that is not actually needed in common use. What, specifically, were you doing that you needed fast floating point math but needed to use the Bourne shell instead of C/Python/whatever?
> fork a process ... do simple string manipulation.
For many types of string manipulation, you don't. Manipulating strings (command lines) is what the shell was designed to do! Sometimes sed/awk/etc is useful for complex tasks, but the shell's variable expansion and other builtins are generally enough for most simple needs.
And if your needs actually are complex, the cost to start another process (which is probably much smaller than you think) is insignificant.
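For instance, plain POSIX parameter expansion already covers the bread-and-butter cases without spawning anything:

    path=/var/log/nginx/access.log
    echo "${path##*/}"        # access.log                - strip the directory
    echo "${path%/*}"         # /var/log/nginx            - strip the file name
    echo "${path%.log}.txt"   # /var/log/nginx/access.txt - swap the extension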
>I can't remember a single time in the >25 years I've been writing Bourne shell (or bash) scripts where I've wanted to "multiply two floating point numbers".
Your attitude about not being able to imagine any reason why you'd ever need to use floating point reminds me of the HP technical support person that Steve Strassmann quoted in his email to the Unix-Haters mailing list on Apr 10, 1991:
>My super 3D graphics, then, runs only on /dev/crt1, and X windows runs only on /dev/crt0. Of course, this means I cannot move my mouse over to the 3d graphics display, but as the HP technical support person said “Why would you ever need to point to something that you’ve drawn in 3D?”
You're blinded by the limitations of your tools. Of course you wouldn't do something that's impossible with the shell, because you know it's impossible, and would switch to another language if you needed to do that. That doesn't mean that you never need to use floating point numbers. It just means you need to switch languages when you need to use them. Which is my point.
And no, the kind of stuff you need to fork sed or awk to do is not "complex". It's simple. It only seems complex because you're writing a shell script, and the backflips you have to do to escape the parameters and fork the processes to do that simple string manipulation is complex, not the string manipulation itself.
I look forward to using your enhanced tools, once they're ubiquitous and fully-featured.
More seriously I can't imagine what "points" you're trying to score. Yes the shell is good at launching external processes, and yes the built-in facilities for other things are lacking in some ways. On the whole people work around them by using perl, python, and other tools.
It might be, as you say, that people resort to using such external scripts/languages because their shells suck. But I have to ask: So what? They get the job done. A "super-shell" doesn't need to be present, even if people do things the "hard-way".
When did I ever say or imply that I was writing my own shell? There are already ample real programming languages to use that can fork off and manage sub-processes just fine, thank you, instead of writing ridiculously inefficient, complex, hard-to-maintain, un-modular shell scripts.
Why would I waste my time trying to fix something that's fundamentally flawed, and that nobody would use because it wasn't "standard"?
We already have Python, JavaScript, Perl, etc. Even PHP is overwhelmingly better and more modular than bash! Why try to put lipstick on a pig like bash?
My point (which I already stated) is that the "Unix Philosophy" is intellectually bankrupt, not that the shell should be extended with string manipulation and floating point to make it unnecessary to fork "sed", "awk" and "bc".
Perhaps you are arguing for a better shell? Which is something I could get behind. bash has always been a bit of a mess. xonsh is one example of a better one; however, I am currently using fish, which has math and string manipulation builtins.
If you're pushing a solution that's worse than existing practical widely used well supported alternatives, you're wasting everyone's time.
I have already proposed an extremely practical, totally open, standards compliant, easily predictable, inevitable solution. What is your response to my posts about pushing something like Electron down to the bare metal?
It's not as if there aren't already thousands of extremely talented people across hundreds of different companies and organizations, all working towards making that possible and efficient. It's not as if the tooling doesn't already exist and isn't widely supported and rapidly evolving.
There is no need for X-Windows, and no need for Wayland, because a web browser running on bare metal could do everything they do, and so much more, in a vastly more flexible, standards compliant, modular, powerful, efficient way.
And it would be a much better platform for implementing a shell, user agent, desktop, window manager, and even a visual programming environment than the half-assed mish-mash Turing Tarpit of shell scripting languages Unix has suffered with for so many decades, which themselves don't even adhere to the so-called "Unix Philosophy".
AND it would be truly extensible (even capable of gasp multiplying floating point numbers and doing non-trivial string manipulation without forking heavyweight processes), which X-Windows and Wayland are not, which means it can be fundamentally much more efficient, by downloading code to implement application specific protocols and local interaction, like NeWS did decades ago, but X-Windows and Wayland foolishly and stubbornly refused to do by design. (It's not like the ideas behind NeWS were unknown to the designers of X-Windows and Wayland -- they just chose to ignore them. And now here we are.)
You've got JavaScript and WebAssembly for programming, WebGL and <canvas> and <video> and DHTML and CSS for drawing (including low latency rendering with desynchronized canvases), JSON and XML and ArrayBuffer for data representation, HTTP and WebSockets and WebRTC for networking. What is it missing?
That design has been well proven and is widely used throughout the industry. It's not a new idea, but kids these days refer to it as "AJAX". There's no need for a bunch of useless layers underneath the web browser any more. It's time for X-Windows and Wayland to fade away into the depths of history.
I saw the Electron mention, but didn’t know what to make of it. What does a “web window system” have to do with unix shell script limitations? I still don’t know.
I find web programming inelegant honestly, with three languages instead of one, a clumsy interpreted programming language that doesn’t multithread. Webasm may help with the last parts, perhaps. What about performance? How to run native code in windows?
At least I’d know how to use it, that’s a big plus. There would need to be some standardization around gui widgets, don’t like how they have to be built from scratch in every project.
Gnome3 uses js and css in its desktop widgets.
Finally Wayland is pretty lean, folks complain it doesn’t do enough already. You’d have to implement a driver and drawing layer anyway, right? So not much is “useless,” perhaps the window manager.
So three languages instead of one is bad for the web, but just fine for bash+awk+sed+expr+bc+grep+egrep+cut+sort+find+etc?
(Plus the other languages you need to switch to and rewrite from scratch once your bash script becomes more than 12 lines long.)
"After a shell script reaches a dozen lines, it's time to port it to Python." -mixmastamyk
Why would anyone in their right mind ever start out with a language that can only handle 11 line scripts?
And are you under the impression that bash is multithreaded? That it has good performance? A just in time compiler? An ahead of time compiler? A module system? A large ecosystem of reusable modules that can plug together and call each other without conflicts? An object oriented programming system? A debugger? An IDE?
Can you compile C++ and other languages into compact BashAssembly code, call it back and forth directly from Bash, and run it really fast across all platforms?
In case you weren't aware, a hell of a lot of people write JavaScript code that runs in node.js, instead of bash scripts, and are very happy with it. And of course Electron can run any code that runs in node.
So yes, running JavaScript in node or Electron is a fine alternative to running bash scripts. It can even handle file names with spaces in them without shitting itself! What rocket science! And you don't even have to rewrite your scripts in another language once they reach 12 lines -- what a relief!
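For anyone who hasn't been bitten by the spaces thing, this is the kind of footgun I mean (a toy example; the directory and file name are made up):

    mkdir -p /tmp/spacedemo && cd /tmp/spacedemo && touch "my file.txt"
    for f in $(ls); do echo "$f"; done   # word-splitting mangles it into "my" and "file.txt"
    for f in *; do echo "$f"; done       # only behaves if you already know to use the glob form

In node (or Python, or anything else with real data structures), a file name is just a string and none of that arcana exists.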
What I was describing that you didn't understand is implementing a visual programming shell in Electron. And if you prefer a text command line shell with a different syntax than plain JavaScript, then simply implement that syntax and the command line interpreter in JavaScript itself, so you have the complete power of JavaScript and all its libraries to draw from (including extravagant luxuries you're not used to, and couldn't imagine you'll ever need, like floating point math and string manipulation), and you can import and call any JavaScript module. That blows bash out of the water.
You say you're not even aware that there are libraries of user interface widgets for JavaScript, and falsely claim they have to be "built from scratch in every project". But there are many, some of them quite good.
So have you ever even heard of or used the Chrome debugger JavaScript console window, or have any clue what high power integrated convenience and debugging features it supports?
How does that compare to your favorite bash integrated development environment and debugger? Care to tell me which one you use, and link me to a tutorial that shows how much better it is than the Chrome debugger?
Or do you firmly believe that ALL people who write bash scripts NEVER need to debug them, set breakpoints, catch errors, examine stack traces and scopes, single step through code, browse the values of variables and data structures at runtime, just like they never need to use floating point or do non-trivial string manipulation?
I admit that might be the case if you always rewrite your bash scripts in Python once they reach a dozen lines, as you say. But in the real world, most bash scripts are much longer than that, and often suffer from lots of bugs.
You sound like one of those people who complains about hating Lisp because it has too many parentheses, then goes and writes code in bash or perl with so much punctuation and different kinds of parens and brackets and braces and single letter abbreviations and syntactic syrup of ipecac, that it looks like you lifted the phone out of your 300 baud modem's acoustic coupler and shouted into it.
As I said, you're blind to the problems and limitations of your favorite tools, and hypocritical in your criticisms of better tools.
The preface of the Unix Haters handbook perfectly describes the problem you're suffering from:
“I liken starting one’s computing career with Unix, say as an undergraduate, to being born in East Africa. It is intolerably hot, your body is covered with lice and flies, you are malnourished and you suffer from numerous curable diseases. But, as far as young East Africans can tell, this is simply the natural condition and they live within it. By the time they find out differently, it is too late. They already think that the writing of shell scripts is a natural act.”
— Ken Pier, Xerox PARC
My favorite tools are Python and fish, so your insults about the others don't really hurt. I almost never use bash because I don't care for it either. Those other Unix command-line tools are arcane too, but they have a mostly compatible syntax. Don't use many of them besides a grep alternative and an occasional sort perhaps.
There are languages optimized for interactive use, and others for writing batch scripts and programs. Fish is great for interactivity, Python not so much. So they complement each other quite well. There have been a few attempts to do both (xonsh/iPython), but none have been big hits yet.
Performance is important for long-running and/or computationally expensive apps, which are typically not interactive. When I brought it up, was thinking of Gimp filters, not "ps -ef | grep FOO".
Javascript is probably good enough these days. If you want to create a "jsh" interactive shell with the good parts, have at it. I believe it is doable by a single person in a reasonable time. I'd guess a few might already exist and could use contributors.
Re: widgets, meant that few are standardized/built into the browser already and have to be downloaded, not that none exist. A silly situation (as someone who wrote GUI apps in a single language in the 90s), yet could be rectified.
But in a thread about GUI window systems you are going on and on about the limits of bash, which is why your arguments are not well received. It's really neither here nor there regarding Wayland. Shell scripting hasn't held me back since '98 or so, if it ever did. It's like complaining about vi when Sublime Text and VSC exist.
I get that you appreciate good software design, me too. But "worse is better" is the reality we live in. Things get better, eventually.
> It just means you need to switch languages when you need to use them. Which is my point.
So, what's the problem with that? I read it almost like "that doesn't mean that you never need to retrieve data, it just means you need to switch to SQL". People develop DSLs for a reason.
Not sure I agree with you there: should it not fork a process to run a heavy task? Should it not fork a process for each end of a pipe, especially considering we're deep into the multicore era of processors?
The fact that the basic shell could use more builtins instead of delegating everything to external processes (like calculating a simple expression) is totally unrelated to the Unix Philosophy being bankrupt.
If fork and processes weren't so expensive, I still think multiple processes are the best way to parallelise on multicore CPUs. In fact, there's a resurgence in programming environments based on multiple communicating, yet isolated, processes, see Erlang/Elixir.
Though I definitely would like to see someone seriously exploring a different paradigm than UNIX.
EDIT: Unix philosophy also is "everything is a file", and what is a file exactly? A bag of bytes. Everything we deal with is a bag of bytes if you think about it, so I find it perfectly fitting to model our world that way.
But then again, the current construct to represent this idea such as labels organised in directories, might be a bit long in the tooth and we're due for some paradigm shift on this front as well, but I digress.
"Everything is a file". Except ioctl. Except mmap. Except the shell makes it a huge pain in the ass to hook up arbitrary graphs of streams between processes.
My point is that ioctl and mmap disprove the "Unix Philosophy" idea that files are one-dimensional streams of characters that are all you need to plug programs together.
Everything as a file is an abstraction, an enormously useful abstraction. The point isn't that it has to behave like a file in every way. At the C level, the fact that open(), read(), write(), or e/poll() behave similarly is extremely helpful.
I can have a whole suite of functions that do not care what type of file descriptor they hold, but just that those functions behave in a predictable way. Maybe only 10% of my code has to care about the differences between them, instead of 70%.
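The same thing shows up one level up, at the shell: a tool like wc doesn't know or care what kind of descriptor it's reading from (a toy illustration; the process substitution form is bash syntax):

    wc -l /etc/passwd      # a regular file
    echo hello | wc -l     # a pipe
    wc -l < <(ls /)        # a process substitution; wc can't tell the difference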
BSD Sockets were never part of the "UNIX Philosophy"; they are an ugly quick'n'dirty port/rewrite of code from a totally alien system (TOPS-20), because the DoD had to deal with vendor abandonment.
And in the 35 years since, despite multiple new kernels and userlands and languages, a more unix-file-like socket API hasn't become popular. I'm not sure what that tells us, but it's not nothing.
Interestingly enough, Go's network functions, by virtue of being native to Plan9, actually are based around Plan9's file-based network API. It works pretty nicely, though "everything is a file stream" has its issues.
Government, Politics, resistance to change, NIH syndrome, "us vs them" and a bunch of other issues all conspired to keep BSD Sockets alive.
The first is the origin of BSD Sockets at all - UCB got paid by the government to port TCP/IP to Unix and, AFAIK, provide it royalty-free to everyone, because DoD needed a replacement for TOPS-20 as fast as possible and widely available, and there were tons of new Unix machines on anything that could get paging up (and some that couldn't).
Then you have the part where TLI/XTI ends up in the Unix Wars, associated with slow implementations (STREAMS), despite being a superior interface. NIH syndrome also struck in the IETF, which left us with the issues in IPv6 development and defensiveness against better interfaces than BSD Sockets, because those tended to be associated with the "evil enemy OSI" (a Polish joke gets lost there, due to "osi" easily being turned into "axis").
Finally, you have a slapped together port of some features from XTI that forms "getaddrinfo", but doesn't get much use for years after introduction (1998) so when you learn programming BSD Sockets years later you still need to manually handle IPv4/IPv6 because no one is passing knowledge around.
What new kernels? The three major OSes are two UNIX based, and one doing its own thing.
I don't think it's a great investment in time redesigning the whole socket API since you need to keep the old one around, unless you want to lose 35 years of source code compatibility.
The BSD socket API can definitely and easily be improved and redesigned, if only there was some new players in the field that wanted to drop POSIX and C compatibility and try something new.
> What new kernels? The three major OSes are two UNIX based, and one doing its own thing.
I meant that many new and forked UNIX-like kernels have been written over the past 35 years. Just the successful ones (for their time) include at least three BSDs, OSX, several commercial UNIXes (AIX, IRIX, Solaris, UnixWare...), and many others I'm unfamiliar with (e.g. embedded ones).
It's common to add new and better APIs while keeping old ones for compatibility. Sockets are a userland API, so if everyone had adopted a different one 20 years ago, the original API could probably be implemented in a userland shim with only a small performance hit.
However, a new file-based paradigm from sockets would probably work better with kernel involvement; that's why I mentioned kernels. We've seen many experiments in pretty much every other important API straddling kernel and userland. IPC, device management, filesystems, async/concurrent IO, you name it. Some succeeded, others failed. Why are there no widely successful, file-based alternatives to BSD sockets? The only one I know firsthand is /dev/tcp and that's a Bash internal.
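For reference, the /dev/tcp thing looks roughly like this (it's implemented inside bash itself, there is no actual device node, and some builds compile it out; the hostname is just an example):

    exec 3<>/dev/tcp/example.com/80                            # open fd 3 as a TCP connection
    printf 'HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n' >&3  # write the request
    head -n 1 <&3                                              # read the status line, e.g. HTTP/1.0 200 OK
    exec 3>&-                                                  # close the descriptor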
If none of the Unix shells are remotely compliant with the "Unix Philosophy", then what is the "Unix Philosophy" other than inaccurate after the fact rationalization and bullshit?
Name one shell that's "Unix Philosophy Compliant". To the extent that they do comply to the "Unix Philosophy" by omitting crucial features like floating point and practical string manipulation, they actually SUFFER and INTRODUCE unnecessary complexity and enormous overhead.
> "If none of the Unix shells are remotely compliant with the "Unix Philosophy", then what is the "Unix Philosophy" other than inaccurate after the fact rationalization and bullshit?"
Precisely that! It's post-hoc rationalization and bullshit, that's the point I was driving at.
I'm talking about the shell that actually exists, not some hypothetical shell that doesn't exist and never will. You can't defend the "Unix Philosophy" by proposing a non-standard MONOLITHIC shell be universally adopted that totally goes against what the Unix Philosophy explicitly states: "Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new "features".
At least Apple figured out that floating point was useful early enough to replace INTEGER BASIC with APPLESOFT.
Forking "bc" from "bash" is the canonical way to multiply two floating point numbers in Unix, according to the "Unix Philosophy".
You too are blinded by the limitations of your tools. Of course the shell performs all kinds of computations. Just not a useful enough set of them that you don't still need "bc" and "awk" and "sed" and "grep" and many other ridiculous "Unix Philosophy Compliant" tools.
I complain my screwdriver sucks when it's made of plastic, and it's only good for driving plastic screws into styrofoam.
chsh - change login shell
DESCRIPTION
The chsh command changes the user login shell.
If you don't like the features of your shell, use a different shell.
> you don't still need ... "awk" and "sed" and "grep"
I don't need those programs - there are many other ways to solve most problems. I want to use them because they are both fast to start and execute, and make development time extremely fast.
> You too are blinded by the limitations of your tools.
You appear to be trying to use the shell for tasks that should be done in a different language. You also don't seem to understand that some of us use the shell because we discovered it was a much faster and easier to use tool for some problems than the other methods we've tried. For some tasks, complaining about the overhead of things like floating point, or trivialities like the <100ms it takes to fork and run bc, is premature optimization.
If you don't find shell to be a useful tool for the problems you need to solve, that's fine; use whatever tool works best for you. We might be trying to solve different problems. Just because you don't like a particular tool doesn't mean it's "ridiculous".
What all those "ridiculous" tools allow, among other things, is:
- Cheap parallelism. Things as easy as `grep | cut | sort` will utilize multiple cores despite the shell and all the tools it spawns being single-threaded (see the sketch at the end of this comment). I can't imagine threading primitives in some supercharged version of bash without some ugly syntax and a number of additional questions about deadlocks on StackOverflow. That would not be bash, but some ugly Python parody instead. (Also note that even PowerShell seems to limit itself to job control primitives despite being a more capable .NET-backed language: [1].) That usually saves me much more than the ability of zsh to multiply floating point numbers without forking.
- (Inter)changeability. Want to multiply complex numbers? Use another calc. Want ridiculous regexes? pcregrep. Want to scan a large hard disk efficiently? ffcnt [2]. Want to do something else, and to do it efficiently? Write a program without spending time learning your shell's C API [3]; just read stdin or parse argv. Yeah, I too wish to not spend much time serializing and deserializing data in each and every pipe, but that would require not a new shell (we already have PowerShell) but, I guess, an entirely new OS as well as an ecosystem designed just for that OS, which nobody would spend much time adapting to at this time.
[3] Really, who got time to spend on something like https://docs.python.org/2/c-api/index.html (I know, Cython exists, but there's no Cython for every language out there.)
(edit: formatting; replaced platter-walk (which is a library) with ffcnt (which uses said library))
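The sketch promised above, for the cheap-parallelism point: every stage is its own process, so the kernel can spread them across cores while the data streams through the pipes (a toy example):

    # most common login shells on this box, one process per stage
    cut -d: -f7 /etc/passwd | sort | uniq -c | sort -rn | head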
The idea is that the tools are meant to be limited to avoid complexity (this is why bash, like most GNU tools, isn't really a good example of unix philosophy - it is too complex) and instead rely on composition.
If you browse through GNU documents, you sometimes come across strange ancient passages like
> "Avoid arbitrary limits on the length or number of any data structure, including file names, lines, files, and symbols, by allocating all data structures dynamically. In most Unix utilities, “long lines are silently truncated”. This is not acceptable in a GNU utility. "
What do you suppose the ancients meant by this? Some have taken it as evidence that in ancient times before GNU, Unix tools were not in fact "doing one thing and doing it well" as the Unix priests so often claim. This of course is blasphemy and the guilty are punished accordingly... but are they wrong?
As if the bash syntax for simple string manipulation and piping and quoting and escaping and even determining if a string is empty wasn't unnecessarily complex.
As I said, the "Unix Philosophy" of plugging together artificially limited tools by pipes is intellectually bankrupt (and terribly inefficient). That was my whole point.
It is inefficient, but that doesn't matter much for interactive use. After a shell script reaches a dozen lines, it's time to port it to Python. If the job needs absolute performance you rewrite in a performant language instead.
I'm not sure why you are upset to be honest? What design would you prefer? This one seems to hit all use cases.
In another thread I mention that fish has math and string manipulation builtins.
Plenty of Unix shells support floating point. For example, the conveniently-named ksh93 was released in 1993, and was / is available on a wide range of systems. I’d guess the hipster-compliant fish shell can do these things too, but can’t be bothered to RTFM.
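Roughly like this - treat it as a sketch, since exact output formatting varies by shell, and plain bash still can't do it:

    # ksh93 or zsh: floating point in arithmetic expansion, no external process
    echo $(( 1.5 * 2.5 ))      # 3.75

    # fish has a math builtin instead:
    #   math '1.5 * 2.5'       # 3.75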
I think the takeaway is that floating point shell use cases are so vanishingly rare that no one bothers to switch shells over it.
Let’s compare the alternatives to forking for a floating point op (ignoring the fact that it would be easy to add this to the shell if it were common enough):
The Unix way pays to fork a process for a floating point operation in shell, then delegates the expensive work to something written in an appropriate (probably compiled, maybe vectorized/gpu optimized) language. Forking the process took 10-100s of usec. If the hard work is comparable to this, the prompt returns immediately, so it is fast enough. If the hard work is non-trivial, the fork for the floating point op is << 1% of the runtime. Either way, the fork is effectively free.
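Easy enough to sanity-check on any machine (illustrative only; your numbers will differ):

    # time one fork+exec of bc for a single multiplication
    time sh -c 'echo "2.5 * 4.0" | bc -l > /dev/null'
    # typically a handful of milliseconds of wall clock, i.e. noise next to any real workload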
The “modern” ways (which Unix mostly killed off in the 70’s, but that are currently in revival), are to write the hard work in a high-level, and usually interpreted language or write the process control logic in a lower level language.
In the case of the high level language, the hard work is now ~2-10x slower than unoptimized logic in a compiled language, so you spend ~50-90% of your time in useless overhead.
If you move the process control to a low level language, then it is more verbose, and you also have to recompile (or jit) the bulk of the program, which is much more expensive than forking a process. It likely takes 10s or even 10,000s of milliseconds!
There is a reason Unix won back when machines were orders of magnitude more expensive than they are now. (And why it is still more usable than its competitors today.)
The point of processes is to be meaningful protection boundaries, especially post Spectre and Meltdown. Maybe you shouldn't have to spawn a separate process to do a simple calculation, but spawning processes had better be as cheap as possible, and things you do from the shell will (in principle if not always in practice) be limited by the speed of human input, so the overhead of a separate process just won't matter. The Unix philosophy gives you these things, together with a reasonably-efficient common interface (byte streams) which can be used to combine independent programs in order to accomplish more complex tasks.
Think about how many orders of magnitude more instructions and system calls it takes to fork a process, open a new pipe, write bytes to the pipe, context switch to the new process, start it up, read bytes from the pipe, multiply two numbers, write bytes to the pipe, terminate the process, context switch back to the original process, read bytes from the pipe, and close the pipe (no matter how efficient forking and context switching and making system calls is), than it takes to execute a single floating point "fmul" instruction.
If you are doing this for a bunch of images once in a while it is meaningless to even care about the performance implications.
If you are doing this often enough where such computation affects the throughput of your application, then you are using the wrong approach - computation is not meant to be done via the shell or shell scripts, these are meant to be used to tie together applications. That you can use bc for computations is just a convenience the overall design enables you, but that doesn't mean it is how you should use it.
>"computation is not meant to be done via the shell or shell scripts"
You are using the passive voice, which is useful when you want to obscure the subject of the sentence. So WHO does not mean for you to perform floating point calculations and string manipulation with the shell?
The shell is a human designed artifact. God did not create the category of shell scripting languages and declare that nobody should ever use them to perform floating point or string manipulation.
Windows Powershell isn't ridiculously crippled like bash is. It's still a shell. That disproves your point that there is some universal definition of a shell that precludes it being useful for general purpose programming tasks.
I didn't claim that there is a universal definition of shell. My comment is about how these tools are meant to be used and designed as to follow unix philosophy (btw Bash does not follow the unix philosophy as it is a very complicated program that tries to do way more than it needs to).
If the Unix Philosophy is a good thing or not is another matter, but to judge that you first have to understand it.
If you don't mean there's a universal definition of what a shell should and should not do, then WHO do you mean proclaimed that "computation is not meant to be done via the shell or shell scripts"?
Just because Eric Raymond writes something in a book doesn't mean it's true.
I've been using Unix for at least 37 years, and other operating systems before that, so I think I have enough experience and understanding to judge the "Unix Philosophy" myself, instead of simply believing without question and regurgitating everything Eric Raymond writes. (And unlike him, I also don't believe black people are violent and stupid, or that white people's long term objective must be to break, crush and eventually destroy Islamic culture, either!)
> If you don't mean there's a universal definition of what a shell should and should not do, then WHO do you mean proclaimed that "computation is not meant to be done via the shell or shell scripts"?
One doesn't follow the other, the "computation is not meant to be done via the shell or shell scripts" is a statement i made and that was under the context of the Unix Philosophy.
Also you are focusing on the wrong thing here.
> Just because Eric Raymond writes something in a book doesn't mean it's true.
I haven't read any of his books, nor do i see what he has to do with it.
> I have enough experience to judge the "Unix Philosophy" myself
Then why are you using floating point computation with a bash script as an example for where the Unix Philosophy is bad? Bash doesn't follow the Unix Philosophy, shell scripts aren't even meant to be used for floating point computation when you consider it, instead you should either use a tool dedicated to such computations or to the specific computation you want to perform.
There are issues with the Unix Philosophy, like the simplicity it boasts only exists at a micro/local scale but it disappears at a macro/global scale - combining a few simple tools ends up with the combined complexity of all those tools plus the complexity inherent in the communication of them. To use these simple tools effectively you need to know all of them, knowledge of a single tool - despite how simple it might be by itself - is not very useful if it can only do one thing because an application almost never needs to do one thing.
This isn't a matter of performance, since nobody ever claimed that following the Unix Philosophy would be fast - but many do claim that it is simple.
>shell scripts aren't even meant to be used for floating point computation
There you go with the passive voice again. I ask you yet again: WHO didn't mean for shell scripts to be used for floating point computation? Citations, please?
>I haven't read any of his books nor i see what he has to do with it.
He wrote "The [Brainf]Art of Unix Programming". Since you claimed to understand the "Unix Philosophy", I assume you would have at least heard of the book written by one of the self-proclaimed experts on the topic, which has a whole section including 17 rules in the Wikipedia page on "Unix Philosophy". How long have you been philosophizing away about Unix anyway? Have you seriously never heard of that book?
Bash is no true Scotsman, huh? Bash doesn't follow the Unix Philosophy. Csh doesn't follow the Unix Philosophy. Sh doesn't follow the Unix Philosophy. Yet they are all historically the predominant shell scripting languages of the Unix versions of their time. Can you name ONE popular Unix shell that DOES actually follow the Unix Philosophy? Or does the Unix Philosophy not apply to shells? And who said so?
The "Unix Philosophy" is after-the fact rationalization and bullshit. If the main way to program it is shell scripting, but none of the official shells actually follow the Unix Philosophy, then where does that leave you?
The Unix philosophy is not about any of this. In fact X.org is an example of software running on a UNIX environment that really doesn't embrace the philosophy. It's not small, and it doesn't really do a minimal task well and compose them. It's a monolithic interface, and it works because it's been around for a long time. OpenGL or DirectX are other examples of interfaces that aren't really UNIX like.
In comes Wayland, with solutions to many outstanding issues, and design decisions. But it's new, bleeding new still.
When I think about UNIX philosophy I think of `ps -ef | rg fire | cut -d ' ' -f 2 | sort | sed '/^$/d'`, and the push towards micro kernels and stuff like that.
My favorite "GUI" is just text. The shell interface is, for me, the best [1]. It's very rarely ambiguous, and it works well for many people, even people without vision.
Can you even disable desktop composition in Wayland? One of the reasons I hate Windows is that I'm forced to use composition, which provides nothing to me.
If visual perfection is nothing to you, if you love screen tearing, slow window redraw when moving, Windows 95-esque window trails, etc. — keep using Xorg.
Wayland is a protocol fully designed around composition. Clients have their own buffers, the server composites them. There is no way to draw directly to the screen, because we're not in the 90s with 640K of RAM and there's no reason whatsoever to implement the crappy way of rendering windows.
Forced composition and vsync is a mistake for gaming.
The added output latency is unacceptable, especially for first-person shooters. A little tearing is nothing compared to the vsync lag.
If Wayland wants to replace X.org, then it should support also this use case. But full composition being mandatory isn't very encouraging in regard to this.
Games still can render more FPS than mandated by vsync.
I tried playing sauerbraten (with SDL2) on sway the other day; it was butter smooth (no tearing) with vsync off in game, and I felt no input lag, unlike when I switch the vsync flag on in game, which limits FPS.
It probably does triple buffering, but somehow on sway it worked better than triple buffering of intel's xorg driver back when I tried that.
Composition forces latency tied to the refresh rate of the composited output and is the only reason i do not like it and disable it where possible. For me this latency makes Wayland imperfect.
I do not love screen tearing, i just do not mind it at all unless i am watching a movie (where i can enable vsync in the player).
Slow window redraw when moving is something i haven't seen since i had a 386. Even my Pentium could blit windows around.
Windows 95-esque window trails are only a thing if the process associated with the window is stuck. Note, btw, that there is nothing that forbids the X server from caching such windows if it detects that the client doesn't respond to messages after a while - which btw is what Windows has done since XP. It is just that nobody implemented it.
> Wayland is a protocol fully designed around composition.
Which was a mistake.
> Clients have their own buffers
Which was a mistake.
> the server composites them
At some other point after the client has marked its window as being updated, meaning that you have around two frames of latency (first frame is your input being sent to the application while the application is drawing itself, meaning that the input will be processed later so the response to your input is a frame late and second frame is the application notifying the window server that the window is outdated while the window server is drawing the output, meaning that the new contents will be used in the next frame).
> There is no way to draw directly to the screen
Which was a mistake.
> because we're not in the 90s with 640K of RAM
If 640KB of RAM didn't limit being able to have direct access to the screen and fast response times, 640GB of RAM shouldn't either. The new design is just misguided garbage that has nothing to do with available resources and everything to do with developers not giving two thoughts about uses outside of their own (note: X11 allows you to have both composited and non-composited output, so people who like composition can use it as can people who dislike it, Wayland forces composited output so people who dislike composition cannot use it).
> there's no reason whatsoever to implement the crappy way of rendering windows
Yes, wasting resources with every application having to maintain their own buffer for each window's contents even though those contents will not change for the vast majority of the window's lifetime for most windows is crappy. Though that is only a minor reason for why Wayland sucks.
> At some other point after the client has marked its window as being updated, meaning that you have around two frames of latency (first frame is your input being sent to the application while the application is drawing itself, meaning that the input will be processed later so the response to your input is a frame late and second frame is the application notifying the window server that the window is outdated while the window server is drawing the output, meaning that the new contents will be used in the next frame).
Not every action takes an entire frame. The timeline can easily be like this:
0ms: old frame starts to output
4ms: input happens, application wakes up
9ms: application finishes rendering
15ms: compositing happens
16ms: new frame starts to output
There's nothing about compositing that requires it to add any significant lag on top of rendering time plus transmit time. If input shows up while you're already drawing? That could happen without compositing. Just draw again or do whatever you'd do without compositing.
> Yes, wasting resources with every application having to maintain their own buffer for each window's contents even though those contents will not change for the vast majority of the window's lifetime for most windows is crappy.
Why waste time redrawing it if it's not changing? And a full-screen window on 1080p is only using 6MB for that buffer. Compared to the processes I'm running, the frame buffers are quite lightweight.
I thought freesync support in modern GPUs meant that they weren't restricted to a fixed refresh rate anymore, and that pixels would then just be "composited" and appear on screen as quickly as possible regardless of their on-screen positioning. Then by using a single fixed buffer that covers the whole screen, this essentially gives you the equivalent to "compositing being disabled".
The only way to avoid the composition latency is to have the entire composition be done when the GPU is sending the final image to the monitor (note that performing composition using the GPU via OpenGL or whatever is not the same thing even if both are done using the GPU), pretty much like what "hardware mouse cursor" gives you. This would require GPUs to support an arbitrary number of transformable overlapping surfaces (where a single surface=a single toplevel window) and applications being able to draw to these surfaces directly without any form of intermediate buffering (which is important to avoid the initial late frame latency).
Now, it isn't like it is impossible to make a GPU with this sort of functionality since GPUs already do some form of composition already, but AFAIK there isn't any GPU currently on the market that can do all the above. At best you get a few hardcoded planes so you can implement overlays for fullscreen content.
And of course none of the above mean that you have to use Wayland, the X server could perform toplevel window composition itself just fine.
Who even moves windows around on a desktop? Surely everyone is using tiling window managers these days, instead of relying on that sort of nonsense? No? Oh well, I'll keep hoping for tiling to become mainstream.
I personally don't consider it just a small improvement. Tearing drives me crazy and I'm comfortable sacrificing a frame to make sure it never happens.
However, I will admit that Crinus makes some compelling points.
I should note (i already mention it in my other post but it is in a little bit in a sea of words) that disliking tearing and wanting to run a desktop environment free of it is perfectly fine and X11 does allow for that - there is nothing in it (considering extensions too) that prevents such use and if there are bugs then these are bugs that can be fixed.
My issue with Wayland when it comes to composition is that it forces it whereas in contrast X11 just enables it - but doesn't force it.
It is the good old "having options" argument which, for some reason, always comes up when GNOME and Red Hat projects are involved.
(just as a note, composition isn't my only issue with Wayland, i have other issues with it being short-sighted and needlessly limited, but these are outside the current discussion)
The wording of the parent comment - specifically "Wayland is a protocol fully designed around composition [...] There is no way to draw directly to the screen" - made me think that this is a case originally of idealism, that when seen to be naive, presses on into diseased dogmatism. Some people decide that for example 'composition' is the be all and end all, and all other aspects are secondary to it, or less. Therefore if useful functionality is dropped, so be it.
Wayland developers specifically prioritize the "every frame is perfect" paradigm over performance and latency. This means there will never be an option to disable things like composition or vsync.
Also, even though it is always claimed that "X11 a is messy conglomerate of tacked on technologies and extensions" the Wayland protocol is extremely complex despite severely lacking features. And because of the strange priorities it has worse performance than X11 even for native apps. The self proclaimed "minimalist" wlroots library has more than 50000 LOC, all for moving around a bunch of overlapping windows? A bit much.
Really? I can run any app with WAYLAND_DEBUG=1 and understand every message easily.
> because of the strange priorities it has worse performance than X11
If the performance that matters to you is the tiny bit of latency caused by vsync, keep using Xorg, or Windows 95, or whatever.
Wayland is inherently much faster because it's asynchronous and doesn't have anything in between the compositor and the clients (e.g.: app <-> Xorg <-> Compiz — xorg is just a glorified message broker!).
> wlroots library has more than 50000 LOC, all for moving around a bunch of overlapping windows? A bit much
Minus 8.5k for examples, minus 7k for the big example (rootston). It's not just moving windows around. Input is inherently complex, and it supports touchscreens, touchpad gestures, drawing tablets, virtual keyboards, pointer locking (moving the camera with the mouse in first person videogames).. Also, it implements multiple backends — running on KMS/DRM, nested on Wayland and X11, and as an RDP server. Considering that it's all in C, that's a tiny number of lines. More importantly than silly metrics, it's a modern, easy to get into codebase.
How much does Xorg have, with its five or however many input systems, multiple legacy ways of direct rendering, and whatever other crap it's accumulated?
> The self proclaimed "minimalist" wlroots library has more than 50000 LOC, all for moving around a bunch of overlapping windows?
wlroots is most of a display compositor implementation, so you should be comparing it to all of the X libraries used in a compositor, and of course the server itself. I suspect the combination would easily exceed 1 million lines.
> Wayland developers specifically prioritize the "every frame is perfect" paradigm over performance and latency. This means there will never be an option to disable things like composition or vsync.
Sorry if I'm misunderstanding, but isn't this terrible? Won't it mean that video games are going to be completely unplayable on Wayland desktops?
I've never been able to get acceptable performance in a game on Wayland, but I'd always assumed that this was just because it was a work in progress and the game makers didn't test to ensure proper performance on anything but X.
> Won't it mean that video games are going to be completely unplayable
Even on my x.org desktop without compositing or any other source of extra frame delay, just turning on vsync in-game utterly destroys my ability to play Super Hexagon[1]. If Wayland is adding additional frames of latency, I'll have to add that game (and probably any other rhythm game) to the list of reasons I cannot use Wayland.
Yes, actually Super Hexagon was one of the games I play and was thinking of when I made my comment. I find it difficult enough to play with my older computer using Optimus, but when I tried it on Wayland there was at least another 100 ms of latency. It was unplayable to the point I feel pretty sure even a TAS couldn't keep it going for long. Compositing everything with no way for apps to bypass it when necessary is profoundly stupid.
Game developers today already have to account for significant latency in display hardware, and Windows's display stack has been compositor based since Vista. So while I'm sure many of them are annoyed by it, overall it doesn't seem to be that big a deal.
This is mainly an issue if you are running windowed or borderless windowed games and under Windows there is latency too.
It is a shame that playing a game on a window is given such a low priority as personally i often prefer to do that for short term sessions where i'm waiting for something (email, some task or just pass time). Though under Windows with how busy the desktop environment often is it can be distracting. But on Linux, especially with a tiled window manager where you can have a tiny status bar above/below the window and perhaps a stripe here or there with stuff, it is perfectly normal to want to play a game in a window as opposed to fullscreen.
X11 applications are able to directly render pixels on the screen via OpenGL. In Wayland those pixels are always rendered to a buffer which will then be composed on the screen.
This doesn't mean games will be "unplayable". It just means that the performance will depend on the compositor. And since the compositor also includes functionality originally provided by window managers, you might get into a situation where you can't use your tiling (or whatever) desktop for gaming and have to switch compositors for different applications with different priorities.
Note that Wayland introduces latency between input (you doing something) and output (seeing the result of your action), see my other posts for detail. While this latency won't make a game unplayable, it will make the experience worse for fast paced games (especially games where you control the camera directly with the mouse). However, this is mainly an issue if you are running a game in a window; the compositor should get out of the way when running fullscreen (or at least i hope it does, but that is the same with X11: some compositors see a fullscreen window and consider it perfectly fine to keep going on).
Do you actually have benchmarks and evidence showing that Wayland introduces latency over X11, or is it just a guess based on how you think Wayland works?
I mean, sure it might in some cases where you can use X11 to "directly" draw stuff on the screen, but most games would be double-buffering or triple-buffering anyway. I don't really see why Wayland would add any latency there, considering all the compositing is going to happen on the GPU anyway.
See my other posts where i explain why that would be the issue as well as how it could be addressed, but that would require hardware that AFAIK doesn't exist.
I do not have any benchmark as the only way i can think of for performing a proper benchmark that measures input-to-output latency would be using something like rigging a mouse to a robotic hand (or something like that) and capturing the screen output with a very high speed (e.g. 1000fps) camera. Sadly i do not have the necessary equipment for doing that.
The Wayland protocol is actually very simple, much simpler than X. And consider that wlroots+sway is around 100,000 LOC, but it's at least as capable as i3+xorg - which is a million lines of code.
Really claiming that it's at least as capable as xorg seems like a complete misrepresentation. Aren't some of the most common and enduring complaints about Wayland about missing features compared to X?
Uninformed and mostly outdated complaints, generally. The features that are still missing are in progress and have most of the pieces in place, and their realization will not add enough lines of code to come close to Xorg.
> there will never be an option to disable things like composition or vsync
Wait, is there a coupling between composition and vsync? I do enjoy playing games on linux, and forcing vsync enabled is an absolute non starter for me.
There is no hard dependency between composition and vsync. You can disable composition with vsync enabled, or you can have composition without vsync. (In practice they tend to be highly correlated.)
However, Wayland supports neither disabling vsync nor disabling composition. Moreover, it was architected this way; there's not really any hope of disabling the compositor. Freesync looks kinda sorta like vsync if you squint at it right, so maybe there's hope for freesync in the distant future. But right now, Wayland does not support freesync, nor are there plans to implement it. Gaming on Wayland is 'lol nope' at this point, and will remain that way for the foreseeable future.
Meanwhile, Windows has become an acceptable Unix. (I'm not trolling.) I don't want to admit this, but I've been wondering whether, over the long term, FOSS stuff really can't match the quality of a sustained commercial endeavor.
For me, "acceptable" implies full control over updates, among other things. In that regard Windows only got worse.
Their system's update behaviour would only be justified if the updates were absolutely perfect on all devices. They aren't, not even on their own Surface hardware.
And after one of the updates started without even giving me time to close files and then proceeded to completely wipe my system drive I can't see me using Windows ever again unless drastic changes are made.
I fail to see the supposed "quality of a sustained commercial endeavor" here.
An update wiped out your system drive? Sounds very unlikely. There are millions of Windows 10 PCs; if something like that happened it would be major news.
That's terrible, I thought that Microsoft had a somewhat better QA program. Though to be fair that is not what i would call wiping out the system drive, because you are still able to boot into Windows and similar things do happen on other operating systems.
> I thought that Microsoft had a somewhat better QA program.
They did. In 2014, they eliminated their extensive and effective separate testing positions as part of a large layoff/restructuring. Supposedly, testing was going to be consolidated in engineering, and not eliminated, but the proof is in the pudding: Windows releases since then have had big problems that feel like they should have been caught and fixed before release.
It literally happens all the damn time, and it does make the news. Microsoft has had to recall several update roll outs on Windows 10 because they can't seem to make them stop breaking shit.
It was news, 3 years ago.
They also had problems of updates causing boot loops which one time affected some of their own Surface Pro line. Some Windows 10 updates were delayed for months because they crashed computers and destroyed installations.
But it doesn't make the computer literally explode so nobody cares if some random users have to pay a repair shop to restore files and get it running again.
They probably weren't kicked out of what they were doing, nor did they lose work as a result. MacOS is comparatively much better-behaved when it comes to update behavior.
We don't have "IT remote management tooling". I would be surprised if it happened for Macs, since it's one of those things that can lead to work being lost and it's incredibly stupid.
Also the situation I described happens for personal computers outside of remote corporate control.
When Windows updates, you get a dialog saying that the computer will restart soon and if you're not there to stop it from doing that, it will force close all apps.
One side effect is that you can't leave it alone to process something, and heaven forbid leaving it on as a personal server for you to connect to remotely, because it will shut down.
And to make matters worse, Microsoft is pushing those updates frequently and depending on whether you got the shitty version or the expensive version, you can't disable this behavior.
Fetched 1,321 kB in 1s (1,054 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
Back in the NT days, had Microsoft actually invested into their POSIX subsystem, instead of having it around for DoD contracts, Linux would never have taken off.
As proven by those that buy Apple to develop for Linux, a large majority only cares about having a POSIX-like experience around.
Back in the NT days, there really was a religious war of Microsoft vs everyone else. They really were seen as the evil empire, using anti-competitive behavior to force crap down everyone's throats. A lot of people felt that way, including me.
For me it was easy to buy an Apple to develop for Linux. Why? It was Unix, and it wasn't Microsoft. If Microsoft had sold the exact same laptop with exactly the same operating system, I would have refused to buy it. Remember the motto Embrace, Extend, Extinguish? See https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis... if you've forgotten.
I've matured. I laugh at myself. But I have a deep sympathy for the old Unix people I saw when I was young, who had spent a lifetime fighting IBM only to see IBM playing nice with open source, and who had trouble accepting it at face value. Because no matter how much Microsoft tries to change, I still have trouble seeing them as anything other than the company they were back in the late 90s and early 00s. Give them an opportunity, and I'm still on alert for when the "extinguish" bit is coming.
I'm not sure that's true. Windows did just fine siphoning off market share from the commercial Unixes without a POSIX subsystem, and Linux's original supporters were people who wanted a free Unix they could tinker with; it started gaining market penetration when those people started getting sysadmin jobs and Linux was what they were most familiar with. I can't imagine they'd have used Windows instead just because it had a POSIX userspace - remember, this was when Microsoft was peak Evil Empire.
FWIW I was a die hard Linux user for ten years, and I usually never ran a desktop environment in favor of tmux and vim. About a year ago I started a new job at a dotnet shop and had to switch to Windows. I dreaded it at first, and I still do have to fight the OS to preserve my privacy, but overall I am loving it. Vscode with the new wsl extensions and vim bindings is the best IDE I've ever used, and the new terminal emulator is amazing. The upcoming version of WSL will be shipping a full linux kernel, and having the software library of Windows with the productivity of Linux is a win win as far as I'm concerned. Using so much closed source software makes me feel yucky, but I've never been more productive. YMMV.
Thank you! I hate people on Reddit who advocate for those cheap keys. If you don't follow the license you are pirating it. Simple as that. Now whether or not piracy is okay is a whole different issue.
Pirating Windows is a lot riskier (think downloading random unlock.exe files) while the key gives you a fully working functional official Windows without any hassles.
After 18 years struggling with the Linux desktop, I switched to Windows for my GUI needs. Everything just works, if somewhat slowly. My "working environment" is a Linux VM. If I need to run a graphical app from the VM, there are plenty of rootless X servers for Windows, plus Windows 10 has workspaces.
I get the best of both worlds: real vendor support for hardware, and the best software for getting work done (windows business software, Linux coding software). Honestly, this feels like the future to me, and I hope M$ is working towards this being a normalized hybrid platform.
Wayland is broken by design (or, as another commenter says, "lacking on a conceptual level") and NOT A REPLACEMENT for X Windows.
Wayland has a bonkers security (theatre) model based around protecting from untrusted processes (graphical applications) connected to the compositor. (If you must give untrusted code a process on your machine, the correct solution is to give it its own Unix user and run an X Windows server owned by the same user.)
One consequence is that screenshots/screencasts do not exist on Wayland, applications can only see their own windows. Also, quoting Red Hat: "Furthermore, there isn’t a standard API for getting screen shots from Wayland. It’s dependent on what compositor (window manager/shell) the user is running, and if they implemented a proprietary API to do so."
EDIT: Other stuff that you can not do with Wayland and is justified by their "security" model is injection of input events and reading other applications input (think xdotool, autohotkey; this is great when you want more control of your graphical user interface system).
What you describe as broken by design in terms of security is in fact its primary advantage from my perspective:
With the security nightmare that is modern web browsers I do not want any of them which runs on my machine to be able to access a single bit of more data than which is necessary.
Firejail helps a lot for this purpose, but fixing the browser to not be able to spy on other X applications requires sticking them into an Xpra sandbox, which greatly interferes with performance and is a hell of an ugly hack (it's like running a local VNC server and connecting locally to it).
Putting software into different user accounts isn't a solution either because the usability of that is well, unusable.
Use different users and/or virtual machines if you want security.
BTW, Firejail is also shit: it had some stupid design decisions and security bugs, and it basically relies on user namespaces, which are or were an experimental/insecure feature in Linux.
Wayland's security model is bonkers because the attack surface it implies is just too great.
I just run chromium as my main user (but usually with a temporary scripted --user-data-dir and HOME environment variable), because it is relatively secure.
And switching Linux consoles/X servers is just two key presses anyway.
... and w.r.t. that: The modern JavaScript jungle that is the Internet is slow enough already. And we're not even getting started with watching videos here...
>"Furthermore, there isn’t a standard API for getting screen shots from Wayland. It’s dependent on what compositor (window manager/shell) the user is running, and if they implemented a proprietary API to do so."
Sway's compositor implements a way to do this as only it has access to all of the display data, but there is no standard way outside of the compositor to do this under Wayland.
The consequence of this is that every program that wants to do a screencast (Skype, Discord, etc) will have to integrate with specific compositors to do this - there is no standard way to do this across all compositors like under X.