
Windows NT is designed out of the box for extending and embracing Unix. The whole Linux Subsystem thing isn't something new that required deep reworking of the kernel.


You should read the book Showstopper! to learn that NT was actually designed to be as far away from Unix as it could be. Dave Cutler, NT's chief architect, hated Unix with a passion. He thought it was a rubbish OS. The internals are based on VMS, Cutler's previous OS. That's why NT has never been a good POSIX system, and why Microsoft has essentially given up with WSL2 and is now just running Linux in a VM.


Great book. Another interesting wrinkle that’s been somewhat lost to time is that (as the book documents) NT was developed simultaneously on x86 and a RISC architecture (MIPS I believe).


The Linux Subsystem actually doesn't use the NT subsystem technology that you're thinking of. They did end up inventing a few new kernel concepts (like pico processes) in order to do WSL v1.


Indeed. There was a Windows Services for UNIX subsystem, based on Xenix and mentioned elsewhere, and that one was built on the subsystem architecture.

When you use it, you get a nice Korn shell, and it is built on PE binaries linked against PSXDLL.DLL. There's a functioning but very old version of GCC that ships with it.

The PE binaries mark the desired subsystem in their header, so you don't have to already be inside the environment to execute one - the kernel takes over and dispatches to the right subsystem.
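(For the curious: that tag is just the Subsystem field in the PE optional header, and you can read it with a few lines of Python. A rough sketch - the notepad.exe path is only an example, and 7 is the value the old POSIX subsystem used.)

    import struct

    def pe_subsystem(path):
        # Read the whole image into memory; fine for a sketch.
        with open(path, "rb") as f:
            data = f.read()
        # e_lfanew at offset 0x3C points at the "PE\0\0" signature.
        pe = struct.unpack_from("<I", data, 0x3C)[0]
        if data[pe:pe + 4] != b"PE\0\0":
            raise ValueError("not a PE image")
        # 4-byte signature + 20-byte COFF header, then the optional header;
        # the Subsystem field sits at offset 68 in both PE32 and PE32+.
        return struct.unpack_from("<H", data, pe + 4 + 20 + 68)[0]

    # 2 = Windows GUI, 3 = Windows console, 7 = POSIX (the old subsystem).
    print(pe_subsystem(r"C:\Windows\System32\notepad.exe"))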

PSXDLL acts as a translation layer for NT much as kernel32 does for Win32. You can't run unmodified Linux binaries like you can with WSL. On the other hand, WSL requires that you invoke lxss with some special COM magic to get access to Linux first, so you can't just exec an ELF file directly. The pico processes you mentioned allow the kernel to install specific handlers/translators for their syscall functionality into the Windows kernel.
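Conceptually (and this is a toy illustration only, not the real lxcore interface or kernel API), the pico-provider idea boils down to a dispatch table: every syscall a pico process makes gets handed to the provider, which maps Linux syscall numbers onto NT-native operations.

    # Toy sketch: stand-ins for NT-native calls, not the real kernel API.
    def nt_create_file(path, flags):
        return ("handle", path)

    def nt_read_file(handle, size):
        return b"\x00" * size

    # Dispatch keyed by x86-64 Linux syscall number: 0 = read, 2 = open.
    LINUX_SYSCALLS = {
        0: lambda args: nt_read_file(args[0], args[2]),
        2: lambda args: nt_create_file(args[0], args[1]),
    }

    def pico_syscall(number, args):
        handler = LINUX_SYSCALLS.get(number)
        if handler is None:
            return -38  # -ENOSYS: this personality doesn't implement it
        return handler(args)

    print(pico_syscall(2, ("/etc/hostname", 0)))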

So yeah, architecturally they're pretty different, and WSL isn't really the same subsystem concept they started with. On the other hand, that's probably a good thing, because everything needed a rebuild for SUA.


I'm convinced the SUA system only exists so Windows can claim "POSIX compliance" as required for various government contracts.


On the other hand, WSL2 is based on virtualisation rather than NT kernel personalities. Apparently building it 'on top of' or 'inside' NT ended up not being good enough.


I don't think that's a failure of the NT subsystem approach; I think it's just that Linux turned out to have a massive and changing ABI surface, and Microsoft didn't want to try to recreate the whole thing by clean-room reimplementation. Yes, there were some difficulties because of different underlying primitives, but in my outsider's opinion, they could have made it work if they'd been willing to spend the time and effort.


The problem they couldn't solve is file system performance -- there's just too much of a conceptual difference between files in Windows and files in Linux to make it perform reasonably well for the sorts of workloads people were running.

In the end, it just makes more sense to pull in the actual Linux kernel than to try and achieve the same performance semantics.


Windows file system performance in general is abysmally bad; we're talking Linux being 10x-100x faster on mass operations over small files, for instance.

Due to this, lots of Linux tooling is built around huge masses of tiny files (build processes, VCS, Docker, etc.), and there was just no chance the Windows kernel was ever going to come remotely close performance-wise.
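If you want to see it for yourself, something like this quick sketch run on an NTFS directory vs an ext4 one makes the gap obvious (the numbers will vary wildly with hardware, antivirus filters and cache state):

    import os, tempfile, time

    def churn_small_files(root, count=5000):
        # Create, stat and delete lots of tiny files - roughly what a build
        # tree or a .git object store does.
        start = time.perf_counter()
        for i in range(count):
            with open(os.path.join(root, f"obj_{i}.tmp"), "wb") as f:
                f.write(b"x" * 64)
        for i in range(count):
            os.stat(os.path.join(root, f"obj_{i}.tmp"))
        for i in range(count):
            os.remove(os.path.join(root, f"obj_{i}.tmp"))
        return time.perf_counter() - start

    with tempfile.TemporaryDirectory() as d:
        print(f"5000 small files (create/stat/delete): {churn_small_files(d):.2f}s")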



