
“Worse” was Bell Labs: portable C/Unix. As opposed to MIT: codesigned Lisp/LispMs. The Unix workstation was the triumph of COTS micros over LSI or custom VLSI hardware.


> The Unix workstation was the triumph of COTS micros over LSI or custom VLSI hardware.

Well, the Sun-1 definitely started there, no question (I don’t remember the Daisy or Apollo hardware). HP definitely never did, and Sun (and SGI et al.) all went down the custom-hardware rabbit hole.

By the time they tried to hop onto the PC hardware train it was too late. None of those companies survive in any meaningful way.

BTW if you catch this in time to edit: you might want to put a hyphen between “co” and “design” because you didn’t mean signing code.


The customness of the hardware is partly relative. One had to guess the trajectory of the PC to bet on clones and their components. Of course, at some point there was no question that non-x86 workstations used "custom" hardware compared to the more open ecosystem of x86 PC components, but even then doing custom work was not an absolute criterion for success or failure, or even for eventual economies of scale: case in point, Apple.

Now of course there is a distinction between in-house design and off-the-shelf parts, but while it is true that the first PCs used pre-existing chips, some chips quickly started to be developed specifically for PCs, or at least with the PC as by far the main target. So that hardware is also kind of "custom", just developed by multiple companies.

In retrospect, some workstation vendors could maybe have survived a little longer by switching to x86 PC-like hardware, except that the window for making the switch was astonishingly small, and it would have turned them into just another OS vendor, just another PC hardware vendor, or both. Even if their OS had still required their own hardware, their most direct competition would quickly have become Linux or BSD on generic PCs, and once e.g. the CAD vendors switched to Windows, it would not have helped either.

And as just another PC hardware vendor, what would even have been the point, compared to their initial positioning and what a "workstation" used to mean? That market is now held mostly by chip vendors (with more or less artificial market segmentation), and then by computer vendors using those chips, who no longer define the platforms and add far less value. That is kind of logical, at least in retrospect, here too: only a few platforms could remain, because of both the network effect and the practicality of using and developing for them, and consumer hardware was bound to eventually get state-of-the-art designs (mostly scaled up with parallelism for pro hardware, plus a bit of artificial market segmentation).

You can take the in-house development route (again: Apple), but you had to target the general public first to pull that off, so it was not an option for a workstation vendor.

Ironically, one could argue that to survive "in a meaningful way" (if I read that as leaving a legacy that still shapes workstation workloads today by providing at least part of the platform), the old-school workstation vendors would have needed to pivot to being pure component makers for PCs.


> In retrospect, some workstation vendors could maybe have survived a little longer by switching to x86 PC-like hardware, except that the window for making the switch was astonishingly small, and it would have turned them into just another OS vendor, just another PC hardware vendor, or both.

I think this window was non-existent: Moore’s Law at the time was turning white boxes into workstations faster than any time- and money-consuming custom engineering could pay back the investment.


I'm not sure about that, even today. The typical white-box PC motherboard then (and now) doesn't support 128 GB of memory, for example; you have to buy an HP Z400 or a Mac Pro (or an old server). Plus they aren't really designed to be maintained, or to have space for dual processors. The Sun/HP/DEC/IBM workstations were not bought purely because they were fast; there was often specific software in mind that the end user wanted.

I have two primary machines I do development on: an Acer gaming laptop (a few years old, Intel i7-8750H, 32 GB of memory) and a truly ancient HP ProLiant server with dual Xeon X5650s and 96 GB of memory. The ProLiant is still consistently faster at compiling an Android app than the laptop, despite the laptop having SSDs and the ProLiant being 7 years older. There is a lot more to making a workstation than raw CPU speed, which is as true now as it was in the early 1990s, when a SPARCstation was the go-to performance machine to have on your desk.


Same here. The 5-year-old Lenovo trounces the 2-year-old top-of-the-line x86 MacBook Pro on pretty much everything. When it was young it easily humiliated every computer in the house.


My impression is that they all built desktop minicomputers, made possible by CPUs like the 68K, but moved on to RISC designs when the 68K started showing its age. I would not say PA-RISC was open, but SPARC had multiple sources and MIPS showed up everywhere. In that period, x86 was not an option; Sun tried.


Apollo was building bit-slice 68000 emulations (to have an MMU) into the mid '80s, and ran Aegis, their home-grown fully networked GUI OS, coded in their home-grown Pascal, with their home-grown touchpad pointer and home-grown token-ring network, on those and on actual 68Ks into the late '80s.

Aegis was inspired by MULTICS, not Unix, and was definitely a better system. They were demand-paging across the network in the early '80s.

One feature I recall stood out: they expanded environment variables in symbolic link text, e.g. /usr/bin -> /usr/$SYSTEM/bin, which later on let you pick a SysV or BSD Unix flavor. The only Unix I know of that does something similar is DragonFly.
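
For anyone curious what that looks like mechanically, here is a minimal user-space sketch of the expansion, using the $NAME syntax from the example above. expand_link() is a made-up helper, not an Aegis or DragonFly interface (DragonFly does this in the kernel, via its varsym mechanism):

    /* Hypothetical sketch: expand $NAME tokens in a symlink target
     * using the environment, as in /usr/bin -> /usr/$SYSTEM/bin. */
    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static int expand_link(const char *linkpath, char *out, size_t outsz)
    {
        char target[4096];
        ssize_t n = readlink(linkpath, target, sizeof target - 1);
        if (n < 0)
            return -1;
        target[n] = '\0';

        size_t o = 0;
        for (const char *p = target; *p && o + 1 < outsz; ) {
            if (*p == '$') {                  /* start of a $NAME token */
                const char *start = ++p;
                while (isalnum((unsigned char)*p) || *p == '_')
                    p++;
                size_t len = (size_t)(p - start);
                char name[128];
                if (len == 0 || len >= sizeof name)
                    return -1;
                memcpy(name, start, len);
                name[len] = '\0';
                const char *val = getenv(name);
                if (val == NULL)
                    return -1;                /* unset variable: give up */
                while (*val && o + 1 < outsz)
                    out[o++] = *val++;
            } else {
                out[o++] = *p++;
            }
        }
        out[o] = '\0';
        return 0;
    }

    int main(void)
    {
        /* e.g. after: ln -s '/usr/$SYSTEM/bin' /tmp/bin; export SYSTEM=bsd4.3 */
        char resolved[4096];
        if (expand_link("/tmp/bin", resolved, sizeof resolved) == 0)
            printf("%s\n", resolved);
        return 0;
    }

In Aegis (and in DragonFly) the kernel did this during path resolution, so every program saw the expanded path for free; the sketch above only shows the string-level expansion.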


> Aegis was inspired by MULTICS, not Unix, and was definitely a better system. They were demand-paging across the network in the early '80s.

I’d love to see one of those operating. We have lost so many great ideas that we never seem to revisit…


Another was a read() system call that would copy into a caller-supplied buffer if it had to, but would normally just return a pointer into its buffer cache.
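
A minimal, hypothetical sketch of that interface (none of these names are from Aegis): the call returns a pointer that aliases the kernel's cache block when the request fits inside one, and copies into the caller-supplied buffer only when it has to:

    #include <stdio.h>
    #include <string.h>

    #define BLKSZ 16

    struct buf { long blkno; size_t len; char data[BLKSZ]; };

    /* Fake one-entry "buffer cache" so the sketch runs standalone. */
    static struct buf cache = { 0, BLKSZ, "hello, aegis!!!" };

    static struct buf *cache_lookup(long blkno)
    {
        return blkno == cache.blkno ? &cache : NULL;
    }

    /* Read up to `want` bytes at `off`. Zero-copy when the request
     * sits inside one cached block; otherwise copy what is available
     * into the caller's buffer. */
    static const char *cached_read(long off, size_t want, char *copybuf,
                                   size_t bufsz, size_t *got)
    {
        struct buf *b = cache_lookup(off / BLKSZ);
        if (b == NULL) {
            *got = 0;
            return NULL;              /* miss: fall back to a real read() */
        }
        size_t boff = (size_t)(off % BLKSZ);
        size_t avail = b->len - boff;
        if (want <= avail) {          /* fast path: alias the cache block */
            *got = want;
            return b->data + boff;
        }
        if (avail > bufsz)            /* slow path: copy out what fits */
            avail = bufsz;
        memcpy(copybuf, b->data + boff, avail);
        *got = avail;
        return copybuf;
    }

    int main(void)
    {
        char tmp[BLKSZ];
        size_t got;
        const char *p = cached_read(7, 5, tmp, sizeof tmp, &got);
        if (p != NULL)
            printf("%.*s (copied: %s)\n", (int)got, p,
                   p == tmp ? "yes" : "no");
        return 0;
    }

The closest modern equivalents are arguably mmap() and the various zero-copy I/O interfaces; the trade-off then as now is that the caller gets a pointer into memory it doesn't own and must treat it as read-only.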


I know somebody who has some. You sort of need more than one so the token has somewhere to go.


The Sun-1 was based on the SUN (Stanford University Network) board, but I can't find anything concrete on the SUN board (also by Andy) other than that it existed and was apparently 'cheap/free' to license?

Apparently it formed the foundations of cisco, SGI, and Sun.


Indeed, I remember those days well.



