Linus's and Alan's quotes aren't incompatible; I think they're both true. Yes, massively parallel trial and error works wonders, but if you favour the first solutions, you'll often miss the best ones. Effects such as being first to market, backward compatibility, or network effects often trump intrinsic quality by a wide margin. (Hence x86's dominance on the desktop.)
Yes, Worse is better than Dead. But the Right Thing dies because Worse is Better eats its lunch. Even when Worse actually becomes Better, that's because it has more resources to correct itself. Which is wasteful.
The only solution to this that I can think of comes from the STEPS project, at http://vpri.org: extremely late binding. That is, postpone decisions as much as you can. When you uncover your early mistakes, you stand a chance at correcting them, and deploying the corrections.
Taking Wintel as an example, that could be done by abstracting away the hardware. Require programs to be shipped as some high-level bytecode, that your OS can then compile, JIT, or whatever, depending on the best current solution. That makes your programs dependent on the OS, not on the hardware. Port the compiling stack of your OS, and you're done. If this were done, Intel wouldn't have wasted so many resources on its x86 architecture. It would at least have stripped the CISC compatibility layer from its underlying RISC design.
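To make that concrete, here's a toy sketch of the idea in Python (the opcodes, the PROGRAM, and the "compile" step are all invented for illustration; nothing here is a real system): the same shipped artifact can either be interpreted or translated by the host, so the decision of how to execute it is bound as late as possible.

    # A toy sketch of "ship high-level bytecode, let the host decide how to run it".
    # The instruction set and the 'compile' step are made up for illustration only.
    PROGRAM = [("push", 2), ("push", 3), ("add", None), ("print", None)]

    def interpret(program):
        """Universal but slow: walk the bytecode directly."""
        stack = []
        for op, arg in program:
            if op == "push":
                stack.append(arg)
            elif op == "add":
                stack.append(stack.pop() + stack.pop())
            elif op == "print":
                print(stack.pop())

    def compile_for_host(program):
        """Stand-in for native code generation: translate once, run many times."""
        lines = ["def run():", "    stack = []"]
        for op, arg in program:
            if op == "push":
                lines.append("    stack.append(%r)" % arg)
            elif op == "add":
                lines.append("    stack.append(stack.pop() + stack.pop())")
            elif op == "print":
                lines.append("    print(stack.pop())")
        namespace = {}
        exec("\n".join(lines), namespace)  # the host, not the shipped program, binds late
        return namespace["run"]

    # The same shipped artifact runs either way; the choice belongs to the host.
    interpret(PROGRAM)           # prints 5
    compile_for_host(PROGRAM)()  # prints 5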
But of course, relying on programmers to hand-craft low-level assembly would (and did) make you ship faster systems, sooner.
"Taking Wintel as an example, that could be done by abstracting away the hardware. Require programs to be shipped as some high level bytecode, that your OS can then compile, JIT, or whatever, depending on the best current solution. That makes your programs dependent on the OS, not on the hardware. Port the compiling stack of your OS, and you're done. If this were done, Intel wouldn't have wasted so much resources in its X86 architecture."
Unfortunately, that could have only happened in an alternate universe in which humans had developed more advanced technology by the 1980s. At the time Windows and the Intel x86 architecture were being developed (Windows 1.0 was released in 1985), computers did not have the speed or the memory to add another layer of abstraction. From http://en.wikipedia.org/wiki/Windows_1.0:
"The system requirements for Windows 1.01 constituted CGA/HGC/EGA (listed as 'Monochrome or color monitor'), MS-DOS 2.0, 256 kB of memory or greater, and two double-sided disk drives or a hard drive."
Yes, that's 256 KILO bytes. And a double-sided disk drive (5.25" floppy disk) was at most 1.2MB. The Intel 80386, the first 32-bit member of the x86 family, was released in 1985, so Windows had to be developed on and run on the 16-bit 80286. And it had to run all the old MS-DOS programs that were written in assembler, if Microsoft wanted to retain its existing customers.
There were hardly any layers of abstraction to be had in those days. The OS kernel (MS-DOS) was still being written in hand-optimized assembler.
The subsequent history of the x86 product line was determined by the requirement that object code had to be backward compatible. Which brought us to what we have today.
> Unfortunately, that could have only happened in an alternate universe in which humans had developed more advanced technology by the 1980s.
I agree. Too bad.
> The subsequent history of the x86 product line was determined by the requirement that object code had to be backward compatible.
Of course. My point is, this sucks. Collectively, we could have done much better much sooner, but our current organization scheme (mostly free market) compels us to do otherwise.
We are talking about the fastest spread of a truly disruptive technology in the history of mankind prior to mobile communications. Why was anything too slow?
Another option is to design for easy migration, that is, migration that is easy both technically and business-wise.
For example, iOS programs are native code, but if Apple decided to do a switcheroo and use a different instruction-set CPU such as x64, they could do that with relative ease: ship a new Xcode, ask developers to recompile and produce fat binaries (current binaries are already fat, ARMv6 and ARMv7, so the foundation is laid out already), and developers will be around and motivated to recompile because of the financial incentive of new sales. Where the old developer is not around, competition is sure to spring up, because the marketplace makes it easier for demand to elicit supply. So a combination of technical and business decisions created a situation where Apple can switch CPUs if they need to. This clearly stems from their previous experience of switching from Motorola to PPC and from PPC to x86.
Similarly, when designing a database I try to keep in mind that I will have to redo it, so I try to avoid irreversible operations and instead strive to preserve original data in my schema, so that I can always recompute all destructive operations (such as aggregates) later, even if the set of such operations changes over time.
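A minimal sketch of what I mean, with a made-up schema (sqlite3 used purely for illustration): the raw events are the immutable source of truth, and the aggregate is a derived view that can be redefined and recomputed at any time.

    import sqlite3

    # Hypothetical schema: raw, append-only facts plus a derived aggregate.
    db = sqlite3.connect(":memory:")
    db.executescript("""
        -- Immutable source of truth: one row per original event, never updated.
        CREATE TABLE sale_events (
            id      INTEGER PRIMARY KEY,
            product TEXT NOT NULL,
            amount  REAL NOT NULL,
            sold_at TEXT NOT NULL  -- ISO-8601 timestamp
        );

        -- Derived and disposable: redefined and recomputed whenever the
        -- set of aggregates changes.
        CREATE VIEW daily_revenue AS
            SELECT product, date(sold_at) AS day, SUM(amount) AS revenue
            FROM sale_events
            GROUP BY product, date(sold_at);
    """)

    db.executemany(
        "INSERT INTO sale_events (product, amount, sold_at) VALUES (?, ?, ?)",
        [("widget", 9.99, "2013-01-01T10:00:00"),
         ("widget", 9.99, "2013-01-01T15:30:00"),
         ("gadget", 24.50, "2013-01-02T09:10:00")],
    )

    print(db.execute("SELECT * FROM daily_revenue ORDER BY day").fetchall())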
Works, but some programs will be orphaned, and thus won't be recompiled. Plus, this is an effort that could have been avoided through a more future-proof approach.
It's ok for apps to be orphaned - when a healthy developer ecosystem is in place then a replacement will be coded to meet any substantial demand.
And the effort could not be avoided - you can only trade present effort for future effort, that is, more work put upfront into CPU design vs. more work later on to convince developers to recompile the apps. The problem with the former is that you make all the effort and still don't get the flexibility, whereas in the latter case the extra flexibility can come in handy in more ways than one, some of which we cannot foresee today.
Yes, but with that solution the software still has to be compiled for each architecture. So it is just putting the problem onto others: the users, the software developers, etc. - many of whom will happily just use something else.
> Taking Wintel as an example, that could be done by abstracting away the hardware. Require programs to be shipped as some high level bytecode, that your OS can then compile, JIT, or whatever, depending on the best current solution. That makes your programs dependent on the OS, not on the hardware. Port the compiling stack of your OS, and you're done.
A much better idea is already practiced as the most successful method of software distribution ever: JavaScript, or as I call it, source-only distribution.
Great essay -- I agree with its main point: "worse" products triumph over "the right thing" when they are a better fit for the evolutionary and economic constraints imposed by an evolving competitive landscape.
Some examples:
* In the case of the rise of Unix, the market of the 1960's and 1970's valued simplicity and portability over "correct" design.
* In the case of the rise of the x86 architecture over the past three decades, the market valued compatibility and economies of scale over the simplicity and elegance of competing RISC architectures.
* In the case of the current rise of ARM architectures for mobile devices, today's market values simplicity and low power consumption over compatibility with legacy x86 architectures.
Yes, a great explanation, especially the wrap-up at the end.
You can see how people with too narrow a view of technical design would misconstrue this point. If you're hell-bent on (shallow) perfectionism, you perceive the dichotomy as perfect vs imperfect. One step more healthy is simple vs. complex, because it means you've recognized that time is finite, and too much complexity can legitimately kill a project; it's one of the first real-world constraints on viability that intrudes. "Viability" is really the goal, though, typically, when you include the motivations of the humans involved, like seeing their project be successful and have an impact.
The more you can align your moral compass with viability rather than turning smaller issues into battles between good and evil, the more successful you will be.
Actually, making it simple is often harder. If it weren't, things like the STEPS project would be widespread by now.
Unix didn't win because it was simpler. It won because it was easier to implement. In terms of overall simplicity, Lisp systems were probably far ahead.
There were Lisp systems that were simpler than Unix, such as AutoLISP, Scheme, and later XLISP, but the ones that were competing with Unix in the 1970s were things like MACLISP, Zetalisp, and Interlisp, which were much more complex than Unix was at the time. I mean, Zetalisp had its own microcode, its own hypertext documentation, its own GUI, transparent persistence, and a WYSIWYG text editor, at a time when Unix had a couple of C compilers, man pages (its own typesetting system, to be fair), @ and # as the defaults to erase a line and a character, and ed as the standard text editor.
That's just the point, neither imperfection nor simplicity leads to viability, but taking less time and delivering something more appropriate does. You can make something simpler by stopping too soon or thinking too long; make something imperfect by not testing enough or sitting around adding bugs. ;)
> If you're hell-bent on (shallow) perfectionism, you perceive the dichotomy as perfect vs imperfect. One step more healthy is simple vs. complex, because it means you've recognized that time is finite, and too much complexity can legitimately kill a project
What follows is the recognition of the difference between complex and complicated. One can lose time focusing on the complexities, and create a time-consuming complicated system. Or one can build a simple system which allows (or will allow) for complex possibilities (but is not complicated) and shrink down time consumption by a significant margin.
Also, the Web. One of the main reasons Xanadu's in development hell is how it tries to handle broken links. On the Web, the browsers just throw up a 404 error. This is unacceptable for Ted Nelson.
I never really got the "Worse is Better" essay. It obviously doesn't mean what everyone says it means and what it does mean isn't clear. This post points some of that out. For example, Worse in the essay was associated with simplicity. But the classic examples of Worse triumphing in the marketplace (the OP cites x86 as an example) are anything but simple: they are hypercomplex. Not only that, their complexity is largely what makes them Worse. Simplicity is rather obviously Better, not Worse. Smalltalk (which the OP cites as Better) is far simpler than its more successful peers. The more you look at the original essay, the more its conceptual oppositions seem muddled and at odds with history.
I've concluded that it boils down to exactly one thing: its title. "Worse is Better" is a catchy label that touches on something important about technology and markets and means different things to different people.
Such essays are always about concepts that cannot be precisely defined. That aside, I believe that it wasn't really simplicity that "worse" was associated with, but easiness.
This distinction is well described by Rich Hickey (of Clojure fame) in his "simple made easy" talk. The key point is that simple-complex and easy-hard are two separate axes and that "easy" usually leads to "complex" (the case of x86, I believe) but you can invest some effort in shaping the environment so that "simple" can be "easy".
> they are hypercomplex. Not only that, their complexity is largely what makes them Worse. Simplicity is rather obviously Better, not Worse. Smalltalk (which the OP cites as Better) is far simpler than its more successful peers.
It isn't about simpler, it is about simpler for whom. How is throwing away a million lines of C and starting from scratch in a language you have never used simpler than gently adding OO through C with classes? x86 and x64 may be a terrible nightmare of complexity for somebody, but my apps keep working without change or recompilation, which is simplicity itself for me. Simplicity for you and simplicity for your customer are two entirely different things. C and Unix spread because they were capable of running everywhere, and that provided real simplicity to the end customer (my software could run on lots of different hardware).
x86 became hyper-complex, and won the market, because x86 chips have always remained backwards-compatible with previous x86 chips and software. Same issue as with Windows.
I often think about software development in similar terms -- evolution versus intelligent design.
The weakness of evolution is that it takes millions of years, it's heavily dependent on initial conditions, there's lots of collateral damage, and most lines die out.
The weakness of intelligent design is that we're only so intelligent, which places a pretty low limit on the possible achievement. (And intelligence is generally regarded as close to a normal distribution, meaning that the smartest people can only handle a small multiple of the complexity of the average person).
Obviously, evolution and design need to be combined somewhat. The question is: how much of each, and at what times during a project? Do you spend 10% of the time quietly planning, 10% arguing with a small group of designers, and 80% trying things and trying to get feedback? Or is it more like 40%, 40%, and 20%? And how do you mix trying things with designing things?
Thank you Yossi for writing this piece. It's about time that Worse is Better argument was debunked. Worse isn't better, portable, free (libre, gratis, or at least really cheap) is better.
What many people forget is that during the time frame Worse is Better talks about, Lisp machines cost as much as two or more houses. You couldn't get a decent Lisp system on affordable hardware until MCL, and then you still needed a fairly high-end Mac to run it on.
OTOH, Unix and C-based software ran on a bunch of different machines, which you either already had or could acquire inexpensively. The software was easy to get and inexpensive as well. Then 4.3BSD and Linux came along, and you couldn't beat that on price.
What makes you think that Haskell is rising in popularity? While this may be the case on sites like HN and Lambda the Ultimate, more broadly Haskell is nowhere. It certainly hasn't come anywhere near anything that could be described as success in the marketplace.
So your example actually supports the author's thesis, not your own.
An enjoyable and stimulating read. The original essay, by virtue of a few semantic ambiguities (what is "simple" anyway?), is apt to invite this sort of commentary. If I have read this correctly, the author eventually agrees that worse really is better, with the clarification on what this means outlined in the first part of the essay.
However, I was hoping to see a deeper analysis of how the nature of the evolutionary pressure in his domain contributed to the worse-is-better effect (I am an evolutionary biologist, so I find this kind of thing interesting). For example, if the "product" in question was a mathematical concept of interest to professional mathematicians, there would almost certainly be a niche space in which versions of the concept exhibiting "consistency, completeness, correctness" dominate over the competition. For mathematicians, consistency and correctness are strongly selected for (completeness, broadly defined, is usually much harder to obtain). For the average iPhone app, these things still matter, but in a very indirect sense. They get convolved (or low-passed, as Alan Kay describes) with other concerns about shipping dates and usability and so on. I would be interested to see a classification of different domains in which "worse is better" and "the right thing" philosophies dominate, and those in which they are represented in roughly equal proportions.
It's not too instructive to look back on things that occurred mostly due to happenstance and try to assign reasoning to it.
And it's a bit of a stretch to associate Linux with "Worse is Better." A major reason for using Linux in the early days was that it was the best alternative to Windows 95 because it got process isolation on x86 right.
I don't see how to reconcile your first paragraph and your second paragraph. First you dismiss your virtual opponent's argument as trying to assign meaning to a random (or incomprehensible) outcome, then you turn around and assign meaning to the same thing.
This actually reminds me of the Plato/Aristotle difference. Plato held that there was an ideal, perfect version of everything in a sort of Idea Heaven, and the goal of the philosopher was to get ever closer to understanding that ideal.
Aristotle, on the other hand, thought that Heaven was too remote, and held that we could learn more by measuring what we see in this world. As opposed to the presumably ideal, but inaccessible, concepts in Heaven.
The medieval church loved Plato; the scientific revolution loved Aristotle.
My point is that the difference between these two frameworks for interpreting the world seems to be fundamental. Fundamental in the sense that the distinction has been with us for at least a couple of millennia, and we are apparently not likely to agree on a single answer anytime soon.
It's funny you bring this up, because Platonic idealism has been thoroughly debunked in the past century. Some of the latest thinking on the subject is known as "new materialism," and its core tenet is exactly the "technical evolution" that Yossi talks about in the article. I recommend Manuel de Landa's War in the Age of Intelligent Machines for an introduction (it's as near to a hacker's philosophy book as I've seen).
I try really hard to not take a left vs right view in software design.
I sometimes build systems that are overengineered and I sometimes build systems that are underengineered.
I do believe that every line of code, every function, every class, every comment, every test, everything, is like a puppy that you have to take care of.
If a team adds a "dependency injection framework" that adds a parameter to each and every constructor in a system that has 800 classes, that's a real cost that's going to make doing anything with that system harder.
I'm a big believer in "cogs bad" because I've seen large teams live the lesson.
From my viewpoint the perfect system is as simple as possible, but well engineered.
My angle on the problem is the concept of "engineering debt": a well-designed product is the state of being "debt-free", and each deviation from good design is a unit of engineering debt. That debt has to be serviced in the form of contortions you have to make to work around the design flaws, and then eventually paid down in the form of a rewrite, or discharged in an engineering bankruptcy (such as abandoning the product).
Engineering debt, much like financial debt, is an instrument one can use to trade some present-point expenditure for a larger future expenditure. Where one makes sense, so often does the other.
Sadly, engineering debt is much harder to account for. Old companies are carrying huge amounts of debt and are oftentimes oblivious to it.
I think we could advance the state of the art if we were to find a way to quantify engineering debt. As a starting point I suggest a ratio of line changes aimed at servicing vs. line changes aimed at creating new features. If 100 lines of new functionality require 10 lines of base code changes, the debt is low; if the opposite is true, the debt is high. I believe such a metric could speak to both business managers and engineers, so it provides a good common ground for the two groups to reach consensus and prioritize work.
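A rough sketch of how such a metric might be computed, assuming the line changes have already been labelled by purpose; the labels and numbers here are invented for illustration:

    from collections import Counter

    # Hypothetical commit log, each change labelled by intent.
    commits = [
        {"purpose": "feature",   "lines_changed": 100},
        {"purpose": "servicing", "lines_changed": 10},
        {"purpose": "feature",   "lines_changed": 250},
        {"purpose": "servicing", "lines_changed": 180},
    ]

    totals = Counter()
    for commit in commits:
        totals[commit["purpose"]] += commit["lines_changed"]

    # Low ratio => low debt: little rework needed per unit of new functionality.
    debt_ratio = totals["servicing"] / float(totals["feature"])
    print("servicing/feature line ratio: %.2f" % debt_ratio)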
The woes of high debt have always been my argument against all debt, ever since very early experiences as a programmer where I had to recover from a high-debt situation. I thought I was being clever in pointing out a cost that is overlooked by divorced-from-details managers, and blew it out of all proportion. What I forgot, and what is the main point I take from this essay, is that the cost of keeping debt low increases exponentially as the debt decreases.
> I think we could advance the state of the art if we were to find a way to quantify engineering debt. As a starting point I suggest a ratio of line changes aimed at servicing vs. line changes aimed at creating new features. If 100 lines of new functionality require 10 lines of base code changes, the debt is low; if the opposite is true, the debt is high
That's the thing with this debt. You can only quantify it once you've paid it back, because its quantity is predicated on the cost of paying it back, which differs depending on your aptitude for doing so. And because it's invisible, neurotic programmers like me can start to actively fear it, leading to poor decisions.
What's really funny is that he says Google is ruled by people who only care about product and that the platform is suffering, whereas Amazon is all services by edict. But the most successful Android tablet product out there is the Amazon Kindle Fire. So Google didn't even beat Amazon on product, and even ended up having to copy it with the Nexus 7.
I think the lesson is, "whatever is available tends to propagate, even if it is shit. Especially if it is for a large mass of humans, who tend to act stupidly en masse."
You can see it all over the place. How great of an Internet provider was AOL, for example?