IMO, that's a pretty poor summary of how the "Unix Wars" really ended. Bell UNIX got productized into System V, and the trademark was dumped off on TOG.
But "Unix" was really more of an ideal. The ideal system may not have existed, but a lot of people saw the potential of the flawed heaven in there. Including Stallman and Torvalds. Imagine "Industry-standard APIs" which are actually non-negotiable, and not just some compliance-test. Well, you need the source code, right? We have a much better "unix" now than we ever had with "UNIX".
The “Unix Wars” battle lines were drawn in 1988 with the formation of the X/Open Group and Unix International, the same year Bell Labs put together the 10th Edition’s manual, more or less demarcating its release according to their own conventions even if it wasn’t sold or distributed. Their next project was Plan 9.
Incidentally this was also the year of the last major version of AT&T’s System V, System V Release 4. There would be a couple more minor releases after that, but there was never a System V Release 5.
> The ideal system may not have existed, but a lot of people saw the potential of the flawed heaven in there. Including Stallman and Torvalds.
I don’t think Stallman or Torvalds ever saw anything so romantic in UNIX. You could ask them, but it doesn’t jibe well with anything in the historical record.
> We have a much better "unix" now than we ever had with "UNIX".
We have better operating systems, yes, and for a price and some elbow grease, some of them can even use the UNIX trademark which checks a box for some people who might care about that sort of thing.
Back in the early 2000s, Red Hat did a study and found the UNIX brand name actually had negative value among IT managers. They saw it as "expensive, proprietary, incompatible", etc. Meanwhile Linux was seen very positively. (So yeah, nobody cares, not even the Mac users who pretend to.)
You know Meta, the "social media company", came out and said its users spend less than 10% of their time interacting with people they actually know?
"Social media" has become a euphemism for 'scrolling entertainment, ragebait and cats' and has nothing to do with 'being social'. There is NO difference between modern Reddit and Facebook in that sense. (Less than 5% of users are on old.reddit; the majority is subject to the algorithm.)
I don't go to Meta properties to interact with anyone anymore.
Our extended family has a WhatsApp group, but that's about it. I'm not convincing a bunch of 60+ year olds to switch to a new platform they're not used to.
Instagram has been read-only for me for half a decade; I still poke my head in on FB now and then to spy on people I used to know.
IIUC, Microsoft bought out Mono and donated it to Wine, making it effectively legacy. It probably still 'works', but not many big IT departments are going to run critical apps on some old unsupported hack job.
which I found while doing some performance work at one point. intercooler.js started as a custom function that hooked .load() in, based on custom attributes (a trick I learned from Angular 1.x)
In case you missed them: check out querySelector and querySelectorAll. They are closer to what the jQuery selector system does, and I think they were inspired by it.
If the verbosity bothers you, you can always define a utility function with a short name (although I'm not personally a fan of this kind of thing).
body.qsa('.class').forEach(e => { /* ... */ });
Yes, add qs() and Array.from(qsa()) aliases to the Node prototype, and .body to the window, and you’ve saved yourself thousands of keystrokes. Then you can get creative with Proxy if you want to, but I never saw the need.
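For concreteness, here's a rough sketch of what those aliases could look like (I'd hang them off Element and Document rather than Node, since those are the prototypes that actually have querySelector; the names are just the shorthands discussed here):

    // Shorthand query aliases, as described above.
    for (const proto of [Element.prototype, Document.prototype]) {
      proto.qs = function (sel) { return this.querySelector(sel); };
      proto.qsa = function (sel) { return Array.from(this.querySelectorAll(sel)); };
    }

    // Expose document.body as a bare `body` global so `body.qsa('.class')` works.
    Object.defineProperty(window, 'body', { get: () => document.body });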
Agreed if you're a library developer. If you're an app or website developer then it's your project. Everyone else should steer clear of adding to native prototypes, just so they are clean for the end user.
If you are an app or website developer, at least you won't break others' systems.
But you might still break stuff in your own projects. Imagine you extend a native prototype with a method, and later the native prototype starts having a method with the same name.
Newer libraries start using that new standard method.
You upgrade the libraries your website depends on, or add a dependency, and this new code happens to depend on that native prototype. Only you replaced it with your custom method, and that method likely doesn't have the exact same behavior. You broke that new code and fixing this might not be trivial because uses of your custom method are sprinkled everywhere in your code.
It only works if you only ever work on projects that have zero dependencies, or dependencies you never update.
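A made-up sketch of how that bites you (the method name here is purely illustrative):

    // Today: extend the native prototype with our own helper.
    Array.prototype.flatten = function () { return this.join(','); };

    // Later: suppose the platform (or a dependency) standardizes a method with
    // the same name that is expected to return a flattened array. A library
    // calling [1, [2, 3]].flatten() now gets the string "1,2,3" instead of
    // [1, 2, 3], and breaks in ways that are painful to trace.

This is roughly the story behind why the real standard method ended up being called flat() rather than flatten(): an old library had already claimed the name on Array.prototype.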
Or you could spare yourself the trouble and define a method that takes the node as a parameter.
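Something like this, say:

    // A plain helper instead of touching any prototype (names illustrative).
    const qsa = (root, sel) => Array.from(root.querySelectorAll(sel));

    qsa(document.body, '.class').forEach(e => { /* ... */ });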
It's also a question of forming good habits: you could be working on your projects now, forming a habit of extending prototypes, but will you remember not to do this the day you write a library?
By the way, how can you be sure you won't move some of your app code to a library because you like your utility functions and would like to reuse them in another project of yours? And why not open source that shared code, so you can just install it with NPM? Bam, that stuff is a library now.
> You upgrade the libraries your website depends on, or add a dependency, and this new code happens to depend on that native prototype. Only you replaced it with your custom method, and that method likely doesn't have the exact same behavior. You broke that new code and fixing this might not be trivial because uses of your custom method are sprinkled everywhere in your code.
He was suggesting adding a prototype method, not replacing one. Unless the library you're using is also adding to prototypes, I can't think of an issue with this. Sure, if a new version of JS ends up using these names then things could break, but I'd bet this won't cause him a problem in practice.
Thanks for the feedback. But you did recommend a method that takes the node as a parameter. What protects me from that method name being claimed by some library later in the exact same way?
I used to use prototype (and sometimes scriptaculous)... Then came IE8 and broke the world on me.
For anyone who didn't know: IE8 implemented native JSON.parse/JSON.stringify methods, and the second parameter is a hydrator/dehydrator... however, if you added custom properties/methods to the Array or Object prototypes, they would throw an error you couldn't catch in JS. So to work around it, you'd have to load the JSON library and use it under a different name, in ALL your code, because the native implementation was locked/sealed and couldn't be masked out.
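For reference, those second parameters are what the spec calls a reviver (for JSON.parse) and a replacer (for JSON.stringify); roughly:

    // Reviver: rebuild ("hydrate") values while parsing.
    const parsed = JSON.parse('{"when":"2009-06-01T00:00:00.000Z"}',
      (key, value) => key === 'when' ? new Date(value) : value);

    // Replacer: filter ("dehydrate") values while serializing.
    const json = JSON.stringify({ user: 'me', password: 'hunter2' },
      (key, value) => key === 'password' ? undefined : value);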
The second most annoying IE bug was 5.0.0 and the old/new APIs for dealing with select elements. The new one worked, the old one was broken, have fun.
I hate to sound like a webdev stereotype but surely the parsing step of querySelector, which is cached, is not slow enough to warrant maintaining such a build step.
The problem with jQuery is that, being imperative, it quickly becomes complex when you need to handle more than one thing, because you need to cover all the cases imperatively.
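A made-up illustration: once two or three bits of state interact, every handler has to restate what the DOM should look like.

    // Hypothetical filterable list with an "empty" message.
    $('#filter').on('input', function () {
      var q = $(this).val().toLowerCase();
      $('.row').each(function () {
        $(this).toggle($(this).text().toLowerCase().indexOf(q) !== -1);
      });
      $('#empty-message').toggle($('.row:visible').length === 0);
    });
    // ...and the same bookkeeping has to be repeated in the "clear", "add row"
    // and "remove row" handlers, which is where the complexity creeps in.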
Yeah, that's the other HN koan about "You probably don't need React if..." But if you are using jquery/vanilla to shove state into your HTML, you probably actually do need something like react.
After having some time to think about it, I've seen some really perverse DOM stuff in jQuery. Like $(el).next().children(3) type stuff. So I think this stuff really fell over when there was 'too much state' for the DOM.
I think if you want to go heavy on DOM manipulation a la jQuery, and want some form of complex state, storing the state _on_ the DOM might make sense? Things like data attributes and such, but I also feel like that’s itching for something more like htmx or maybe svelte (I’ve not looked into either enough, so I may be completely off base).
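For example (element and attribute names made up), a bit of state riding along on the node itself:

    // Keep simple UI state in a data-* attribute rather than a separate store.
    var btn = document.querySelector('#like-button');
    btn.addEventListener('click', function () {
      var count = Number(btn.dataset.count || 0) + 1;
      btn.dataset.count = count;                 // state lives on the DOM node
      btn.textContent = 'Likes: ' + count;
    });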
I do agree with the notion that jQuery is easy to mishandle when logic grows beyond a pretty narrow (mostly stateless) scope. It’s fantastic up until that point, and incredibly easy to footgun beyond it.
Part of me feels the same way, and ~2015 me was full on SPA believer, but nowadays I sigh a little sigh of relief when I land on a site with the aesthetic markers of PHP and jQuery and not whatever Facebook Marketplace is made out of. Not saying I’d personally want to code in either of them, but I appreciate that they work (or fail) predictably, and usually don’t grind my browser tab to a halt. Maybe it’s because sites that used jQuery and survived, survived because they didn’t exceed a very low threshold of complexity.
I'm mixed on HTMX. For going a step beyond interactive forms, it's fine... but much more than that and I find HTMX (and server-side Blazor, for that matter) really janky... button events with a round trip to the server can just feel wrong.
FWIW, also hated the old ASP.Net Webforms round trips (and omg massive event state on anything resembling dialup or less than a few mbps in the early 00's).
I just wish that React and other SPA devs kept some awareness of total payload sizes... I'm more tolerant than most, but once you start crossing into MBs of compressed JS, it's too much. Then things start getting janky.
There are some really retrograde governments and big corps running ten-year-old infrastructure. And if that is your customer base? You do it. Plus I worked on a consumer launch site for something you might remember, and we got the late requirement for IE7 support, because that's what the executives in Japan had. No customers cared, but yeah, it worked in IE7.
Oh, certainly, corporations run ten-year-old software. But for the record, IE 11 turns 13 this year [1]. Which makes it somewhat more surprising to me.
My reading is that they’ll support Edge’s IE 11 compatibility mode until then, but that IE 11 is already EOLed except for a couple of extremely niche enterprise versions.
As an aside, most histories conclude that IBM "fucked up" and allowed a PC clone ecosystem to grow up, so they only dominated PCs for 7 years or so, and after that it was too late to put the genie back in the bottle.
But if you look at the market for business PCs back in 1981, it was completely open anarchy surrounding the CP/M-80 "8-bit PC standard". There were hundreds of vendors, including some big names in the computer industry. (IBM certainly didn't care about Commodore etc. And as the article mentions, this took tons of support time from understaffed DR.)
So I think IBM quite strategically wanted to define the "16-bit PC standard", which they controlled for most of the decade. And Microsoft was the most compliant vendor. (There were antitrust restrictions on IBM too...)
On a larger level, it sounds like DR thought 16-bit was just a repeat of the early 8-bit computer industry and didn't really understand what IBM was up to.
On the software end, after reading a couple of books about MSFT in that era (pre-Win 3.0), I think it was clearly manned by people who were smart and determined to be dominant in the business. I think they were more serious than a lot of their competitors.
I think the key was that Microsoft could see the market was expanding exponentially. Grasping the effects of exponential change is difficult for the human mind, but Gates et al. grasped that.
Exchanging a smaller short term reward for control of something exponentially growing is obvious in hindsight but a brilliant insight at the time.
Their other insight was to go for the business market and not the home market, unlike so many, frankly, better products in the 80s. The difference being that businesses replace their equipment once it’s been written off, while the cycle times for home and educational users are much longer.
IBM themselves concluded that they fucked up with the PC. The PS/2 line was their second bite at the apple; the Micro Channel bus was proprietary. If they couldn't actually put the genie back in the bottle, it wasn't for lack of trying.
Most people are not aware that after the failure of the PS/2 attempt to control the PC market, IBM tried a third time using 7 patents it had for the PC AT (they didn't have any on the original PC or the XT). In the first half of the 1990s they went after the chipset makers (mostly in Japan at that time), and in the second half of the 1990s they went after the PC makers themselves all around the world. They would threaten to sue for all machines made up to that point unless they licensed not only the 7 AT patents (which would expire in 2001) but also a bunch of other unrelated patents that were much newer. As far as I know, everybody signed the deal, which meant that IBM could make money without actually making any PCs themselves.
This is true, but the patents were all RAND-licensed, so the press reported that IBM made about $5 per PC. Which isn't nothing, but IBM had no ability to restrict PC market segments (like they had with Micro Channel). So we soon saw "commodity" PC servers and even midrange systems stealing IBM's bread and butter.
In 'Managing Technical People', 1997, page 199, Watts Humphrey says that, after several failed attempts to produce a PC by IBM procedures, they set up an independent team that could skip the procedures as necessary to get the job done. This worked in the short term but it had two side-effects that were catastrophic in the long term: they lost control of the operating system to Microsoft, and they also lost control of the chips to Intel. He says both of these side-effects would have been caught by the checks inherent in the normal IBM procedures.
They might have caught the mistakes but I think the resulting product would have been a flop.
If IBM hadn't done what it did somebody else would have dominated the market with a product to fit the same niche. Perhaps somebody "downmarket" in the more consumer space who managed to punch upwards -- maybe Apple, who had some business success with the Apple II + VisiCalc, etc., or maybe Kaypro or somebody in the CP/M space. Or perhaps somebody else "upmarket" like DEC, who came too late to the personal computer space with products that nobody really bought (DEC Rainbow, etc.) but maybe they'd have had more success if IBM hadn't gotten in there.
The market wanted a relatively open product to innovate in. When the PS/2 came along a few years later with proprietary bus, etc and tried to put the genie back in the bottle, it flopped.
One of my favorite scenes from Pirates of Silicon Valley is when Steve Ballmer (played by John DiMaggio, yes, the voice of Bender) narrates an audience aside, positively giddy with Ballmer glee at how his clever friend Bill put one over on those stodgy fellows from IBM and got them to give away the golden goose. It was like harpooning the great white whale. While it was a telefilm and thus a fictionalization, near as I can tell things happened pretty much as described.
I have been in similar meetings a couple of times, when the client didn't really understand the long-term ramifications of their game plan - and we did. Of course, we weren't as fortunate as Microsoft, but in one instance I remember clearly - and where I made remarks privately as we left the building - the client was more or less forced to acquire, for an unreasonably high amount, the tiny vendor a couple of years down the line.
A meaningful gamble IBM made at the time was whether the BIOS was copyrightable - Williams v. Artic wasn't a thing until 1982, and it was really Apple v. Franklin in 1983 that left the industry concluding they couldn't just copy IBM's ROMs.
AFAICT, back in 2010 they had a partner who used a scummy email vendor. And he's still trying to re-litigate that? Email is so untrusted at this point, it seems not worth dredging up. The original site is gone and is now an AI startup.
Also, to add: before Mailchimp and SendGrid etc., there weren't many obviously reputable vendors in the email space. The business people were dealing with a salesman who was sure you wouldn't get spam-holed.
It's an interesting discussion. Also, a lot of stuff from the 19th century only became famous after copyright expired (and only sometimes was the author able to profit from this, other times they died in poverty.)
28+28+... (renewable terms) might be a better model. I know there's a bunch of decent stuff from the 1990s which will never again have any real economic value, but would be fine for soundtracks or TikToks or MAME, etc.
But "Unix" was really more of an ideal. The ideal system may not have existed, but a lot of people saw the potential of the flawed heaven in there. Including Stallman and Torvalds. Imagine "Industry-standard APIs" which are actually non-negotiable, and not just some compliance-test. Well, you need the source code, right? We have a much better "unix" now than we ever had with "UNIX".
reply