Honestly, the loss of resource forks in the transition from Classic Mac OS to Mac OS X was a real sore spot for me. Sure, a UNIX-based OS like OS X was going to push a different paradigm for file handling by default, but Apple really should have found a way to keep resource forks around. I loved how intuitive file handling was in Classic Mac OS: no pesky three-letter file extensions driving program associations and the like.
Stewart Cheifet and Gary Kildall were a dynamic duo. Really appreciated the awareness they gave to the general public about computing and the wave of the future.
Very much agree. Being in the UK I never saw the original broadcasts, but I've enjoyed them on YouTube. (the ability to "time-travel" with YouTube never ceases to amaze me.)
We had a similar pair over here in .uk -- Chris Serle and Ian McNaught-Davis. They had a down-to-earth way of presenting the likes of which we will not see again, mainly because of their characters, but also because of the context in which they were presenting: as you say, seeking to make the public aware of the wave of the future. (https://en.wikipedia.org/wiki/The_Computer_Programme (the theme music of which was Kraftwerk's "Computer World", no less!))
Too many people have a vested interest in keeping Bitcoin going for as long as possible, sadly. It's going to take a massive black swan of some kind to shake their faith.
Heck, they can embed CSAM into the Bitcoin blockchain and that won't stop anyone from using it, because above all else, line must go up.
ISA slots were definitely rather rare on motherboards by the time you got to the Pentium 4 era, so that's cool that you managed to find one that also offered DMA, since I believe Sound Blaster cards needed that to properly function.
I think I would have done the same with my AWE64 Gold if that was still an option for me in the early 2000s.
Having googled it a bit now, it's fully possible I have my wires crossed, since I know that P4 machine had the SiS 645 chipset which of course had built in audio.
I definitely used the Sound Blaster with my 486DX100, and I recall migrating it to at least one other machine after that; it was nice for the joystick port and also the better wavetable synth on classic games.
This is the part where I just stick with console for specific titles, like the GTA franchise. I feel like Rockstar treats the PC as a second class citizen anyway.
I bought my PS5 Pro in anticipation of GTA 6 and the (hopefully) upcoming DragonQuest 12. May my prayers be answered.
Console doesn't work for anything but platformers. Competitive online gaming = PC (Aoe2, BF6, Dark and Darker, Swordai, MWO, The Finals, War Thunder, PS2 etc.).
I definitely get the appeal of running FPS and such on PC. I'm much more accurate with a mouse and keyboard combo over a controller, but I'm appealing to the strengths a console does have as well. I own both so I can change up my experience to whichever offers the best one for me.
I do feel there might be a day of reckoning where Nvidia's bet-the-farm push into this AI bubble ends up blowing up in their face.
I hope gamers, systems integrators, and regular PC enthusiasts don't have the memory of a goldfish and go back to business as usual. It needs to hurt Nvidia in the pocketbook.
Will this happen? Unlikely, but hope springs eternal.
Nvidia's share price will take a hit when consolidation starts in AI, because their business won't be growing as fast as their PE ratio implies. Also, the circular deals could hurt them if one of the AI providers they've invested in goes bust.[1],[2] They won't go out of business, but holders of their shares may lose lots of money. But will this happen after Anthropic and OpenAI have their IPOs, possibly next year? Nvidia stands to make a lot on paper if those IPOs do well.
If OpenAI has their IPO, this is likely going to result in retail getting fleeced, given how pitiful the return on their investments has been to date. They are seeing revenues of around $13 billion for 2025, with claims of $100 billion or more by 2030, but the investments they are making are orders of magnitude greater. Who is ultimately going to pay for this?
Surely OpenAI has customers buying their pro packages for ChatGPT, but that can't really be it. And businesses are starting to realize that AI can't replace the workforce that easily either.
Hardly taking this personally. Just calling out how I see it most likely going. Also... Nvidia has done quite a bit unethically. Namely: violating anti-monopoly laws (though with our current US administration, those may as well not be worth the paper they are printed on), screwing with product reviewers, pulling a 90s-era Microsoft to obliterate their competition at all costs, and screwing over their board partners, like EVGA. GamersNexus on YouTube has covered plenty of this.
That said, although AI has some uniquely good applications, this AI mania is feeding into some ridiculous corporate feedback loop that is having a negative impact on the consumer.
Having to pay several thousand dollars for a top-tier consumer GeForce, when less than a decade ago you could do the same for only a few hundred, tells me the customer is being taken for a ride. It stinks.
I don't get this. Nvidia didn't "bet the farm" on AI. They are simply allocating limited resources (in this case memory) to their most profitable products. Yes, it sucks for gamers, but I see Nvidia more reacting to the current marketplace than driving that change.
If/when the AI bubble bursts, Nvidia will just readjust their resource allocation accordingly.
I also don't understand the common sentiment that if/when the AI bubble pops and hardware manufacturers come crawling back, we consumers are going to make them regret their decision.
Isn't the whole problem that all the manufacturers are pivoting away from consumers and toward AI? How are we going to "hurt Nvidia in the pocketbook?" Buy from their competitors? But they are also making these pivots/"turning their backs on us." Just abstain from buying hardware out of protest? As soon as prices go down there's gonna be a buying frenzy from everyone who's been waiting this whole time.
If/when the bubble pops, manufacturers will find that they can't butter their bread like they could when the datacenter craze was booming. In a world that is paved by growth, companies aren't very good at shrinking.
It doesn't matter what consumers do or don't do -- we plebians are a tiny portion of their present market. We can buy the same GPUs from the same folks as before, or we can do something different, and it won't matter.
Whatever we do will be a rounding error in the jagged, gaping, infected hole where the AI market once was.
This is an even-handed take. I still think consumers in general should vote with their wallets, even if all of them put together won't hold a candle to their datacenter customers. If nothing else, it can grant the competition more market share, and maybe AMD and Intel can invest more into Radeon and Arc, respectively. That can only be a good thing, since I'd love to see more broad support for FSR and XeSS technologies on games, and ROCm and oneAPI for compute.
Oh, for sure. It's often good to bet on the underdog in a competitive market -- it helps ensure that competition continues to exist.
When I sold PC hardware, I'd try to find the right fit for a customer's needs and pricepoint. Way back then, that often meant selling systems with relatively-inexpensive Cyrix or AMD CPUs and more RAM instead of systems with more-expensive Intel CPUs that had less RAM at any given price -- because those were good tradeoffs to make. By extension, I did a very small part to help foster competition.
But gamers drive the bulk of non-datacenter GPU sales and they don't necessarily act that way.
Having observed their behavior for decades, I feel confident in saying that they broadly promote whatever the top dog is today (whether they can afford to be in that club or not), and aren't shy about punching down on those who suggest a less-performant option regardless of its fitness for a particular purpose.
Or at least: The ones who behave this way sure do manage to be loud about it. (And in propaganda, loudness counts.)
I suspect they'll be fawning over nVidia for as long as nVidia keeps producing what is perceived to be the fastest thing, even if it is made from pure unobtanium.
I had one of those for what seemed like an eternity.
At first, right out of the gate: I overclocked it from 300MHz to 350MHz just to see what would happen. It worked perfectly without further adjustment (and the next step did not), so I left it right there at 350MHz. For the price, at that time, it kept up great compared to what my peers had going on.
As the years ticked by and it was getting long in the tooth, it stayed around -- but it shifted roles.
I think the last thing it was doing for me was running a $25 SoundBlaster Live! 5.1's EMU10k1 DSP chip under Windows, using the kX audio drivers.
kX let a person use that DSP chip for what it was -- an audio-oriented DSP with some audio-centric IO. With kX, a person could drop basic DSP blocks into the GUI and wire them together arbitrarily, and also wire them into the real world.
I used it as a parametric EQ and active crossover for the stereo in my home office -- unless I was also using it as a bass preamp, in a different mode. Low-latency real-time software DSP was mostly a non-starter at that time, but these functions and routings were all done within the EMU10k1 and end-to-end latency was low enough to play a bass guitar through.
Of course: It still required a computer to run it, and I had a new family at that time and things like the electric bill were very important to me. So I underclocked and undervolted the K6-2 for passive cooling, booted Windows from a CompactFlash card (what spinning HDD?), and hacked the power supply fan to just-barely turn and rotate air over the heatsinks.
It went from a relatively high-cost past-performer to a rather silent, low-power rig with only one moving part, which I'd remote into over the LAN to wiggle DSP settings on.
Neat chips, the K6-2 and EMU10k1 were.
Fun times.
(And to bring it all back 'round: We'd be in a different place right now if things like the K6-2 had been more popular than they were. I don't know if it'd be better or worse, but it'd sure be different.)
Dude seriously this is such a nice story. I especially love how you used the EMU10k1 DSP in conjunction with your K6 system to its fullest potential. :D
Speaking of sound cards, I distinctly remember the Sound Blaster Audigy being the very last discrete sound card my dad obtained before we stuck with AC’97, and later the HDA codec audio solution on the motherboard.
I do vaguely recall the kX drivers you mentioned, but I’m pretty sure we stuck with whatever came stock from Creative Labs, for better or for worse. Also… that SB16 emulation under DOS for the Live! and Audigy series cards was not great, having been a carry over from the ENSONIQ days. The fact that I needed EMM386 to use it was a bit of a buzzkill.
On the K6-II+ system we had, we used an AWE64 Gold on the good ol’ ISA bus. Probably my favorite sound card of all time, followed by the Aureal Vortex 2.
Sound cards were cool. Kids these days with their approximately-perfect high-res DACs built into their $12 Apple headphone adapters don't know what it was like. ;)
My mom had a computer with a SoundBlaster 16. I carried that sound card across the room one day for whatever reason a kid does a thing like that, and it got zapped pretty bad with static. It still worked after that, but it learned the strangest new function: It became microphonic. You could shout into the sound card and hear it through the speakers.
But other than being microphonic, the noise wasn't unusual: Sound cards were noisy.
At one point around the turn of the century, I scored a YMF724-based card that featured an ADC stage that actually sounded good, and was quiet. I used this with a FreeBSD box along with a dedicated radio tuner to record some radio shows that I liked. That machine wasn't fast enough to encode decent MP3s in real-time, but it was quick enough to dump PCM audio through a FIFO and onto the hard drive without skipping a beat. MP3 encoding happened later -- asynchronously. It was all scheduled with cron jobs, and with NTP the start times were dead-nuts on. (Sometimes, there'd be 2 or 3 nice'd LAME processes stacked up and running at once. FreeBSD didn't care. It was also routing packets for the multi-link PPP dialup Internet connection at the house, rendering print jobs for a fickle Alps MD-1000 printer, and doing whatever else I tossed at it.)
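If anyone's curious what that looked like in practice, here's the rough shape of it as a little Python sketch rather than the original cron + shell plumbing (the paths, FIFO name, and lame flags below are purely illustrative, not what actually ran):

    import subprocess, time

    FIFO = "/var/spool/radio/capture.fifo"   # the sound card's PCM gets dumped into this FIFO
    SPOOL = "/var/spool/radio/"

    def record(minutes):
        """Copy raw PCM from the capture FIFO to disk for the show's duration."""
        raw = SPOOL + time.strftime("show-%Y%m%d-%H%M.pcm")
        deadline = time.time() + minutes * 60
        with open(FIFO, "rb") as src, open(raw, "wb") as dst:
            while time.time() < deadline:
                dst.write(src.read(64 * 1024))
        return raw

    def encode_later(raw):
        """Kick off MP3 encoding asynchronously at low priority, like the nice'd LAME jobs."""
        subprocess.Popen(["nice", "-n", "19", "lame", "-r", raw, raw + ".mp3"])

    if __name__ == "__main__":
        encode_later(record(30))   # cron (kept punctual by NTP) would start this at showtime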
I used 4front's OSS drivers to get there, which was amusing: IIRC, YMF724 support was an extra-cost item. And I was bothered by this because I'd already paid for it once, for Linux. I complained about that to nobody in particular on IRC, and some rando appeared, asked me what features I wanted for the FreeBSD driver, and they sent me a license file that just worked not two minutes later. "I know the hash they use," they said.
There's a few other memorable cards that I had at various points. I had a CT3670, which was an ISA SoundBlaster with an EMU 8k that had two 30-pin SIMM sockets on it for sample RAM.
There was the Zoltrix Nightingale, which was a CMI8738-based device that was $15 brand new (plus another $12 or something for the optional toslink breakout bracket). The analog bits sounded like crap and it had no bespoke synth or other wizardry, but it had bit-perfect digital IO and a pass-through mode that worked as an SCMS stripper. It was both a wonderful and very shitty sound card, notable mostly because of this contrast.
I've got an Audigy 2 ZS here. I think that may represent the pinnacle of the EMU10k1/10k2 era. (And I'm not an avid gear hoarder, so while I may elect to keep that around forever, it's also likely to be the very last sound card I'll ever own.)
And these days, of course, things are different -- but they're also the same. On my desk at home is a Biamp Tesira. It's a fairly serious rackmount DSP that's meant for conference rooms and convention centers and such, with a dozen balanced inputs and 8 balanced outputs, and this one also has Dante for networked audio. It's got a USB port on it that shows up in Linux as a 2-channel sound card. In practice, it just does the same things that I used the K6-2/EMU10k1/kX machine for: An active crossover, some EQ, and whatever weird DSP creations I feel like doodling up.
But it can do some neat stuff, like: This stereo doesn't have a loudness control, and I decided that it should have something like that. So I had the bot help write a Python script that watches the hardware volume control that I've attached and assigned, computes Fletcher-Munson/ISO 226 equal-loudness curves, and shoves the results into an EQ block in a fashion that is as real-time as the Tesira's rather slow IP control channel will allow.
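For anyone wondering what that script roughly does, here's a stripped-down sketch of the loudness-offset half of it. The band list, reference level, taper numbers, and push_eq() stand-in are all invented for illustration; the real thing uses proper ISO 226 contour data and writes into the Tesira's EQ block over its IP control channel instead of printing.

    # Crude loudness compensation: as the volume knob drops below a reference
    # level, boost the lows (and a touch of the highs) so the perceived tonal
    # balance stays roughly flat.
    EQ_BANDS_HZ = [40, 100, 250, 1000, 4000, 12000]   # hypothetical EQ block bands
    REFERENCE_DB = 0.0                                # volume setting treated as "flat"

    def loudness_offsets(volume_db):
        """Return per-band gain offsets in dB for a given master volume in dB."""
        deficit = max(0.0, REFERENCE_DB - volume_db)  # how far below reference we sit
        offsets = {}
        for f in EQ_BANDS_HZ:
            if f <= 250:
                offsets[f] = min(12.0, 0.4 * deficit)   # strong bass compensation
            elif f >= 8000:
                offsets[f] = min(6.0, 0.15 * deficit)   # gentle treble compensation
            else:
                offsets[f] = 0.0                        # leave the mids alone
        return offsets

    def push_eq(offsets):
        """Placeholder for writing the offsets into the DSP's EQ block over IP."""
        for freq, gain in sorted(offsets.items()):
            print(f"{freq} Hz -> {gain:+.1f} dB")

    push_eq(loudness_offsets(volume_db=-30.0))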
Holy cow. Again, kudos for the details. This has been a fantastic digression so far lol.
So I do strongly remember Sound Blaster cards, specifically of the SB16 variety, being jokingly referred to as “Noise Blasters” for quite some time, due to the horrible noise floor they had as well as all the hiss. One of the reasons I loved the AWE64 Gold was because Creative did manage to get that well under control by that point, along with other fixes introduced with DSP 4.16. I still have an AWE64 Gold in my collection, complete with the SPDIF bracket, that I will never sell, due to sentimental reasons.
The YMF724 card you mentioned… did that happen to have coaxial SPDIF perchance? I heard that, unlike the SPDIF implementation found on the AWE series cards from Creative, the YMF724 SPDIF carried all audio over it, even under DOS. Not just 44.1 kHz specific sound, which I believe Creative sourced from the EMU8k. Plus, as an added bonus, if your motherboard offered SBLINK (also known as PC/PCI), you could interface with the PCI sound card interrupts directly in DOS without memory-hogging TSRs.
As for the final sound card I ever owned before abandoning them, mine was the rather unique ESI Juli@, back in the 2011/2012 timeframe. I loved how the audio ports came on a zany breakout cable for the MIDI and RCA features, as well as how the board could flip around for different-style jacks.
One other remark that leads to a question. Linux users back in the day had a penchant for choosing one audio API over the other in Linux, like ALSA, OSS, or PulseAudio. Did you play around much with these in the dog days of Linux?
For the YMF724: I really don't remember that part of it, but I'd like to think that if it had SPDIF built out that I really would have paid attention to that detail. The only reason I went through the horrors of using the cheap-at-every-expense CMI8738 Zoltrix card was to get SPDIF to feed an external DAC (and finally live in silence), and if the YMF724 I had included it then my memories would be shaped differently. :)
And I'm usually pretty good with model numbers, but it's possible that this card really didn't have one. Back then, I got a lot of hardware from an amazing shop that sold things in literal white boxes -- stuff that they'd buy in bulk from Taiwan or wherever and stock on the shelves in simple white boxes with a card (in a static bag) inside. No book, no driver disk.
These boxes had a description literally pasted onto them; sometimes black-and-white, and sometimes copied on one of those fancy new color copiers, sometimes with jumper settings if appropriate -- and sometimes without. Some of the parts were name-brand (I bought a Diamond SpeedStar V330 from there with its minty nVidia Riva128 -- that one had a color label), but other times they were approximately as generic as anything could ever be.
Or, I'd pick up stuff even cheaper from the Dayton Hamvention. There were huge quantities of astoundingly-cheap computer parts of questionable origin moving through that show.
But no, no SPDIF on that device that I recall. It may have been on the board as a JST or something, but if it was then I absolutely never used it.
I do remember that bit about the EMU8k's SPDIF output -- my CT3670 had that, too. IIRC it was TTL-level and not galvanically-isolated or protected in any way, on a 2-pin 0.1" header. IIRC, it didn't even have the 75 Ohm terminating resistor that should have been there. I was disappointed by the fact that it only output audio data from the EMU8k, since that part didn't handle PCM audio.
But! There was a software project way back then that abused the EMU8k to do it anyway: Load up the sample RAM with some PCM, and play it. Repeat over and over again with just the right timing (loading samples in advance, and clearing the ones that have been used), give it a device name, and bingo-bango: A person can play a high-latency MP3 over SPDIF on their SoundBlaster AWE-equivalent.
I was never able to make it work, but I sure did admire the hack value. :)
That ESI Juli@ is a very clever bit of kit, and I've not ever seen one before. I'm a bit in awe of its flexibility; the flip-card business is brilliant. There's got to be applications where that kind of thing could be used in the resurgent analog synth world.
It's very different, but for some reason it reminds me of the Lexicon Core 2 we used in a studio from 1999 until 2002 or so. This had its (sadly unbalanced) 4 inputs and 8 outputs on an external breakout box, and we gave it another 8 channels in each direction by plugging it into an ADAT machine. That was an odd configuration, and bouncing through the hardware reverb on the card was even odder.
The Core 2 did not work with the then-cutting-edge Athlon box we built for it and that was a real bummer -- we spent a lot of money on that rig, and I spent a ton of time troubleshooting it before giving up. (We then spent a lot more money replacing that board with a slotted Pentium 3.)
ALSA, OSS, PulseAudio: Yeah, all of those. I paid for OSS fairly early on, and that was also always very simple to make work -- and it did work great as long as a person only did one thing at a time. I really enjoyed the flexibility of ALSA -- it let me plug in things like software mixers, so I could hear a "ding" while I was playing an MP3. And I liked the network transparency of PulseAudio ("it's kind of like X, but for sound!") but nobody else really seemed interested in that aspect around that time.
If I had to pick just one as a favorite, it would definitely be OSS: The concept of one sound card with exactly one program that completely owned the hardware until it was done with it allowed for some very precise dealings, just like with MS-DOS. It felt familiar, plain, and robust.
So with regards to the YMF724... what you describe about how spartan the offering was to you doesn't surprise me in the least. That specific chip was pretty much offered to OEMs very cheaply by Yamaha, and a ton of Chinese card makers (mostly of the generic variety) snapped them up and implemented the chip however they saw fit. As for Yamaha, they apparently did produce their own branded soundcard based on a later revision of the YMF724 chip called the WaveForce WF-192XG.
There is some guy on YouTube who reviewed his specific generic YMF724-based sound card many, many years ago; it did have SPDIF on board and he seemed quite fond of it. Though later on, he retracted his initial recommendation due to how hard it is to find a YMF724 card built exactly like his. So to be honest, it's likely you didn't miss out after all.
Regarding the SPDIF implementation on the Creative AWE-series cards, those SPDIF brackets turned out to be very important due to their proper support for the TTL-level signal, yet back in the day, most users discarded it or lost them in the shuffle, making them exceedingly rare in the long run. If you are out shopping for any AWE32 or AWE64, good luck finding one with the bracket! Frankly, I wish Creative just slapped the coaxial SPDIF port on the card itself rather than on a separate bracket, though I suppose Creative wanted the world to know how they half-assed the implementation anyway. I digress. :)
That ESI Juli@ was the most interesting sound card I ever used, and I've not seen one like it since. I even recall making sure it had a dedicated PCI slot that did not connect over a PCI-to-PCIe bridge, in case that would introduce latency. Hilariously though, my needs were never particularly stringent. I was just being OCD.
For me? As far as Linux audio subsystems go, I preferred the ease of PulseAudio, even if it was rather buggy in its earlier days. I even played around with JACK many years later, but would go right back to PulseAudio, since it's truly set-it-and-forget-it in the current era... or I guess we default to pipewire now? I kinda stopped paying attention since audio is so seamless now.
That's a ton of good information about the YMF724.
You know, I don't think I ever played with the synth on that card at all. I was even a bit surprised earlier: I was trying to jog my memory about what the card looked like, and found the XG branding again (a quarter of a century after ignoring it the first time) and said to myself "Oh! That. I probably should have played around with that."
But MIDI music was never really my thing. I'm not much of a musician, and I found myself enjoying downloaded mod/669/s3m a lot more than any of the various notation-only formats. The games with revered synth work just never really crossed my radar. By the time I got into shooters, CD-ROM was definitely a well-entrenched concept -- along with PCM soundtracks. I still have the shareware Quake CD that I bought from a local record shop (which they only stocked because Trent Reznor did the soundtrack -- software wasn't their thing at all).
In the 1990s, I really hoped that SPDIF would become a common audio interface, with preamps and receivers and source devices (like computer sound cards) using it for IO. One cable. Perfect signal integrity using digital audio and affordable fiber optics -- at home! In a marketplace that was driven by buzzwords, it could have been a huge hit.
Instead: Even though it was common on things like MD, DCC, and unobtanium DAT gear, it was barely known amongst regular folks -- and receivers with digital IO didn't really become common until DVD.
But CRTs ruled during the peak DVD era, and many of those TVs had perfectly-adequate speakers for casual use that folks were content with. So the likelihood of them just happening to have two bits of kit in the same pile that could talk together with SPDIF was always very low, even then.
It seemed very much like a Catch-22: People didn't know about it because it was uncommon, and manufacturers didn't take it seriously because people didn't know enough about it to select gear that used it. It thus defaulted to remaining uncommon.
Its greatest market success seems to have been its utility in plugging a sound bar into a flat TV with terrible built-in speakers. Which is great, and all: It's a perfectly-cromulent use. But that started a decade or two too late.
I myself didn't own a CD player with an SPDIF output until 2012 or so, which is just bizarre in retrospect. What's even weirder is that it was an $8,000 Krell (that someone gave to me), and I found that I consistently preferred the sound of its internal DAC over that of anything else that I could connect it to digitally.... so I wound up never using the SPDIF outputs anyway. (But as experiences go, that's definitely in the realm of an outlier.)
Thanks. Of course, a quarter of a century or so after these went out of production, this isn’t exactly useful information, but fun nonetheless.
That’s the biggest issue with MIDI. No matter the equipment you had, you were never sure what the musician intended the composition to sound like, unless they explicitly mentioned the exact synth used in the metadata, like a Yamaha XG synth or a Roland Sound Canvas. I really appreciated how compact the file sizes were, but I can definitely understand sticking with PCM formats off audio CD or even WAV/AIFF/MP3 back then, depending on the application.
So, a possible fun tidbit about SPDIF: coaxial SPDIF, despite seeming more old school than its optical TOSLINK counterpart, could achieve higher bit depths and sample rates (sometimes up to 24-bit/192 kHz!!), whereas TOSLINK was officially limited to 16-bit/48 kHz, with manufacturers pushing as high as 24-bit/96 kHz off spec. Perfectly fine for your average music enjoyer of the time, but still an interesting limitation.
On mention of DAT and MD, those were two formats I would have loved to get into, if they weren’t so compromised due to RIAA shenanigans or too pricey. Such is life I suppose.
Yeah I’d say overall, I haven’t touched SPDIF in a long while myself. My current TV uses an eARC over HDMI soundbar setup and my PC connects using good old fashioned 3.5mm audio jacks.
One neat thing about specifications like toslink is how flexible they are -- or perhaps, how arbitrary they are.
At the core, both coaxial and toslink were just transport mediums for the same SPDIF bitstream. One used copper, and the other used bendy plastic fiber optics.
And yeah: Toslink was more-limited on bandwidth, by specification.
And one would think that this would be because the optics are not so good (they're definitely not so great), or something.
But then: Alesis showed up with ADAT, and ADAT's Lightpipe could send 8 channels of 24-bit/48 kHz audio over one bog-standard Toslink.
They used different encoding, of course. Instead of SPDIF's biphase-mark scheme, which spends a transition on every bit boundary, ADAT's coding treats any edge as a 1 and the lack of an edge as a 0. This let them pack a lot more bits in.
But in doing that (and whatever else they did), they multiplied the functional bandwidth of a lowly Toslink cable by a factor of about 6 -- using the same optical components at each end that Toshiba sold, and the same Toslink cable from the big box store.
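A toy way to see the difference, showing just the coding concept and ignoring all the real SPDIF/ADAT framing, preambles, and subframes:

    def biphase_mark(bits, level=0):
        """SPDIF-style coding: an edge at every bit boundary, plus a mid-cell
        edge for each 1. Two half-cells per bit, lots of transitions."""
        out = []
        for b in bits:
            level ^= 1          # mandatory edge at the bit boundary
            out.append(level)
            if b:
                level ^= 1      # extra mid-cell edge encodes a 1
            out.append(level)
        return out

    def nrzi(bits, level=0):
        """ADAT-style coding: any edge is a 1, no edge is a 0. One cell per bit."""
        out = []
        for b in bits:
            if b:
                level ^= 1
            out.append(level)
        return out

    def edges(signal):
        return sum(a != b for a, b in zip(signal, signal[1:]))

    bits = [1, 0, 1, 1, 0, 0, 1, 0]
    print("biphase-mark edges:", edges(biphase_mark(bits)))   # many more edges...
    print("NRZI edges:        ", edges(nrzi(bits)))           # ...for the same payload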
I think we've beaten sound cards and SPDIF to death here. :)
It's been fun. Perhaps we can do this again some day.
I certainly have no delusions of Nvidia going bankrupt. In fact, they will certainly make it to the other side without much issue. That said, I do foresee Nvidia taking a reputational hit, with AMD and (possibly) Intel gaining more mindshare among consumers.
When you have a CEO like Elon who swears up and down that you only need cameras for autonomous driving vehicles, and who skimps on crucial extras like lidar, can anyone be surprised by this result? Tesla also likes to take the motto of "move fast and break things" to a fault.
I just find it distracting to pretend we know exactly what albatross to hang around the neck of the problem here. While I do tend to think lidar is probably useful, I also don't think this is an open-and-shut case where lidar is absolutely essential and makes all the difference. Assertions like that assume more certainty than I think can be granted, and they harm the stronger point: that Tesla doesn't seem to have serious proof that their system is getting better, or that it is more trustworthy.
The data is just not there for us outsiders to make any kind of case, and that data is the crucial baseline we're being skimped on.
I'm not surprised, more because there was info on Reddit that Tesla FSDs were having disengagements every 200 miles or some such in urban environments. Camera only probably could work in the future but seemingly not yet.
It's easy to argue that LIDAR is expensive and unnecessary, but radar has been a standard for luxury cruise control for decades so it's got a variety of OEM suppliers. Thus Tesla's lack of radar because of the CEO's ego is damnable. The problem with camera-only is fog. My human eyes don't penetrate fog. Radar does. Proving that camera-only is possible is stupid and ego driven and doesn't come from a place of merit and science and technology.
Musk's success story is taking very bold bets almost flippantly. These bets carry a premium, because to most people they are so toxic that they would never consider them.
Every time he has the choice to do something conservative or something bold, he goes for the latter, and so long as he has a bit of luck, that is very much a winning strategy. To most people, I guess, the stress of always betting everything on red would be unbearable. I mean, the guy got a $300m cash payout in 1999! Hands up, who would keep working 100-hour weeks for 26 years after that?
I'm not saying it is either bad or good. He clearly did well out of it for himself financially. But I guess the whole cameras/lidar thing is similar. Because it's big, bold, from the outset unlikely to work, and it's a massive "fake it till you make it" thing.
But if he can crack it, again I guess he hits the jackpot. Never mind cars; they are expensive enough that lidar cost is a rounding error. But if he can then stick 3D vision into any old cheap cameras, surely that is worth a lot. In fact, wasn't this part of Tesla's great vision -- to diversify away from cars and into robots etc.? I'm sure the military would order thousands or millions of cheapo cameras that work 90% as well as a fancy lidar, while being fully solid state etc.
That he is using his clients as lab rats for it is yet another reason why I'm not buying one. But to me this is totally in character for Musk.
He's a complicated figure. He has done so much good as well. EVs in the US and reusable rockets owe a lot to him. OTOH, so does the cesspool that is X.
Bill Gates is still kickin'. There are credible independent estimates that his funding has saved tens of millions of lives that would've been lost to malaria, AIDS, and other diseases.
Effective altruism and other New Age garbage pseudo philosophy can't hold a candle to that.
And he's retired, so the money is no longer useful to him for maintaining control over the company he runs or expanding it, which is when it traditionally starts going to charities.
In my opinion, one of the things that most reveals a person's biases and worldview is which tech oligarchs they revere and which they loathe.
To reveal my own bias / worldview, I loathe and detest Bill Gates in nearly every way and have done so for over three decades. I think he has had a massively negative impact on humanity, mainly by making the computer industry so shitty for 4+ decades but in other more controversial areas as well.
With Elon Musk, while perceiving a number of big faults in the man, I also acknowledge that he has helped advance some very beneficial technologies (like electric vehicles and battery storage). So I have a mixed opinion on him, while with Gates, he is almost all evil and has had a massive negative impact on the planet.
I'm conflicted on this one. Famously, Tesla's main revenue source for ages was selling green credits to other car makers. Presumably, if not for Tesla, these car makers would have had to do something else.
The way I see it, he converted his cars' greenness into other people's fumes. So not a net win after all.
It rather reminds me of how Musk was obsessed with converting PayPal to run on Windows servers instead of Linux, and how that eventually got him ousted by the board. Because he already had a big share in the company, he made a lot of money. But he doesn't seem to be a clever engineer.
Completely self driving? Don't they go into a panic mode, stop the vehicle, then call back to a central location where a human driver can take remote control of the vehicle?
They've been seen doing this at crime scenes and in the middle of police traffic stops. That speaks volumes too.
Incorrect; humans never take over the controls. An operator is presented with a set of options and they choose one, which the car then performs. The human is never in direct control of the vehicle. If this process fails, then they send a physical human to drive the car.
> presented with a set of options and they choose one
> they send a physical human to drive the car.
Those all sound like "controls" to me.
"Fleet response can influence the Waymo Driver's path, whether indirectly through indicating lane closures, explicitly requesting the AV use a particular lane, or, in the most complex scenarios, explicitly proposing a path for the vehicle to consider. "
So they built new controls that typical vehicles don't have. Then they use them. I fail to see how any of this is "incorrect." It is, in fact, _built in_ to the system from the ground up.
Semantic games aside, it is obviously more incorrect to call them "completely self driving" especially when they "ask for help." Do human drivers do this while driving?
I don't know what you're trying to prove here. Stopping safely and waiting for human input in edge cases is fine (Waymo). Crashing into things is not fine (Tesla).
And there is a linked article about Waymo's data reporting, which is much more granular and detailed, whereas Tesla's is lumpy and redacted. Anyway, Waymo's data, with more than 100M miles of driverless operation, shows a 91% reduction in accidents vs. humans. Tesla's is 10x the human accident rate according to the Austin data.
The more I've looked into the topic, the less I think the removal of lidar was a cost issue. I think there are a lot of benefits to simplifying your sensor tech stack, and while I won't pretend to know the best solution, removing things like lidar and ultrasonic sensors seems to have been a decision about improving performance. By doubling down on cameras, your technical team can remain focused on one sensor technology, and you don't have to deal with data priority and trust in the same way you do when you have a variety of sensors.
The only real test will be who creates the best product, and while Waymo seems to have the lead, it's arguably too soon to tell.
Having multiple sources of data is a benefit, not a problem. Entire signal processing and engineering domains exist to take advantage of this. Even the humble Kalman filter lets you combine multiple noisy sources to get a more accurate result than would be possible using any one source.
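To make that concrete, here's a minimal 1-D sketch of the idea in plain Python, with made-up sensor numbers (a full Kalman filter adds a motion model and runs an update like this every time step):

    def fuse(x1, var1, x2, var2):
        """Inverse-variance weighting of two independent estimates of the same quantity.
        The fused variance is always smaller than either input variance."""
        w1 = var2 / (var1 + var2)
        w2 = var1 / (var1 + var2)
        return w1 * x1 + w2 * x2, (var1 * var2) / (var1 + var2)

    # e.g. a noisier camera range and a tighter radar range to the same object (metres):
    camera_est, camera_var = 24.0, 4.0
    radar_est, radar_var = 22.5, 1.0

    est, var = fuse(camera_est, camera_var, radar_est, radar_var)
    print(f"fused: {est:.2f} m, variance {var:.2f} m^2")   # 22.80 m, 0.80 m^2 -- better than either sensor alone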
What I've heard out of Elon and engineers on the team is that some of these variations of sensors create ambiguity, especially around faults. So if you have a camera and a radar sensor and they're providing conflicting information, it's much harder to tell which is correct compared to just having redundant camera sensors.
I will also add, from my personal experience: while some filters work best together (like IMU/GNSS), we usually used either lidar or camera, not both. Part of the reason was that combining them started requiring a lot more overhead and cross-sensor experts, and it took away from the actual problems we were trying to solve. While I suppose one could argue this is a cost issue (just hire more engineers!), I do think there's value in simplifying your tech stack whenever possible. The fewer independent parts you have, the faster you can move and the more people can become an expert on one thing.
Again Waymo's lead suggests this logic might be wrong but I think there is a solid engineering defense for moving towards just computer vision. Cameras are by far the best sensor, and there are tangible benefits other than just cost.
Fanboy or not, we don't know how much Waymo's model relies on an army of contractors labeling every stop light, sign, and trash can so that, sure, they can use LIDAR to detect them and not cameras. We also don't know much about Tesla's Robotaxi initiative and how much human help they're relying on either.
First of all, their approach does not rely on map level labeling for runtime. They do that for training, but so does every other player. High-precision maps are used as a form of GPS in their solution. They also use them to determine the delta if road conditions change, and to alert ops.
Second of all, they’re using cameras to detect things like signs and trash cans! I don’t know where this misconception came from that Waymo only uses lidar, but it demonstrates a lack of familiarity with the problem space and the solutions
I can't edit this, but it's bugging me: "does not rely on map level labeling for runtime" isn't quite true. They do use tagging in their centralized mapping, which the Waymo driver uses as one source of truth. But ultimately, the onboard sensors and the onboard classifier and pathing engine are what's used.
This allows them to tag things that their cars are not recognizing, which are then fed back into the training set and validated. It allows them to "live patch" troublesome sections of road, but it's not intended for long-term use.
They also use this to close sections of road, such as for car accidents, flooding, etc.
From my previous comment, in case you didn't see it:
> Again Waymo's lead suggests this logic might be wrong but I think there is a solid engineering defense for moving towards just computer vision. Cameras are by far the best sensor, and there are tangible benefits other than just cost.
In our case if we're spending a lot of time on something that doesn't improve the product, it just takes away from the product. Like if we put 800 engineering hours into sensor fusion and lidar when the end product doesn't become materially better, we could have placed those 800 hours towards something else which makes the end product better.
It's not that we ran into problems, it's that the tech didn't deliver what we hoped when we could have used the time to build something better.
Kalman filters and more advanced aggregators add non-trivial latency. So even if one does not care about cost, there can be a drawback from having an extra sensor.
Cars and roads are built for human reaction times. That's why you have low speed limits in urban areas. You can have a pile of latencies attributable to processing a scene and still have superhuman reaction time that contributes to outperforming humans.
It's analogous to communications latency. High latencies are very annoying to humans, but below a threshold they stop mattering.
To tell what? Waymo is easily 5 years ahead on the tech alone, let alone the roll-out of the autonomous service. They may eventually catch up, but they are definitely behind.
This is a solved problem. Many people I know, including myself, use Waymos on a weekly basis. They are rock solid. Waymo has pretty unequivocally solved the problem. There is no wait and see.
I mean, if Waymo had unequivocally solved the problem, the country would be covered in them, and the only limit to their expansion would be how many cars they can produce. Currently they're limited by how quickly they can train on new areas, which is likely slowed by the fact that they're using over 20 sensors across four different types. On the other hand, Tesla could spread across the country tomorrow if they were reliable enough. I would think solving autonomous driving would imply you could go nationwide with your product.
Right, nobody has solved it. If either company had solved self driving, it would be in basically every US city. While it is my opinion that Waymo is further ahead, no company has solved the problem yet and because of that it's still not clear what the best solution will be.
Sure, but Tesla is already losing the race. They were ahead a few years ago, but not anymore. They bet on getting autonomous driving done with cameras only, which are cheap and have a simple, well-understood tech stack and ecosystem.
It didn't work out, though, and now multi-sensor systems are eating their lunch.
Honestly I think it's more that he was backed into a corner. The Teslas from ~9 years ago when they first started selling "full self driving" as an option, had some OK cameras and, by modern standards, a very crappy radar.
The radar they had really couldn't detect stationary objects. It relied on the doppler effect to look for moving objects. That would work most of the time, but sometimes there would be a stationary object in the road, and then the computer vision system would have to make a decision, and unfortunately in unusual situations like a firetruck parked at an angle to block off a crash site, the Tesla would plow into the firetruck.
Given that the radar couldn't really ever be reliable enough to create a self driving vehicle, after he hired Karpathy, Elon became convinced that the only way to meet the promise was to just ignore the radar and get the computer vision up to enough reliability to do FSD. By Tesla's own admission now, the hardware on those 2016+ vehicles is not adequate to do the job.
All of that is to say that IMO Elon's primary reason for his opinions about Lidar are simply because those older cars didn't have one, and he had promised to deliver FSD on that hardware, and therefore it couldn't be necessary, or he'd go broke paying out lawsuits. We will see what happens with the lawsuits.
Usually you would go in with the maximum amount of sensors and data, make it work, and then see what can be left out. It seems dumb to limit yourself from the beginning if you don't know yet what really works. But then, I am not a multi-billionaire, so what do I know?
Well we know that vision works based on human experience. So a few years ago it was a reasonable bet that cameras alone could solve this. The problem with Tesla is that they still continue to insist on that after it became apparent that vision alone, with the current tech and machine learning, does not work. They even refuse to use radar again, even though radar does not cost much and is very beneficial for safety.
> Well we know that vision works based on human experience.
Actually, we know that vision alone doesn't work.
Sun glare. Fog. Whiteouts. Intense downpours. All of them cause humans to get into accidents, and electronic cameras aren't even as good as human eyes due to dynamic range limitations.
Dead reckoning with GPS and maps is a huge advantage that autonomous cars have over humans. No matter what the conditions are, autonomous cars know where the car is and where the road is. No sliding off the road because you missed a turn.
Being able to control and sense the electric motors at each wheel is a big advantage over "driving feel" from the steering wheel and your inbuilt sense of acceleration.
Radar/lidar is just all upside above and beyond what humans can do.
Human vision is terrible in conditions like fog, rain, snow, darkness and many others. Other sensor types would do much better there. They should have known that a long time ago.
>The only real test will be who creates the best product, and while waymo seems to have the lead it's arguably too soon to tell.
Price is a factor. I’ve been using the free self driving promo month on my model Y (hardware version 4), and it’s pretty nice 99% of the time.
I wouldn’t pay for it, but I can see a person with more limited faculties, perhaps due to age, finding it worthwhile. And it is available now in a $40k vehicle.
It’s not full self driving, and Waymo is obviously technologically better, but I don’t think anyone is beating Tesla’s utility to price ratio right now.
Seems to me, rather, that Teslas are self-driving cars with a handicap; they are missing some easily obtainable data because they lack the sensors. Because their CEO is so hard-headed.
Simplifying things doesn't always make things easier.
I do see 2.5 GbE NICs making huge headway within the past few years. Even lower-end AMD and Intel motherboards include 2.5 GbE ports by default these days, and the standard has the advantage of running over the same copper RJ45 cabling while giving you a theoretical 2.5x gain in speed. For many users, myself included, 2.5 GbE is a fantastic leap forward without having to dump money into fancier gear to properly take advantage of something faster like 10 GbE networking.