The excuse is that this firmware version is not available to the public. Yet Elon is driving it on public roads alongside other drivers, putting everyone at risk of death. At what point are we going to stop this lunacy?
I'm one of the more vocal Tesla critics on here, but I think we can do better than this example.
The left turn arrow turned green, the car inched forward a little, and Elon immediately put on the brakes. The situation was no more dangerous than when a human driver mistakes which light is theirs and does the same thing, which happens pretty regularly in my experience.
And that this is a test version of the software isn't irrelevant, it makes a huge difference—I am much less opposed to internal company testers who know what they're doing than I am to a public beta in the hands of people who believe Tesla's (really egregious) marketing.
It felt like more than a few inches. (I'm not splitting hairs here, I really do feel it was qualitatively significantly different from your description, which made it sound like the car was cautiously starting to move just a bit.)
> and Elon immediately put on the brakes. The situation was no more dangerous than when a human driver mistakes which light is theirs and does the same thing, which happens pretty regularly in my experience.
When humans make this mistake, they stop the car themselves. When this car made this mistake, someone else (Elon) had to stop it. These are not equivalent. In the former case you can argue no accident was going to happen, but in the latter you can't.
> that this is a test version of the software isn't irrelevant, it makes a huge difference—I am much less opposed to internal company testers who know what they're doing
This is on a public road. Try pulling this off (well, please don't) in medicine and see how it goes.
> When humans make this mistake, they stop the car themselves. When this car made this mistake, someone else (Elon) had to stop it.
For me this is the important part. Too many people already drive far too distracted. Imagine giving many of those same distracted drivers a great reason to ignore what they're doing even more. Very frequently when I'm stopped at a traffic light I'll see the person next to me on their phone waiting for the light. Would we trust that someone doing that would look up when their car starts driving into the intersection?
I really do want self driving cars to be a thing, but I wish we were going about it a bit differently, and I wish other people who want it didn't play things down as much as they do.
> For me this is the important part. Too many people already drive far too distracted. Imagine giving many of those same distracted drivers a great reason to ignore what they're doing even more. Very frequently when I'm stopped at a traffic light I'll see the person next to me on their phone waiting for the light. Would we trust that someone doing that would look up when their car starts driving into the intersection?
Isn't this precisely why self-driving (even if it isn't perfect) could actually make the roads safer? People, as you say, are already super distracted (partly because driving is boring).
I am one of the believers that self driving cars will one day make roads safer. I just think that the technology we have now that keeps getting called "self driving" is not there yet. If it requires the driver's full attention to keep things safe but also makes individual drivers feel like they can be a little less attentive then we don't really have a very safe situation on our hands.
When I say that the technology makes people feel like they can be less attentive I really do mean it. There was the SF tech worker who was playing candy crush or something when his Tesla smashed into a barrier on the highway. I have friends who own Teslas and talk frequently about how they like taking them on road trips because they can relax a bit more and pay less attention to the flow of traffic (it'll brake for you!!! they say). In a world where these cars have to share the road with human drivers and drive on roads that are under construction or in poor weather conditions I just don't see how we can say this is safe.
Top comment: "The cabin camera really does feel super solid at detecting when I’m distracted. Even if I’m just like searching a song on the infotainment it will get onto me which is annoying but I completely understand and am glad that it works so well ..."
First, let's assume that perfect, law-abiding self-driving cars exist. On one hand they would eliminate incidents caused by inattentive driving; on the other hand they would create incidents that an attentive driver with the right of way would have avoided by yielding to an inattentive driver. The change in total incidents would depend on the proportion of these events. Anecdotal evidence is anecdotal, but in my own experience the number of incidents I have avoided simply by yielding when I had the right of way is much higher than the number of incidents I have gotten into due to my own mistake.
Second, actual "self driving" cars are far from that, especially in their interaction with other drivers.
Third, there are second-order effects. E.g. a car quickly, unexpectedly maneuvering could cause another car to brake sharply, which could end up in an accident where the original car is not even involved. With more cars behaving differently from the local custom, such accidents are bound to happen.
Most probably we are going to see an increase in the number of accidents with the proliferation of semi-autonomous vehicles before that number starts to dwindle.
> giving many of those same distracted drivers a great reason to ignore
the attentiveness monitoring and strike system? If they ignore it, they will quickly exhaust their strikes and get locked out for bad behavior until they learn to take it seriously.
It's hard to tell because the camera is at such a weird angle, but from what I can see the vehicle remains pretty firmly behind the stop line throughout the entire encounter. We can quibble about how fast it was accelerating, but I regularly see worse false starts at lights.
> When this car made this mistake, someone else (Elon) had to stop it. These are not equivalent. In the former case you can argue no accident was going to happen, but in the latter you can't.
I'm actually more worried about the human case than the human+autonomous case. In the human case it is up to the entity that made the mistake to correct their own mistake. In the autonomous vehicle case you effectively have a second set of eyes as long as the driver is paying attention (which they should be and Musk was). This is why I say that it makes a difference that this was internal testing—the driver wasn't a dumb consumer trusting the vehicle, it was the CEO of the company who knew he was using in-progress software.
> This is on a public road. Try pulling this off (well, please don't) in medicine and see how it goes.
Requiring that autonomous vehicles never be tested on a public road in real world conditions is another way of saying that you do not believe autonomous vehicles should ever exist. At some point they have to be tested in the real world, and they will make mistakes when they leave the controlled conditions of a closed course.
> remains pretty firmly behind the stop line throughout the entire encounter
That's not the same thing as "inched forward" is my point.
> In the human case it is up to the entity that made the mistake to correct their own mistake.
You're completely ignoring how likely these events are or how severe the errors are in the first place. You can't just count the number of correction points and measure safety solely based on that.
> Requiring that autonomous vehicles never be tested on a public road
I never said that. (How did you get from "look at how it's done in medicine" to "this should never be done"?) What I do expect is responsible testing, which implies you don't test in production until at least you yourself are damn sure that you've done everything you possibly can otherwise. Given everything in the video I see no reason to believe that was the case here.
> Requiring that autonomous vehicles never be tested on a public road in real world conditions is another way of saying that you do not believe autonomous vehicles should ever exist.
Sure, but that's a wildly different case than "it's ready for public roads we super promise"
As I noted at the very beginning of my first comment, I am a huge critic of many things that Tesla does, and releasing their software in beta to casual drivers is something I'm strongly opposed to. All I'm saying here is that this specific critique of this specific video is misplaced.
One day I hope the New Drug Application process can have a monitoring and supervision system as sophisticated as FSD Beta.
Just imagine: Constant 100% always-on supervision, supervision of the supervisors with 3-strikes you're out attentiveness monitoring, automatic and manual reporting of possible anomalies with full system+surroundings snapshots to inform diagnostics and development, immediate feedback of these into the simulations that validate new versions, and staged rollout that starts at smaller N (driving simulators are actually pretty good) and continues intensive monitoring up to larger N. Even Phase 3 trials only involve thousands of people, while FSD beta is driving a million miles per day with monitoring that feels more like Phase 1 or mayyybe Phase 2.
One day drug development will be this sophisticated, and it will be glorious.
> When humans make this mistake, they stop the car themselves. When this car made this mistake, someone else (Elon) had to stop it. These are not equivalent. In the former case you can argue no accident was going to happen, but in the latter you can't.
This is not nuanced enough. I've been in a cab to JFK in the snow where the driver was speeding around a turn so fast that the car started sliding and eventually crashed into the side of the road.
Er, your comment is the one lacking nuance. There was no snow here, nor did I claim accidents never happen. I was trying to get across a point about the parent's argument.
Your point boils down to a "what if" though. If it's as dangerous as you make it, then you should be able to show plenty of examples where actual harm is happening. Showcase those.
Over 700 allegedly fatal crashes attributable to FSD [1] that Tesla has officially reported to the government, over an estimated 400M miles on FSD. That makes the driver 150x more likely to be involved in a fatal crash than if they were driving on their own.
Note that these are based on auditable published statistics and are likely an overestimate of the risk, as we must assume the worst when doing safety-critical analysis. Tesla could improve these numbers by not deliberately suppressing incident reports and by not deliberately declining to investigate in order to avoid confirming fatality reports. But until they do so, we need to err on the side of caution and the consumer instead of the for-profit corporation.
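For anyone who wants to sanity-check that multiplier, here's a minimal back-of-envelope sketch. The 700 crashes and 400M miles are the figures from the comment above; the baseline of roughly 1.2 fatal crashes per 100M vehicle miles is my own assumed US average, so treat the output as illustrative only.

```python
# Rough check of the ~150x figure, using the numbers quoted above.
fsd_fatal_crashes = 700              # alleged, from the parent comment
fsd_miles = 400e6                    # estimated FSD miles, from the parent comment
baseline_per_mile = 1.2 / 100e6      # assumed US average: ~1.2 fatal crashes per 100M miles

fsd_per_mile = fsd_fatal_crashes / fsd_miles
print(f"FSD rate:      {fsd_per_mile:.2e} fatal crashes per mile")
print(f"Baseline rate: {baseline_per_mile:.2e} fatal crashes per mile")
print(f"Ratio:         ~{fsd_per_mile / baseline_per_mile:.0f}x")  # ~146x with these inputs
```

Obviously the conclusion is only as good as the inputs, which is exactly why the lack of an auditable numerator and denominator from Tesla matters.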
> And that this is a test version of the software isn't irrelevant, it makes a huge difference—I am much less opposed to internal company testers who know what they're doing than I am to a public beta in the hands of people who believe Tesla's (really egregious) marketing.
There is absolutely no reason you should assume "testers" know what they are doing. I have met plenty of people with decades of experience in "testing" who barely know what they are doing. Even in the case they know what they are doing, they shouldn't be testing a *deadly* vehicle with potentially broken software on heavily populated *public* roads.
> Even in the case they know what they are doing, they shouldn't be testing a deadly vehicle with potentially broken software on heavily populated public roads.
This is another way of saying that self-driving vehicles shouldn't exist at all. At some point we have to test them on public roads, preferably before putting the software into the hands of regular users. If you ban even internal company testing, then what you're saying is that self-driving vehicles should never exist.
Not even remotely the same thing. You completely glossed over me saying potentially broken software and heavily populated. Surely there is a way to simulate a left turn signal in a safer manner for software this early in the testing process.
All software is potentially broken. If you've only ever tested your software in a controlled driving range, then you don't know how it will behave when you take it out into the real world. If you've only ever tested it on lightly populated roads, then you don't know how it will behave when you take it onto heavily populated roads.
It's not surprising that it made a mistake during testing. That's what testing is for: rooting out mistakes.
No. Public road deployment is for validation, not testing.
Testing is for rooting out errors. Validation is for proving you achieve the desired specifications and failure rate. Validation occurs when you believe you are done or when testing becomes inadequate to discover failures. It is about “proving” the negative with respect to errors.
Broken, incomplete software where defects can be routinely discovered after light usage has no place being used by consumers on public roads.
"potentially broken software" = "all software", therefore you said self driving software should never exist and you are happy to support 40,000+ people dying in the US every year who could be saved with autonomous vehicle technology. Your lack of ethics is worrying.
> This is another way of saying that self-driving vehicles shouldn't exist at all.
It's another way of saying that self-driving vehicles should be invented somewhere else, so that in 10 years you have to beg and plead for an overpriced second-rate implementation with a worse safety profile.
Except for the part where they can not even detect trivial cases. One of the easiest possible cases, a giant teddy bear in the seat “holding” the wheel (by attaching a simple weight to the wheel), is determined to be an attentive driver [1]. A T-ball and they strike out.
The inability to robustly solve simple, obvious cases, and the failure to fail safe when they are missed, is indicative of a sloppy development and validation process that is incompatible with the deployment of a safe safety-critical device.
When cars first debuted, they were incredibly dangerous and killed lots of people. Yet we persisted.
Airplanes, too.
Fully autonomous cars are on the streets of San Francisco. They run into trouble, yet the experiment persists.
We should allow for civil suits, especially when negligence is found, but the only way to progress forward is to make real world attempts.
The promise of autonomous driving is worth trillions of dollars for all of the opportunity it will unlock. And as macabre as it may be, there will continue to be deaths along that development path.
I wouldn't really say “we persisted” in the case of cars, as much as the auto industry threw boatloads of money at lobbying to make people shift the blame from cars to their victims. Cars didn't get safer (for the people outside them), we just started blaming the people they killed instead.
If we allow criminal suits when drivers kill someone due to carelessness we should allow criminal suits when companies kill people due to carelessness. Companies shouldn’t be held to lower standards than individuals.
> The promise of autonomous driving is worth trillions of dollars for all of the opportunity it will unlock. And as macabre as it may be, there will continue to be deaths along that development path.
“We should kill people because it will unlock market opportunity”, is why we need to start charging the people in charge of these companies with murder when they inevitably do just that. I don’t care about Tesla stock, I care about not being killed by one of Musk’s “hype” projects.
Cars were not "incredibly dangerous" when they first debuted because they topped out at about 40 mph (and went considerably slower most of the time). That's pretty comparable to a horse and carriage.
Yes they were. Besides the speed aspect, there were other things too.
For example, the hand crank was known to be fatal and pushed the founder of Cadillac, Henry Leland, to get rid of it when his friend was killed after helping a lady on the side of the road.
Yeah, the danger posed by cars was initially limited because they had to share the road with lots of slower traffic (carts, bicycles, pedestrians etc. etc.). Only forbidding pedestrians from using the same space as cars enabled cars to become dangerous in the first place...
Also, horses were incredibly dangerous. In NYC in 1900 for example the pedestrian fatality rate from horses was higher than the NYC 2003 pedestrian fatality rate from cars [1].
Similarly in England and Wales deaths from horse-drawn vehicles were around 70 per 1 million people per year in the early 1900s. That's in the same ballpark as motor vehicle deaths in the 1980s and '90s (80-100 per 1 million people per year) [2].
(It should be noted that medical technology is better now. A lot of those 1900 fatalities would probably have been preventable if they had current medical technology).
A lot of people on HN seem to view the horse era as some sort of idyllic safe, quiet, and unobtrusive pedestrian and rider paradise. It was not. From that second article:
> It is easy to imagine that a hundred years ago, when cars were first appearing on our roads, they replaced previously peaceful, gentle and safe forms of travel. In fact, motor vehicles were welcomed as the answer to a desperate state of affairs. In 1900 it was calculated that in England and Wales there were around 100,000 horse drawn public passenger vehicles, half a million trade vehicles and about half a million private carriages. Towns in England had to cope with over 100 million tons of horse droppings a year (much of it was dumped at night in the slums) and countless gallons of urine. Men wore spats and women favoured outdoor ankle-length coats not out of a sense of fashion but because of the splash of liquified manure; and it was so noisy that straw had to be put down outside hospitals to muffle the clatter of horses’ hooves. Worst of all, with horses and carriages locked in immovable traffic jams, transport was grinding to a halt in London and other cities.
and
> Motor vehicles were welcomed because they were faster, safer, unlikely to swerve or bolt, better able to brake in an emergency, and took up less room: a single large lorry could pull a load that would take several teams of horses and wagons – and do so without producing any dung. By World War One industry had become dependent on lorries, traffic cruised freely down Oxford Street and Piccadilly, specialists parked their expensive cars outside their houses in Harley and Wimpole Street, and the lives of general practitioners were transformed. By using even the cheapest of cars doctors no longer had to wake the stable lad and harness the horse to attend a night call. Instead it was ‘one pull of the handle and they were off’. Further, general practitioners could visit nearly twice as many patients in a day than they could in the days of the horse and trap.
> I am much less opposed to internal company testers who know what they're doing than I am to a public beta in the hands of people who believe Tesla's (really egregious) marketing.
Which category do you think Elon Musk is more likely to belong to?
I still don't understand how some people think it's reasonable that car malfunctions are hand waved away as "it's a beta." We're not talking about an Early Access game on Steam here. We're talking about death machines that weigh thousands of pounds.
Beta software shouldn't be on the road where other people can be impacted by it. Want to run into the back of a fire truck? Fine by me. Just do it on a track (and record it). I'll laugh from my computer chair rather than being potentially exposed to it on the road.
100%. It feels as if some people can't turn off that SDE switch in their brain. If you have a bug in your microservice, fix it and deploy a new container. That doesn't work with actual bodies.
Does Steam Early Access require an always-on camera that tracks your eye movement, warns if you look away, and locks you out if you ignore the warnings?
Tesla's monitoring system is about as effective as OceanGate's continuous monitoring system.
From one of the posts above: the monitoring system could not detect if a person was in the seat or not, would autonomously drive with a giant teddy bear, a giant unicorn, or a completely empty seat, and would repeatedly hit a moving object the size of a child.
> Except for the part where they can not even detect trivial cases. One of the easiest possible cases, a giant teddy bear in the seat “holding” the wheel (by attaching a simple weight to the wheel), is determined to be an attentive driver [1]. A T-ball and they strike out.
> The inability to robustly solve simple, obvious cases, and the failure to fail safe when they are missed, is indicative of a sloppy development and validation process that is incompatible with the deployment of a safe safety-critical device.
[1] https://www.youtube.com/watch?v=CPMoLmQgxTw
A bypass that requires completely vacating the driver seat is probably not easy enough to be widely exploited. If you have evidence to the contrary, please do tell, but while we are exchanging memes, let's review the Green Hills Software standard of excellence, just to contextualize the shade they throw: https://www.youtube.com/shorts/UdSVMgay0MI
People deliberately bypassing the system doesn’t mean it’s not effective. Seatbelts are also pretty effective unless you don’t put them on and then put some duct tape across your chest to fool police officers.
If it's being sold on the open market it's moved beyond the "testing" phase and the executives should be fully legally and financially culpable for the damages caused by their untested yet heavily marketed product.
I'm talking about FSD in general, but maybe I should broaden my statement: If it's being used on public roads with other drivers, it's moved beyond "testing" and into liability territory. None of the people on the road with Musk consented to being part of his life threatening experiment.
They also don't consent to dangerous behavior from other drivers. The world is messy and the article is about a trial to judge exactly what you're talking about.
Stoplights are always red colored circles of LEDs. It's by far the easiest thing SDCs need to be able to do.
Any version of SDC software that has a stoplight bug is proof that the QC process of that software is broken and that the makers don't care about safety.
In the aforementioned video[1], the car seems to detect 1 out of 2 stoplights, both of which are clearly visible from the camera. They're illuminated red and not too glossy to detect.
Most Teslas have multiple cameras, so the viewing angle shouldn't matter as much to the vehicle as it does to a human.
Do you have anything of substance to say, or are you only capable of childish, condescending meme speak?
Do you have examples of stoplights that aren't red circles? Or do you have examples of when a Tesla approaching a glowing red circle should not recognize it a stoplight?
I built a CV system in 2013 that detected vehicle makes/models in varying weather and time of day. It was a lot harder than stoplights, we had a team of one, and it was with technology that is ancient by today's standards. Tesla has no excuse.
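For context on how low the bar for the basic perception task is, here's a rough sketch of classical red-circle detection with OpenCV. It's purely illustrative and nothing like a production traffic-light detector (no temporal tracking, no arrow lights, no occlusion handling), and the HSV thresholds are guesses that would need tuning per camera.

```python
import cv2

def detect_red_circles(frame_bgr):
    """Illustrative sketch: find bright red circular blobs
    (e.g. illuminated stoplights) in a single BGR frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red hue wraps around 0/180 in OpenCV's HSV space, so combine two ranges.
    lower = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))
    mask = cv2.GaussianBlur(lower | upper, (9, 9), 2)
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.5, minDist=20,
                               param1=100, param2=15, minRadius=3, maxRadius=60)
    return [] if circles is None else circles[0].tolist()  # [x, y, radius] per detection

# Usage (hypothetical image file):
# print(detect_red_circles(cv2.imread("intersection.jpg")))
```

Real systems obviously fuse multiple cameras, map data, and a learned detector with temporal filtering on top of this, but the point stands: "glowing red circle in clear daylight" is the easy case.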
You can't test it in a simulator because half of what you're testing is the interaction with the sensors. You can't simulate sensors with high enough fidelity to be meaningful.
And this is more like a staging environment than it is production—production is when you put it in the hands of customers, this is an internal release available only to employees.
I'm firmly opposed to FSD beta, but this video doesn't concern me at all.
> One way we ensure statistical realism in our simulated world is by creating realistic conditions for our autonomous driving technology to experience. So, if the Waymo Driver is going through spring showers in Detroit at sunset in SimulationCity, we can recreate raindrops on our sensors and even simulate other minute details such as the dimming light and solar glare.
If they can simulate rain drops, it's definitely possible to simulate traffic lights. They are a basic scenario for a self driving car. What this video highlights is the extreme lack of validation culture within Tesla FSD.
My point isn't that you can't simulate it at all, my point is that at some point you have to take it out of the simulation and it will behave differently in the real world than it did in the simulated one, because your simulation will never be good enough to account for everything.
We can't expect people to never test self-driving cars on public, populated roads unless we are just acknowledging that we don't believe self-driving cars should be a thing, because I sure as hell am not comfortable with the idea of cars going straight from simulation to production.
> I sure as hell am not comfortable with the idea of cars going straight from simulation to production.
This is what you're advocating for by saying people should test these things on public roads. That is production, none of the other drivers have consented to be part of the "test".
If your simulation and regression testing is so bad that it can not be used to reliably validate 1 in 10,000 mile failure modes, then you are not ready to validate on public streets with untrained consumers.
Tesla FSD routinely fails at 1 in 100 mile safety-critical failure modes. The simulation does not need to be perfect, it just needs to be 100x better than what they have and FSD needs to be 100x better before they should be allowed to inflict it on general consumers.
If they want to test with trained safety drivers in a robust and well-controlled real world environment to identify failure modes that need to be addressed, that is fine. But non-criminal usage by consumers is at least a factor of 100x away.
FSD makes basic errors, the kind that can easily be identified in simulation rather than requiring 450k cars on the road. They don't have to go straight from simulation to production either. That's why all the serious players have closed course testing and safety driver supervised testing.
Has Tesla built an entire "staging town" where no unsuspecting public is allowed to enter in which they test their cars? No? Then I would definitely call it "testing in production"...
Even if you could perfectly simulate conditions for the sensors to pick up (incl. precipitation, light, dust) you cannot possibly simulate all possible scenarios that can happen in the real world.
Most software systems are not tested on "all possible scenarios".
Instead you run some basic must-pass scenarios, like the one in the video, where there is a red light in front of the car, and a left turn green, in clear conditions and daylight.
Are you suggesting Musk already exhausted testing and safety development opportunities to the point that the only alternative is a logically impossible simulation?
That is only impossible if you have not meticulously specified the operational domain. Once you have the operational domain specified, you can enumerate each and every tiny detail that can interact.
In safety-critical systems design, specifying the operational domain is quite literally one of the first things done, even before any serious work on the design starts.
The only reasonable phase where safety-critical systems can interact with the public, under close monitoring, is acceptance testing, which either validates the model or drags the team back to the drawing board.
Stop what lunacy? You realize thousands upon thousands of Americans are driving drunk every day? Or drive watching TikTok videos on their phone. How many car accidents do you think happen every week because humans are drunk or distracted?
Elon had his hands on the steering wheel ready to make any intervention, this is hardly some crazy and dangerous testing going on. I would be far more worried about the average American driver than Elon testing the latest version of FSD.
I’ve had an incident where a crazy guy fully stopped his car on the I5 fast-lane to LA trying to brake test me before getting out of his car to threaten me. I’m far more scared of people than I am of software.
4. The video clearly shows no hands are on the wheel at various points when Elon lowers the camera in his hands or when making 90 degree turns so the side of the yoke is pointed up and visible with no hand on it. The video is literally direct video evidence that your claim is untrue and you have no idea what you are talking about.
In the "RealDanODowd" videos he has fitted an aftermarket accessory to the steering wheel to fake having someone touching it, and the other video, "Elon Mode", had someone literally hack the car. Having Elon himself make a publicity video isn't a reflection on how all the cars work.
Anyway, Ford lets you use their system without hands on the wheel, as do the majority of other manufacturers. People just want to hate on Elon because of the "anti woke" crap but there's nothing demonstrably different to any other auto maker.
Way back in 2019 I rode in a Honda Civic that had lane keeping functionality (auto steering and braking) which let you take your hands off the wheel for short periods of time. I imagine the tech was about 1000 times less sophisticated, and in fact I remember it leaving the lane and needing correction often. Where were the scary and inflammatory articles then?
Let's get real, there's one rule for any news about Tesla, and another rule for other car makers.
I said that Elon does not have his hands on the wheel. You then retorted saying that it will not let you drive without both hands on the wheel, implying that what I said is impossible.
I then provided 3 independent techniques available to Elon Musk that would allow him to drive without his hands on the wheel and pointed at direct video evidence proving your baseless assertion and implication are false.
As to your second point, Ford’s system is explicitly designed, advertised, and declared to be safe to operate hands free, with attention being determined through a camera driver monitoring system (DMS). Tesla explicitly declares in the legal fine print that their system should only be used by an attentive driver with their hands on the wheel.
If you think it is unfair to hold Tesla to their own standards for what is safe usage, then take that up with Tesla. If they just took liability for hands-free operation like Ford does, Tesla could remove that requirement, and there would be nothing to complain about when Tesla shows hands-free operation, because that would be a legally supported mode. Until they do so, they are absolutely guilty of demonstrating unsafe and unsupported operation in official, sanctioned advertising.
This is why I'm so disenchanted with this site these days. Just another echo chamber of people repeating whatever they think or want to be true to align with their cults of personality.
I don't ride a Tesla or an Elon and I know that's not true. SMH.
Lots of excuses in this thread for the average human driver (which is objectively low quality), but none for the software that is actively improving. This is why we can’t have nice things.
(have lost friends to both drunk and distracted drivers)
For what it’s worth, at least this is the kind of brain fart screwup a novice driver would actually do. “Hmm, I was looking at the radio, that guy next to me is moving, I can’t quite see the light, it must be green.”
It's funny to think that the cause of the bug could have been this actual mistake by actual humans sneaking through into their training data.
One of the things they talk about in the video is the hoops they had to jump through to get the car to make a complete stop at stop signs, since almost no human drivers actually do that.
Is there any mistake that a self-driving car could make that a novice (or any) human driver couldn't? I fail to see how this line of reasoning is relevant at all.
>You cause an accident by manually driving, you are legally liable.
>You release auto pilot software that causes an accident, you are legally liable.
The human behind the wheel should always be the one liable unless it can be shown that the human attempted to take corrective action and the system for some reason overrode the human or wouldn't allow the correction and "forced" the accident.
As someone who has operated aircraft, watercraft, motorcycles, and various other automobiles - end of the day if I'm the operator it's on ME to make sure it's in proper working order, that I'm familiar with how it works and how to react in an emergency, and to ensure that I'm attentive and operating it in a safe fashion. If I'm using assists it doesn't negate that responsibility.
If you aren't willing to take on that responsibility perhaps you shouldn't be operating the vehicle.
I've come across this behavior probably once, and I recall it happening in a situation where it “didn’t matter much”.
Other than deliberate situations, no one is passing a red light in a busy intersection like that. Even drunk drivers aren’t going to be passing through that intersection while it’s red.
Everyone alive everywhere is always at risk of death at any time; your statement has no meaning.
Are you trying to imply that a human driver monitoring a self driving system is somehow increasing the risk to others when it's actually decreasing risk?
A human driver 100% focused and ready to take over while an AI drives > a human driver 100% focused on the road driving by themselves >>>> a human driver who should be focused but trusts the AI and will not react in time if it messes up.
The question is what % of drivers will fall under 1 vs 3?
> I'll still take Tesla FSD over the average earthling driver any day
Sure. As long as Tesla itself, the company and its leaders and its shareholders, are held personally liable for any major autopilot or FSD fuckups.
Be it a monetary ticket or time in jail, have them all serve that responsibility each and every time. And if a long stint in "jail" - i.e. not being able the conduct business of any kind for that period - is too traumatic for the company to survive, then maybe they'd rethink their level of recklessness.
And certainly, if I'm in any sort of roadside screwup nearly regardless of fault, my insurance rates are going up a lot, for a long time. So too should that be the case for Tesla and their products whenever and wherever their software makes a decision that is involved in or the cause of the same.
Lol. Sure. I'm absolutely fine with that and everything else it would imply.
The entire issue of excessive CEO compensation for the sake of driving stock price, while being completely divorced from any repercussions of any knowingly harmful actions is exactly the capitalism part of "late stage capitalism" that sucks.
Let's restate your question -- Why should shareholders not get to benefit from blatant disregard for truth or safety without any liability on the line when that stance results in collateral damage?
You should not. I have Tesla FSD and without constant monitoring by a very cautious human driver it is a murder machine. The choice is not “human vs. Tesla” it is “human vs. dangerous Tesla mitigated by humans of varying degrees of competence.”
No you wouldn't. The average human driver is vastly safer than Autopilot. I will agree that autopilot is better than the worst 10% of drivers and maybe on par with the worst 20%. That group causes most accidents and almost all severe accidents. The average human driver drives something like 200,000 miles without any accidents.
edit: Upon looking up some stats I might have misremembered, in any case the median human drivers is definitely vastly safer than Autopilot.
Cars driven with autopilot were at 4.8 million miles between accidents in Q4 2022, so by your metric Autopilot is by definition vastly safer than the median human driver.
Try to find the number of miles driven or crashes in the report, you know, the numerator and denominator. You will not find it because they seem to think that a “safety report” is a single number. You would fail a 6th grade science fair project if you did that.
And they seem to think it is okay to deliberately publish such a report for two ton machines with software that has killed dozens of people. They are either so stupid they should not be allowed near a safety-critical device or they are being deliberately deceptive to a level that would make Elizabeth Holmes blush.
So no, their safety report is anti-evidence for safety. Only somebody with something to hide would publish such a deficient and deceptive report.
You are missing the point: it requires a human to monitor and take over, so by definition it's still a human doing the work. And humans with autopilot are not a random sample of humans; it's a biased sample.
The question is how many. If you encounter 100 cars on your daily commute are >50 of them running red lights? Or is it more like the bottom 1%? Would you be okay if self driving cars on the street fall in that 1%?
Now, if yinz'll excuse me, the driver in the opposite lane going straight has out-of-state plates. So, I need to stop typing this message to peel out / teach them a valuable lesson about right-of-way when the stop light changes in Yinzerville - as I head to a city-government-sanctioned hazard-light parking spot directly in front of Primanti's to pick up my slaw with fries and sandwich garnish (and sugar with coffee and cream).
The question is about average driving. Would you be okay with self driving cars on the streets that drive at the level of videos posted in IdiotsInCars?
And most people would agree that the people in those videos should be getting tickets and worse.
With their software, Tesla is more of an active participant in those ill-fated decisions and mistakes. So they deserve to bear more of the responsibility.
Not defending the buggy FSD in this specific video, but I've seen drivers anticipate a green and jump a bit into a red light. If you are the impatient type who inches forward the entire time you have a red light for no reason, you can easily move into the actual intersection, especially at smaller intersections. I've done it myself from time to time, nobody's perfect.
This is obviously worse because you have to be paying attention and be able to quickly take over, but it's not like this type of thing would never happen with a human driver.
This is not a fair comparison - Tesla is comparing miles driven by their systems, which are inherently limited in scope to "easier" scenarios (only lane assist/only on highways/etc.), against the total US average, which includes _all_ gnarly road situations.
As per the company, a Tesla with FSD Beta engaged experienced an airbag-deployed crash about every 3.2 million miles in the last 12 months. This makes the system about five times safer than the most recently available US average of 600,000 miles per police-reported crash. It should be noted that FSD Beta specifically works for inner-city streets, which involve tons of edge cases and unpredictable driving behaviors from other vehicles.
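For what it's worth, the "about five times" figure is just the ratio of the two numbers quoted in this subthread, which is why the apples-to-oranges objection above matters so much. A quick sketch, using those figures as given:

```python
# Ratio behind the "about five times safer" claim, numbers as quoted above.
fsd_miles_per_airbag_crash = 3.2e6   # Tesla-reported, FSD Beta engaged
us_miles_per_police_crash = 600e3    # US average, police-reported crashes
print(fsd_miles_per_airbag_crash / us_miles_per_police_crash)  # ~5.3

# Caveat: the two rates count different things (airbag-deployed crashes on
# FSD-selected miles vs. all police-reported crashes on all driving), so the
# ratio is not an apples-to-apples safety factor.
```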
AP better be safer than me on average, because anytime it gets a little confused it just squawks loudly and hands control back to me. Then if there's a crash 1 millisecond later they'll chalk that up to the human.
I really find it hard to understand how anyone who has driven a Tesla with AP for any distance thinks it is safer than a human driver. Maybe just drivers who are always texting or drunk. AP is the least defensive driver on the road, it needs close supervision.
“Some” grains of salt? The “safety statistics” should be straight up ignored out of hand.
Do you trust VW if they first party report their emissions? Philip Morris if they first party report the effects of smoking on lung cancer? Unaudited first party reports are literally worthless. They have every incentive to lie or misrepresent.
For that matter, Tesla has repeatedly, actively lied to consumers and misrepresented their products.
The current director of Autopilot software, Ashok Elluswamy, admitted under oath to staging the initial “Paint It Black” Autopilot demo video claiming full self driving capability: “The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.”.
The company misrepresented the range of their cars and actively suppressed complaints and gaslighted customers requesting service [1].
As for the “safety statistics” themselves, they are completely devoid of any information. The only information reported is a single ratio munging both Autopilot and FSD together.
They do not break out the products. They do not enumerate the crashes. They do not even report the number of crashes or number of miles used for calculating their “statistics”. They can not even be bothered to publish the damn numerator or denominator of their safety numbers. This is grade school level reporting and they are being allowed to drive 2 ton machines on public roads.
A 30 page research paper itemizing the specific crashes. Methodology, references, evaluation models, comparisons, etc. That is what a proper safety report looks like, not the grade school bullshit Tesla marketing is pushing. Tesla safety and compliance should be ashamed of themselves for prioritizing marketing over human lives.
If you think a typical random 44 minutes of the average person driving includes an attempted running of a red light, you're out of it. If most people drove like this all the time, the average car driver would already be six feet under. Tesla leans on this sort of extreme misanthropy to pitch their product as a better alternative. Tesla FSD drives worse than a drunk teenager, let alone a normal person on a normal day.
Inb4 the bicycle crowd arrives to tell me that the average car driver commits 50 felonies a minute.
Tesla's marketing has been in direct conflict with its capabilities, and hopefully they’ll get smacked down hard enough to stop being reckless with what they claim vs. what they disclaim later on.
It’s called Full Self Driving. That matters, even if they clarify in the fine print.
They've been taking money for cars with hardware capabilities to later become full-self driving, to my understanding. Could consider it taking money to pre-order FSD, but GP's point that it wasn't released yet is still relevant.
All the cars produced have the same “FSD” hardware regardless of options selected.
They are selling a software package called “Full Self Driving” that includes some current features, some “beta” features, and promises of additional functionality.
Yes, Tesla has not yet released any non-beta feature that is really “Full Self Driving” but the marketing can be pretty muddled on what the car does today vs what it may do in the future.
Yes, but which option you check doesn't change which hardware your car comes with.
The HW3 situation was Tesla updating the hardware for people who both bought an older car and paid for the FSD "preorder" package, since they later determined that the computer shipped in those older cars was not enough for upcoming functionality, despite saying in 2016 that all of their cars had the hardware necessary for Level 5 Self-Driving.
> Tesla CEO Elon Musk says the new hardware will support Level 5 autonomy, which means the cars can drive themselves completely. However, Tesla will not enable full self-driving immediately. Instead, Musk says the company will roll out features to enhance safety and assist drivers, eventually leading to full autonomy.
So they've been selling a product called "Full Self Driving" since 2016, and also claiming along the way that the current hardware in the car is capable of "Full Self Driving" in the future, pending safety testing and regulatory approval.
They also have a marketing video that says "The person in the driver's seat is only there for legal reasons. The car is driving itself" that has also been up since 2016:
https://www.tesla.com/autopilot
> The HW3 situation was Tesla updating the hardware for people who both bought an older car and paid for the FSD "preorder" package
Also people who had initially opted for just enhanced autopilot on HW2.5 then later decided to upgrade to FSD.
> So they've been selling a product called "Full Self Driving" since 2016
They've been selling hardware capability (hence the HW3 upgrade for those who bought it) for when FSD eventually releases. To me it seems reasonably labelled that it's future functionality and not yet FSD.
Either way, FSD had not been released at the time, and as far as I can tell the allegations target regular autopilot, not being misled into preordering FSD capability.
Tesla has been reckless with their self driving claims and safety for far longer than FSD has been available to the public. The suggestion here is that getting hit for those earlier reckless claims might hopefully get them to be less deceptive going forward.
That won't happen. Tesla is in a complex legal situation where a reaction towards it is also a reaction towards SpaceX and Twitter, both of which parts of the US government are happy with.
Do you really think any fine this trial could create would ever be enough? Trials like this tend to spur political action... but my point is clear. Tesla will pay some rounding-error fine, and the US Gov will do nothing to stop it. This trial cannot possibly generate an actual smackdown without political intervention. This isn't the EU.
It’s also perfectly reasonable for Tesla to be held accountable for mistakes their AI makes.
A big part of a valuation of the company is that Tesla can continue to flout any rules and do whatever they want, while blaming anyone and anything in the way. Their valuation assumes that services like self driving will drive P&L.
If the company is held accountable, you have to factor in the cost of that. Not good for a stock as highly valued.
>Do you really think any fine this trial could create would ever be enough?
Certainly! American juries have been known to return multi-billion-dollar punitive damages when they're convinced a company has acted maliciously. Look at JNJ, which is on the hook for $9B for selling baby powder contaminated with asbestos.
… Wait, how does this work? Regulator slaps down a company for dodgy advertising, which… somehow has a negative impact on two unrelated companies? It’s (a) not clear to me what the mechanism of action is here and (b) unclear why a regulator should care even if there was one.
Note that even in cases where the company being punished was the company that governments cared about, regulators usually aren’t too shy about this. US and European governments are big Microsoft customers, say, but both US and EU regulators have punished MS for bad behaviour in the past (and, particularly in the EU case, forced it to change aspects of its business practices).
> "There are no self-driving cars on the road today," the company said.
Says the company that makes a product literally called "Full Self-Driving." What a bunch of fucking lowlife charlatans. I'm surprised they didn't sign it with a smiling poop emoji.
> the Autopilot system caused owner Micah Lee’s Model 3 to suddenly veer off a highway east of Los Angeles at 65 miles per hour, strike a palm tree and burst into flames, all in the span of seconds.
I only use Autopilot on motorways in the UK because I don’t trust it on single carriageways. Even on motorways, roadworks can scar the carriageway in such a way that temporary lines are still visible after the works are complete. With the right lighting I definitely see things that would confuse a vision-only system. Scary stuff.
> The 2019 crash, which has not been previously reported, killed Lee and seriously injured his two passengers, including a then-8-year old boy who was disemboweled.
Honest question: do you feel you can safely take back control of the car when driving on a motorway if the "auto pilot" feature makes a random decision you cannot predict?
Not the GP but empirically you can't unless you get rid of the majority of FSD's benefit. You'd need to already have both hands on the wheel, have a foot over the pedals, and be paying as much attention as if you were driving - likely more - to see things start to go haywire in that first split second. And, pray that your reaction time is better than whatever FSD is going to try to do.
Maybe that's how you're "supposed" to use FSD but realistically nobody does that. Teslas are not super common in my area but I have one friend who has one with Autopilot and uses it on his ~35 minute highway commute to work. He reads his email on his laptop next to him and doesn't look up at the road until his car gets off the highway. That's how 99% of people will use this.
> I have one friend who has one with Autopilot and uses it on his ~35 minute highway commute to work. He reads his email on his laptop next to him and doesn't look up at the road until his car gets off the highway.
Yes, more. This is a fundamental problem with these sorts of “computer does safety critical thing but can’t do it perfectly” things; you need someone whose job is to do _nothing_ most of the time, but have full awareness of what the machine is doing and be ready to stop it at a moment’s notice. Humans are, generally, quite _bad_ at doing mostly nothing but remaining super-alert.
I don't understand this "[reading] his email" thing. I can't look away from the road for more than a couple of seconds without the car alerting me to pay attention while FSD is enabled. It's currently both watching for attention (eyes) and nagging on the steering wheel. Autopilot at least does the steering wheel nag every 30-60s. Failing to acknowledge these causes the car to pull over and start blaring alarms in the cabin. I'm not sure how "99% of people" can use it the way you're describing, especially for FSD.
I've personally never felt like I couldn't take control in <1s.
I assume your car is newer. I have a 2018 3 and a 2023 X, and the X will nag me almost instantly if it thinks I'm looking at my phone. The 3 however happily chugs along.
Yes, once you apply enough force to the steering it disengages.
Annoyingly, to prove you are in control you must apply slight pressure on the steering wheel. I say annoyingly since it sometimes doesn’t register and you have to give more, which is counterintuitive if it is doing the right thing.
On single carriageways this is worse since you are effectively trying to steer the car into oncoming traffic or off the road to prove that you are watching. That’s one more reason I don’t use it on these types of roads.
All-in, it’s not really worth it tbh, and this is just the basic autopilot.
You can give it the same signal by moving the left volume knob up or down one click. It’s much safer than applying force to the wheel, which I agree is actively unsafe (the way Tesla handles it.)
It’s not just Autopilot or even FSD with issues like this, though. I have a 2021 Kia and Toyota that like to aggressively brake at odd times a few times a year. It just happened again in the past week in the Kia. Going 70+ MPH and having the brakes suddenly and unexpectedly applied while going around a corner is jarring regardless of how fast you take back control.
My hypothesis for the Kia is that direct sun on corners at speed might affect the vision and/or collision system[0].
Is an infrequent jolt a reasonable trade-off for the benefits? As an experienced driver I think so, but would my kids be able to handle it when they learn to drive?
> With the right lighting I definitely see things that would confuse a vision only system. Scary stuff.
Have you used the system in these situations?
There's been a ton of construction on the freeway (motorway) near me that's left several visible lane lines, and the car has always been dead set on the actual lane somehow. It still drives (acceleration, steering) like a drunk 10 year old going to the gas station for a Slurpee, but the actual lane recognition aspect of it has been impressive to me.
There have been a few times I’ve wrestled the car on the motorway for various reasons. I’m not sure what it would have really done, but it was starting to do things I didn’t like so I took over.
> drives (acceleration, steering) like a drunk 10 year old going to the gas station for a Slurpee
Pretty good description from my experience as an owner for 4 years.
It also had so much variability in behavior and regressions.
Sometimes it would be incredible and maintain lane even in confusing poorly marked lane lines.
Other times it would panic alert at every HOV on/off ramp for a 30 mile drive.
Sometimes both of these things in the same drive!
I had to look this up: Motorways in the UK have hard shoulders, and several categories of traffic are not allowed: learners, slow vehicles, pedestrians, and bicycles. They typically do not have traffic lights, and the right (outermost) lane is only for passing.
Yes, basically at least one lane either side of you in normal driving unless someone is overtaking, or you are. I like the extra reaction time this gives.
Fuck what it says on the order page. They sell a car promoted as being full self driving capable at some point, yet have loads of different hardware revisions and sensor packages, none of which delivered it and they still have the audacity to charge people for being experimented on now and tell them that it's round the corner.
This is a con. Stop being another damn apologist for people being ripped off.
Is this about Autopilot or Full Self Driving? The article is talking about accidents in 2019. But then it says:
> Self-driving capability is central to Tesla’s financial future, according to Musk, whose own reputation as an engineering leader is being challenged with allegations by plaintiffs in one of two lawsuits that he personally leads the group behind technology that failed. Wins by Tesla could raise confidence and sales for the software, which costs up to $15,000 per vehicle.
The $15k software is Full Self Driving, which is still in beta and has very clear customer warnings and consent screens before using.
But the initial versions of this beta software wasn't even released for consumers until October 2020, a year after the crashes.
The internet hive mind is the PR team. They’ll flood the zone with FUD better than any PR group because they aren’t accountable.
It’s almost as bad as gun people arguing about gun minutiae. Was it autopilot? enhanced autopilot? Vaporware FSD that was sold but people didn’t realize does not exist? Real FSD that is in “beta”?
Just like how reporting of some dude shooting up a nursery school will get derailed about mansplaining semi-automatic vs burst fire, anything Tesla must dissemble through the various bullshit.
Both accidents happened prior to the release of FSD so were using Enhanced Autopilot. However, in 2019 Tesla had stopped selling the EA add-on and only sold FSD. They started selling EA again in 2022.
The claimants here could well have bought an FSD package even though they only got access to EA features. Either way, a loss here that shows Tesla knowingly lied about safety features or negligently didn't fix known problems would probably hurt FSD sales, even if the specific judgement only relates to EA.
> The $15k software is Full Self Driving, which is still in beta and has very clear customer warnings and consent screens before using.
The price makes this so much worse. If it were free and plastered in warnings, people probably wouldn't trust it. But because people first pay $15k for it, they now have 15 thousand reasons to want it to work and want to trust it. Paying for it creates a strong cognitive bias for disregarding any warning.
Early buyers, I believe, got Autopilot for free in some versions (I know for sure the X owners did), as well as unlimited charging. Still, even for those who paid for it, $15K isn't a fortune for everyone, particularly those throwing $80-100K on a car.
AP is not FSD, and aside from some early years it has been a standard no-extra-charge option on every Tesla produced.
And 15K may not be a fortune for some people, but wealthy people don't get that way by being frivolous with their money and a 15-35% bump in the cost of a car is definitely going to be noticed.
I hope Elon is personally held responsible for this death.
It is crystal clear that Autopilot, Full Self Driving, and other marketing terms are deceptive and dangerous, yet led countless people to spend five figures each enriching the now wealthiest person.
Musk-aside, I really want someone looking into what happened with the regulators. Yes, plural. From the FTC for allowing the marketing claims, to the NHTSA for allowing it on roads, plus whoever I may be missing.
There has to have been some financial incentive for key regulatory personnel to look the other way so frequently. I don't mean Elon dropping off bags of cash, I mean something as simple as them having Tesla shares, but not being forced to disclose this or recuse themselves from any decision or case relating to Tesla.
I think the following excerpts make a nice juxtaposition:
> the Autopilot system caused owner Micah Lee’s Model 3 to suddenly veer off a highway east of Los Angeles at 65 miles per hour, strike a palm tree and burst into flames, all in the span of seconds. The 2019 crash, which has not been previously reported, killed Lee and seriously injured his two passengers, including a then-8-year old boy who was disemboweled.
> Wins by Tesla could raise confidence and sales for the software, which costs up to $15,000 per vehicle.
A new car built by my company leaves somewhere traveling at 60 mph. The FSD mistakes a firetruck for an empty lane and drives full-speed into the back of it. The car crashes and burns with everyone trapped inside. Now, should we stop taking money for FSD? Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the windfall we get for selling people something called "Full Self-Driving" that isn’t actually self-driving, we keep taking money for it.
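To make that back-of-the-envelope logic concrete, here's a minimal sketch in Python with entirely made-up numbers (fleet size, failure rate, settlement amount, and take rate are all assumptions for illustration; only the $15,000 price comes from the article):

    # Hypothetical recall-math sketch; every number below is invented for illustration.
    fleet_size = 400_000          # A: vehicles with the feature in the field (assumed)
    failure_rate = 0.00005        # B: probable rate of a serious failure per vehicle (assumed)
    avg_settlement = 5_000_000    # C: average out-of-court settlement, in dollars (assumed)
    price_per_unit = 15_000       # price of the software, per the article
    take_rate = 0.25              # fraction of buyers who pay for it (assumed)

    expected_liability = fleet_size * failure_rate * avg_settlement   # X = A * B * C
    expected_revenue = fleet_size * take_rate * price_per_unit        # the "windfall"

    # The cynical decision rule described above:
    keep_selling = expected_liability < expected_revenue
    print(f"liability ~ ${expected_liability:,.0f}, revenue ~ ${expected_revenue:,.0f}, keep selling: {keep_selling}")

With these assumed numbers the rule comes out to "keep selling", which is exactly the cynicism the comment is parodying.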
I don't dispute the calculus there, but that's not quite what I read in the passage from the article.
In this case it's the consumer that's made to look ridiculous:-
You can buy the car and get wrapped around a tree. But if the courts don't think Musk should pay you damages for that, then somehow you're more likely to buy the car.
> the company has argued that Lee consumed alcohol before getting behind the wheel and that it is not clear whether Autopilot was on at the time of crash.
Given that this case hasn’t been previously reported, I don’t think anyone can claim to know that Tesla’s version of the story is false. Presumably if they’re confident in it, Tesla won’t agree to a last minute settlement, and we’ll see what evidence comes out in the trial.
> Wins by Tesla could raise confidence and sales for the software, which costs up to $15,000 per vehicle.
If anything, I would expect the opposite: if the manufacturer were found liable in a few of these cases but continued to sell the thing, that would make it a more attractive product (Mercedes, say, has a limited self-driving system where they take on some responsibility up front).
Until liability is on the manufacturer, I'm really not interested in self-driving cars.
Currently, when you use "self-driving", you need to pay more attention, not less, because the system adds a layer of unpredictability to your driving that you wouldn't have to account for if you were just driving yourself in the first place.
Don't get me wrong, I'd love to have a car I could tell to go to the shop and get serviced, or come to the airport to pick me up. Do I think that could happen? Maybe sooner with infrastructure support (but that'll be expensive), so realistically I'm pessimistic.
At this point we have a car company that puts out vehicles with poor fit and finish, terrible customer service, maximum data collection, and now arguably lethal $10k upgrades?
Why is this the future? Can we make it not the future? If the benefit of switching to electric is that high and the bar seemingly low (given the above), then shouldn’t we be seeing way more fully electric vehicle companies and models?
Something I’m not seeing yet in these comments is a discussion of the other automakers’ implementation of the same features.
Nearly every other luxury automaker has an automatic steering and lane change feature as well.
I think that other automakers will be hit with similar lawsuits regardless of what the driver assist system is called, because it will be easy to argue that the driver assistance systems assume some percentage of responsibility while active, and because you don’t have to win a lawsuit to reach a favorable settlement.
Tesla may pay out more than other automakers due to its optimistic marketing.
> Something I’m not seeing yet in these comments is a discussion of the other automakers’ implementation of the same features.
Because these threads always become Tesla hate instead of an objective analysis.
All of these lane-centering ADAS (including Autopilot) will kill you if you leave them unattended long enough (usually by taking a tight off-ramp at an unsafe speed). This is very evident to people who actually use these systems.
And there's nothing wrong with that; it's an L2 system. Cruise control will also kill you. These systems have been on the road for a long while and people aren't dying left and right because of them; they are empirically reasonably safe. They're probably a net positive, as they likely drastically reduce rear-end collisions.
People just get riled up at one-off accidents because they make the news.
I hope Tesla pays lots in damages and then keeps improving their systems. If they’re truly below the rate of human error when segmented by scenario there’s no reason for a ban.
I also wish they’d knock it off with purposefully misleading marketing. They have a decent product but for whatever reason their CEO feels like compulsively lying about it.
It will just not happen as long as self-driving cars and humans share the same roads. AI will never be better than an average human driver at avoiding a child who accidentally steps into the road.
If we want self-driving cars to become a reality, we should instead build separate roads and let them drive there: some analogue of railroads, which are used exclusively by trains.
> AI will never be better than an average human driver at avoiding a child who accidentally steps into the road.
What makes you so confident?
I'd argue for the reverse-- there is absolutely no way that humans are able to compete long term, because human reaction time and focus/attention are inherently limited-- as soon as computer vision starts to outperform us, humans will NEVER be competitively safe drivers again (could be 5 years, could be 50, who knows, but this seems EXCLUSIVELY a matter of time, especially considering the progress we've made in the last few decades).
You misunderstood my point. I'm not saying that humans are always "better", just that humans are better in a mixed environment. On controlled roads without humans, AI can clearly be better even today.
I don't see how the mixedness of the environment changes anything; sure, a mixed-driver environment is harder to navigate because traffic participants are less predictable, but that just slightly delays the point at which AI will simply outperform even the best humans, right?
> AI will never be better than an average human driver at avoiding a child who accidentally steps into the road.
Very hard to predict what will be possible with AI in 10 years time, but I wouldn’t bet on humans here.
Humans have a number of disadvantages compared to an AI agent in this scenario - they look at phones, get tired, technically have slower response times…
And then AI has seen tremendous advancement in the last 3 years with it basically smashing a load of tasks that were thought to be entirely ‘non-automatable’ by many people.
Plus think about what has been achieved in the self driving space in the last 10 years and then imagine what can be done in the next 20. It’s probably not a 1-2 year thing (imo), it’s probably a 10-20 year thing (but again totally unpredictable! Maybe 5 maybe 50!).
And self-driving that doesn't operate alongside human drivers just won't happen, from an infrastructure perspective.
I'm not sure you have to wait a few years. There is a certain something about calling self-driving cars a future dream when they are working for real 24/7 in SF.
Never say never, but I do agree that developing self-driving AI by teaching it to mimic human drivers on regular roads, rather than building a fully closed system and slowly expanding it, was a mistake.
Had the problem been tackled more intelligently, we could have had closed-off "self-driving lanes" or entire thoroughfares in major cities, with advanced sensors built in and no interaction with humans at all. Considering how many hundreds of billions of dollars have been invested in the problem, we would definitely have had more to show for it by now.
If you have deep enough pockets and you're worried the verdict is going to be very expensive and probably not in your favor, there are lots of ways to stall the forward progress of the trial.
I'm not a lawyer and not well versed in the technical legal details here, but marketing something as Full Self Driving and then saying "oh, actually it's not" when something bad happens is so shameful and so bullshitty that I'll be very discouraged if Tesla doesn't face some kind of consequences for it.
This may be a slightly weird thing to say, and maybe I won't put it very eloquently, but I'm somewhat of a believer in taking risks that can endanger lives now if it means doing something that is a net positive for everyone later. I'm no great historian, but I think the aviation industry is a good example of this. Basically every plane crash that resulted in passengers dying has led to changes, and very often improvements, in air safety, whether in the mechanical, software, or human systems that govern aviation. Globally these things are taken quite seriously because everyone understands how important they are. So it's odd to me to see such cavalier attitudes when it comes to self-driving cars. Why not take safety more seriously? It would earn the industry a ton of trust and it could save lives.
Air travel and sea travel did not get safer because some companies got together and decided to spend a little money to advance society. Ships sank and planes crashed frequently.
Things got safer because governments took action. Look at the 737 MAX. The cause of the crashes wouldn't have been investigated and Boeing wouldn't have fixed their plane if governments hadn't taken action.
At scale, companies do not value human life, so regulations add costs to change the incentives. Otherwise human life is just another tragedy of the commons.
I don't think that's a totally fair characterization of the history of air safety but I do mostly agree with your point. Government has to take action and right now they're just permitting anything... or that's what it looks like.
> The 2019 crash, which has not been previously reported, killed Lee and seriously injured his two passengers, including a then-8-year old boy who was disemboweled. The lawsuit, filed against Tesla by the passengers and Lee's estate, accuses Tesla of knowing that Autopilot and other safety systems were defective when it sold the car.
Do we know anything about the likelihood of criminal charges over that crash?
As someone who has FSD Beta, the improvements lately have been amazing. Driving my other non-FSD car is getting more and more annoying.
If you extrapolate the progress out around 5 years, I see a world where car insurance companies give you a huge discount for using self-driving. Maybe insurers will offer a lower deductible for crashes that happen during self-driving and a higher deductible for crashes during manual driving.
I see FSD getting so good in 10 years that the government would mandate it, because it would eliminate a good portion of the 40k traffic deaths a year.
Tesla self driving is only level 2 and they explicitly say that the driver must be fully attentive and have hands on the wheel at all times. So no matter how the courts rule here, it'll be about wording at best. A much more interesting case will come once Mercedes produces its inevitable first fatality with their level 3 system. Rather unlikely for now due to lots of factors (only works up to 40km/h, only on certain roads and much fewer cars sold), but this will set the stage for how far manufacturer liability actually goes for self driving.
> So no matter how the courts rule here, it'll be about wording at best.
That describes just about every civil court case in the world. The law is all about interpreting the meaning of words. Whether Tesla misled consumers into thinking their car could do more than it was capable of is a very important ruling, and will set a strong precedent for the entire self driving industry.