
Can we please not overreact and send the whole field of self-driving research down the drain?

What do the statistics say? How many miles have the self-driving cars driven, and how many deaths were they responsible for? How does that compare to a human driver?

When a million humans each drive a car for a mile, and 1 of those miles results in a death, it's easy to pin the blame on a "random drunk/distracted driver". How about thinking of the self-driving software as a combination of all human drivers, with much lower odds of being a drunk/sun-blinded/distracted driver than the average human driver?



> What do the statistics say? How many miles have the self-driving cars driven, and how many deaths were they responsible for? How does that compare to a human driver?

As of December 2017 Uber had driven 2 million autonomous miles[1]. Let's be generous and double that, so 4 million.

The NHTSA reports a fatality rate (including pedestrians, cyclists, and drivers) of 1.25 deaths per 100 million miles[2]; 100 million miles is twenty-five times the distance Uber has driven.

You probably shouldn't extrapolate or infer anything from those two statistics; they're pretty meaningless because we don't have nearly enough data on self-driving cars. But since you asked the question, that's the benchmark: 1.25 deaths per 100 million miles.

[1]: https://www.forbes.com/sites/bizcarson/2017/12/22/ubers-self... [2]: https://en.wikipedia.org/wiki/Transportation_safety_in_the_U...
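
For what it's worth, here's the arithmetic behind that benchmark as a quick Python sketch. The 4 million mile figure is the generous doubling assumed above, not an official number:

    # Rough benchmark arithmetic, using the assumptions stated above:
    # 4 million autonomous miles for Uber and the NHTSA rate of
    # 1.25 deaths per 100 million miles.
    uber_miles = 4_000_000
    national_rate_per_mile = 1.25 / 100_000_000

    # 100 million miles is 25x the distance Uber has driven.
    print(100_000_000 / uber_miles)             # 25.0

    # Expected deaths over Uber's mileage if they matched the national rate.
    print(national_rate_per_mile * uber_miles)  # 0.05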


Scaling those numbers paints a poor picture for Uber. Assuming 3 million total miles autonomously driven thus far in Uber's program:

- Uber autonomous: 33.3 deaths per 100 million miles

- Waymo: 0 deaths per 100 million miles

- National average: 1.25 deaths per 100 million miles

Of course, the Uber and Waymo numbers are from a small sample size.
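
To make the extrapolation transparent (and with the caveat above about sample size), here is a minimal sketch of how the Uber figure is normalized. The 3 million mile number is the assumption stated in this comment, and Waymo is omitted because its mileage isn't pinned down here:

    # Normalizing Uber's single fatality to deaths per 100 million miles,
    # under the assumed 3 million autonomous miles. This is exactly the kind
    # of extrapolation cautioned against upthread; shown only for clarity.
    uber_deaths = 1
    uber_miles = 3_000_000
    national_rate = 1.25    # deaths per 100M miles (NHTSA)

    uber_rate = uber_deaths / uber_miles * 100_000_000
    print(round(uber_rate, 1))   # 33.3
    print(national_rate)         # 1.25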

But there's also the Bayesian prior that Uber has been grossly negligent and reckless in other aspects of their business, in addition to reports that their self-driving cars have had tons of other blatant issues, like running red lights.

It seems reasonably possible that an Uber self-driving car is about as safe as a drunk driver. DUIs send people to jail - what's the punishment for Uber?


Scaling those numbers is not useful and in fact reduces usefulness.

Comically, that’s why OP said not to do that.

Comparing dissimilar things is actually worse than not comparing at all since it will increase the likelihood of some decision resulting from the false comparison.


The goal is to use the best set of information available to us. I merely cited the normalized numbers because the same question has been asked various times in this thread - questions along the lines of "how does this rate compare with human drivers?"

The purpose of the extrapolation was to get a (flawed) approximation to that answer. By itself, it doesn't say much, but all we can do is parse the data points available to us:

- Uber's death rate after approximately 3 million self-driven miles is significantly higher than the national average, and probably comparable to drunk drivers.

- Public reporting around Uber's self-driving program suggests a myriad of egregious issues - such as running red lights.

- The company has not obeyed self-driving regulations in the past, in part because they were unwilling to report "disengagements" to the public record.

- The company has a history of an outlier level of negligence and recklessness in other areas - for example, sexual harassment.


But this is precisely why you should not simply extrapolate. Of course people ask, and of course the answer would be useful. But extrapolating one figure of 3M miles to a typical measure (per 100M) is not useful because it provides no actionable information.

Providing this likely wrong number anchors a value in people’s minds.

It’s actually worse than saying “we don’t know the rate compared to human drivers because there’s not enough miles driven.”

Your other points are valid but don't excuse poor data hygiene.

Even now you are making a claim that is baseless on its face, because you don't know the human fatality rate per 3M miles well enough to say Uber's is "significantly higher." That said, I think it's feasible to find enough human-driver data to construct samples comparable to Uber's. But simply dividing by 33 is not sufficient to support your statement.

I haven't seen data on the public reporting. That seems interesting, and I would appreciate it if you could link to it.


> the self-driving car was, in fact, driving itself when it barreled through the red light, according to two Uber employees, who spoke on the condition of anonymity because they signed nondisclosure agreements with the company, and internal Uber documents viewed by The New York Times. All told, the mapping programs used by Uber’s cars failed to recognize six traffic lights in the San Francisco area. “In this case, the car went through a red light,” the documents said.

https://www.nytimes.com/2017/02/24/technology/anthony-levand...


It depends on what question you're trying to answer with the data (however incomplete one might view it).

Is the data sufficient to say whether Uber might eventually arrive at a usable self-driving vehicle? Plainly no. It's not sufficient to answer this question one way or the other.

Is the data sufficient to indicate whether Uber is responsible enough to operate an automated test vehicle program on public roads? Maybe.

There still needs to be an investigation of the cause, but if the cause is an autopilot failure, or a failure of the testing protocols to prevent a failing autopilot from harming the public, then the question is what the remedy should be.


I agree there should be an investigation.

I agree that you have to use data available to make the best decision possible.

There may be methods to account for all of the problems of comparing two different measures, but they require a lot of explanation.

But extrapolating one measure into another is wrong without those caveats. That's the comment I replied to. So in no situation would the method I replied to be useful for any reasonable question being asked.


I think it's very relevant. The question is whether the testing protocols are sufficient to keep avoidable accidents within an outer bound of accident rates. If this is a clear data point outside those bounds (even with uncertainty), one could make a case to severely limit or ban Uber's testing on public roads, and require that they demonstrate sufficient maturity of testing procedures and data before being allowed back onto the roads. This as opposed to waiting for another 'data point' (a death).


We absolutely should extrapolate something from those statistics.

Let's assume that the chance of killing someone in any two intervals of the same number of miles traveled is the same. Let's say that the threshold for self-driving cars being "good enough" is the same death rate as human drivers.

If we assume Uber is good enough, then they should kill people at a rate of at most 1.25/100,000,000 per mile. The waiting time until they first kill someone should follow an exponential distribution, so the probability that a death occurs within the first t miles is 1 - e^(-lambda x t), where lambda is the rate of killing people, 1.25/100,000,000. Plugging in t = 4,000,000 gives 1 - e^( -(1.25 / 100,000,000) x 4,000,000 ), which is about 0.049.

In other words, if Uber really were driving "safely enough", there would have been only about a 5% chance of a fatality this early. I think we can ask for better than a 5% chance of being the same as a human driver.

(I replaced stars with 'x' because HN was making things italic.)
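
As a quick sketch, the same calculation in Python, assuming the 4 million mile figure and the 1.25/100M benchmark used above:

    import math

    # Under the null hypothesis that Uber's fatality rate matches the national
    # benchmark, the wait until the first death is exponentially distributed:
    # P(death within t miles) = 1 - e^(-lambda * t).
    lam = 1.25 / 100_000_000    # fatalities per mile (national benchmark)
    t = 4_000_000               # Uber's assumed autonomous miles

    p = 1 - math.exp(-lam * t)
    print(p)                    # ~0.049, i.e. only about a 5% chance under the null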


Also, probably 95 percent of the autonomous miles are under the easiest conditions: a sunny day between 9 and 5, because most are logged at the Arizona/California test centers.


1.25 per 100 million miles is almost certainly a bad benchmark since the majority of those miles are interstate miles. Fatality rate per mile of urban driving would be much better, although I'm not really sure whether I would expect that number to be higher or lower.

Edit: Actually, maybe I'm wrong in assuming (a) the majority of miles driven are interstate miles, or (b) that the majority of existing miles logged by self-driving cars have not been on the interstate. Would love to see some data if anyone has it, although I suspect Google, Uber, et al. are reluctant to share data at this point.


If the accident had happened with the autonomous vehicle of ANY company, we would still be talking about this and estimating the number of deaths per 100 million miles.

Therefore, I think it would be more fair to consider all miles run by all autonomous vehicles all over the world in the denominator.

It is for the same reason that we want to consider all miles driven everywhere, not just those in Arizona.


> As of December 2017 Uber had driven 2 million autonomous miles[1]. Let's be generous and double that, so 4 million.

How often did the human have to take control?


Note that the statistics we have to work with are relatively terrible. For example: Waymo's favorite statistic is "x miles driven", which is a terrible/useless statistic because it treats all miles equally, fails to note that the most complex driving often happens over short distances (intersections, merges, etc.), and doesn't account for the fact that most of those miles came from repeatedly driving a very small number of roads. But it looks good in marketing copy because it's a big number.

Additionally, the self-driving car statistics we see today also tend to ignore the presence of test drivers and how frequently they intervene. As long as they can intervene, the safety record of self-driving cars is being inflated by the fact that there's a second decisionmaker in the vehicle.

EDIT: Oh, and human driving statistics are also bad, because a lot of accidents don't even get reported, and when they do, much of the reporting goes through different insurance companies. That's before we get into the fact that nobody centrally tracks "miles driven", which is why most statistics for human driving safety are more or less an educated guess.


I think it’s fine to attribute miles to autonomous vehicles which have drivers that can intervene... as long as that is how they are used outside of tests as well.

Just a guess, but I doubt having a test driver that can intervene will help safety statistics for autonomous vehicles much. I think we’ll find test drivers will usually be unable to notice and react fast enough when they are needed.


My biggest concern is that test drivers can and have done the 'hard parts' of a lot of test routes, making the overall miles driven statistic kind of useless as a representation of the driving ability of the car.

But yeah, I'd agree there's a lot of difficulties expecting a test driver to immediately take over in a sudden event like a bike entering the roadway.


I agree the statistics we have are pretty terrible. However, in this case, I think Waymo's statistic is actually quite useful. It's likely that Waymo's x-miles-driven statistic is largely driven by the fact that Waymo has tested their cars on a small number of roads, in fairly safe settings. But that paints Uber in an even worse light. Waymo is supposedly ahead of, or at least on par with, Uber in self-driving technology, and they have chosen to limit their testing and driving to a smaller number of safer roads. Uber has not. That seems to underscore the fact that Uber has pushed beyond their tech's capabilities even though their competitors have determined the tech isn't there yet.

Also, if someone had posted a poll a day ago as to which company's self driving cars were likely to be the first to kill somebody, I think the vast majority of people would have predicted Uber. I don't think that's a coincidence.


Note that Waymo has already been the loudest about how they shouldn't be forced to have a steering wheel in their cars, and that they're also massively ramping up in Arizona because of the nonexistent regulations there. In Arizona, Waymo won't be forced to disclose statistics on disengagements, for example, which California does require them to hand over.


Waymo is already using autonomous vehicles in AZ without a safety driver behind the wheel. There are fully autonomous Waymo vehicles driving around in parts of AZ.


Which is why I am only a little bit surprised Uber beat Waymo to killing a pedestrian. Waymo is way too arrogant about its capabilities, moving way faster than is reasonable or safe, and they use misleading statistics to insinuate their system is more capable than it is.

Note that they already know their cars will need help driving still, which is why they've assembled a call center for remote drivers to take over their cars in Arizona. Of course, those remote drivers likely can't intervene at nearly the speed of an onboard safety driver.


We don't really have any statistics on autonomous car deaths yet. One fatality is only one data point; that's not _nearly_ enough information to come to any solid conclusion on the overall safety of the technology. (Not to mention the fact that a failure of any one particular implementation of self-driving car tech doesn't necessarily mean the other implementations are similarly unsafe.)


You know how sometimes you get a gut feeling or your awareness picks up somehow?

You experience it sometimes when you just happen to glance in the right direction, as a driver, and keep something horrible from happening. Or when you hesitate to go through an intersection when the light turns green and someone is running the red. You and your body somehow knew, but it can't be explained.

Computers don't have that, whatever it is.


The thing you're describing is coincidence/confirmation bias, not a real phenomenon.


I completely agree with you, but also have an anecdote that argues for the parent's view.

I was driving a long roadtrip from Texas to California, split into a couple segments over several days. At one point, my adrenaline and heart rate suddenly spiked. I felt freaked out but could not see a reason for it. I checked all my mirrors, traffic was busy but seemed to be moving along normally. A few moments later my vehicle was rocked by a semi-truck blowing past, traveling much faster than surrounding traffic, and missing my vehicle by what seemed like an inch.

The roadway was curved slightly, so I think the semi was in a blind spot when I was actively searching for the problem.

It's interesting that a subconscious process could alert me to a problem; in this case it didn't help me resolve it, but at least I was alert and looking. It had never happened before, so there was a bit of confusion as well (why am I suddenly freaking out?) - but now I know to pay attention if that feeling happens again.


Except that, in your example, an autonomous vehicle would already have tracked the truck (they can track nearly a mile in 360 degrees) and would have no need for that panic response. The OP is kinda pointless.


Maybe you haven't had it happen then to know what I'm talking about. Never had a gut instinct or a bad feeling that becomes true?

Sure, maybe it sometimes happens when there is nothing really going on, but my point is that there is something about our subconscious that cannot be implemented with computers.


Part of what he's describing is coincidence/confirmation bias, but another part of it is probably some sort of cognition that happens at a level of the mind the driver is not conscious of.


The statistics may prove meaningless. What matters is the emotional reaction of our legislators. Take the recent legislation prompted by a single dog killed by United Airlines (my condolences to the family members/owners of that dog) vs. deaths via [insert any other thing that has caused multiple deaths and is currently under-regulated].


No, the time to overreact is now, before millions of these get on the road.

Actually, the time to "overreact" was even earlier, but most self-driving car companies ignored any criticism. So now you have stupid car companies cutting corners and killing people so they can be "first" to market or whatever.


What if you already thought from the beginning that cars, including human-driven ones, and their infrastructure, were always a huge boondoggle (the grossest misallocation of resources in human history, as Jim Kunstler calls it), and an instrument that posits itself as the solution to a problem it caused (not unlike an addictive drug), and that meanwhile serves chiefly to centralize wealth and rend the social fabric?

Please petulantly downvote all minority opinions!



