
The intermediate step (computer driving with human failsafe) is also incredibly challenging, because you are unsure of the computer's knowledge and intent (this is also what makes backseat drivers so annoying).

This computer drives much more conservatively than a human driver, so much so that I'm not sure a human failsafe driver would grok what is going on.

Waiting for a railway crossing to be completely clear before proceeding is not something most drivers do (ahem, especially at that particular intersection), and may seem like odd behavior to a failsafe driver behind the wheel. Similarly, waiting for a bicyclist coming from behind may also seem odd. For one thing, the failsafe driver may not even have that information, since he is less attentive in 'failsafe' mode... and many human drivers who did see the cyclist would proceed a bit more quickly to make the turn, getting out of traffic and freeing the intersection for the oncoming cyclist (and the cars behind him).

On the other hand, I probably would have pulled the plug on the computer driver when Mr. Indecisive-Cyclist was in the road. The computer handled it fine, but as the failsafe, how could I be sure it would?

This intermediate stage is going to get really weird; I don't even know how you'd manage liability in this world.



I'd actually go further. An autonomous driving system that works for routine driving pretty much has to be designed on the basis that the human "failsafe" will NOT be paying attention and will not be prepared to take over on short notice. Heck, enough people don't pay close attention to their driving today, even without self-driving cars.

The most obvious intermediate stage is designated sections of highways in which self-driving cars can operate without active human drivers. The question, though, is whether that's an interesting enough use case to push through all the legal/regulatory/etc. changes that would be required.


I think the failsafe-human laws are only useful for licensing research.

For day-to-day driving, cars should either be fully autonomous or have their automated systems limited to intervening in dangerous situations (like current brake-priming systems). If the driver is able to take (primary) control, they should be required to be in control.


Tricky. There may be situations where the car stops and says "I can't do this", at which point primary control is handed over to the driver. Think about the system being damaged or obstructed, or a protest rally starting in the street up ahead. The driver should be able to take over, maybe after a stop.

Given this "stop, your turn" option, IMHO it would be horrible to also deny the driver from taking over during normal driving conditions, seeing as though computers will certainly make mistakes. Allowing that option to exist, however, should not equate to the human 'driver' being liable for not taking control when the computer failed.

Basically, you should give passengers an emergency stop option and not make them liable for not using it.
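To make that concrete, here's a minimal sketch of the kind of state machine I have in mind (plain Python; the mode names and methods are made up, not taken from any real system): the car only hands over primary control after it has stopped, and the emergency stop is always available but never obligatory.

    from enum import Enum, auto

    class DriveMode(Enum):
        AUTONOMOUS = auto()        # computer has primary control
        STOPPED_HANDOVER = auto()  # car has stopped and offered control to the human
        MANUAL = auto()            # human has primary control

    class HandoverPolicy:
        """Toy model of the 'stop, your turn' idea described above."""

        def __init__(self):
            self.mode = DriveMode.AUTONOMOUS

        def computer_gives_up(self):
            # The car can't proceed (damaged sensors, blocked road, ...):
            # it stops first, then offers control to the human.
            if self.mode is DriveMode.AUTONOMOUS:
                self.mode = DriveMode.STOPPED_HANDOVER

        def human_takes_over(self):
            # The human may accept control only once the car has stopped;
            # they are never required to grab the wheel mid-drive.
            if self.mode is DriveMode.STOPPED_HANDOVER:
                self.mode = DriveMode.MANUAL

        def emergency_stop(self):
            # Passengers can always demand a stop, but declining to press
            # this button should carry no liability.
            if self.mode is DriveMode.AUTONOMOUS:
                self.mode = DriveMode.STOPPED_HANDOVER

The point of the sketch is just that there's no transition from AUTONOMOUS straight to MANUAL: the human never has to grab control of a moving car, and never owes anyone an intervention.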


My thinking about this is based on humans being very bad at handling situations where they don't really need to pay attention. I think if the system is only good enough that an attentive driver/passenger can use it well, then it needs to be made better before it is licensed for everyday use.

The stop-to-change-control thing makes sense to me.



