
Like the general trolley problem, this formulation of the self-driving car problem assumes some agents are passive. The trolley problem assumes nobody is trying to rescue the potential victims. This formulation of the self-driving car problem assumes the school bus and other vehicles are passive.

Thus the conventional trolley problem doesn't really do the problem of self-driving car decision trees justice. The school bus isn't a dumb trolley. When your car is heading toward the school bus, the school bus will also be running down its decision tree, and the best outcome for its 15 passengers may be to take you out directly rather than run the risk of miscalculating what will happen if your car dives into the guardrail. Which is to say, statistical predictions bring confidence intervals into play.
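
A toy sketch of that last point (the Action class, the numbers, and the worst-case scoring rule are all hypothetical, a sketch rather than any real planner): if the bus plans against the pessimistic edge of each estimate's confidence interval rather than the mean, a direct collision with a tight interval can beat a nominally better maneuver whose outcome it can't predict well:

    # Toy sketch: a planner scoring actions by expected harm plus
    # uncertainty. All numbers and action names are made up.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        mean_harm: float     # expected casualties if taken
        ci_halfwidth: float  # how uncertain that estimate is

        def upper_bound(self) -> float:
            # Plan against the pessimistic edge of the interval,
            # not the mean alone.
            return self.mean_harm + self.ci_halfwidth

    bus_options = [
        Action("brake, trust the car to dive away", 1.0, 3.0),
        Action("strike the oncoming car directly", 2.0, 0.5),
    ]

    # On means alone the bus would brake; on the interval's upper
    # edge it takes the car out, exactly the trade described above.
    print(min(bus_options, key=Action.upper_bound).name)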

I believe this is why trolley problems in general reveal more about their formulation than about our ethical reasoning. We will throw a switch to shunt the trolley because the bad outcome remains in the future and the possibility of changing circumstances remains very real. We know from experience that any of our predictions may be fallible, and that the more temporally remote the event, the more fallible our prediction. Pushing the fat man off the bridge elicits a different reaction: the outcome is immediate and our prediction less fallible.
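
To put a number on the temporal point (purely illustrative; forecast_stddev and its geometric growth rate are assumptions, not measured quantities): if each causal step between act and outcome multiplies the spread of our forecast, throwing the switch several steps ahead of impact is a bet on a wide distribution, while pushing the man is a one-step prediction with almost no spread:

    # Illustrative only: per-step error compounds, so predictions
    # about temporally remote outcomes carry wider intervals.
    def forecast_stddev(base_sigma: float, steps: int,
                        growth: float = 1.5) -> float:
        # Hypothetical model: uncertainty grows geometrically with
        # each step separating the act from the outcome.
        return base_sigma * growth ** steps

    print(forecast_stddev(0.1, steps=1))  # push the man:     0.15
    print(forecast_stddev(0.1, steps=6))  # throw the switch: ~1.14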


