
My personal hunch is that fully autonomous self-driving tech is not theoretically possible under currently known computational models, because it embeds many well-known NP-hard problems. Self-driving companies are betting on finding a heuristic/approximation that works "sufficiently well". But I strongly suspect that the chasm to be crossed to reach "sufficiently good" is not one of magnitude (i.e. we just need more testing!) but of theoretical boundaries, due to the existence of at least two sub-problems which are not computationally tractable: 1. predicting what pedestrians/cyclists will do next, and 2. accounting for sensor input distortion under bad weather conditions.
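
To make concrete what kind of heuristic I mean for (1): a constant-velocity extrapolation of a tracked pedestrian is trivially computable, and it is exactly the sort of approximation that breaks the moment the pedestrian changes intent (a toy sketch, everything in it is illustrative):

    import numpy as np

    def predict_pedestrian(pos, vel, horizon_s=2.0, dt=0.1):
        # Constant-velocity extrapolation of a tracked pedestrian.
        # pos, vel: 2D position (m) and velocity (m/s) from the tracker.
        # Returns one predicted (x, y) per dt step: a crude heuristic
        # that knows nothing about intent, crosswalks, or eye contact.
        steps = int(horizon_s / dt)
        t = np.arange(1, steps + 1)[:, None] * dt
        return np.asarray(pos) + t * np.asarray(vel)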

Humans can solve these problems due to life experience, not just driving experience. In other words, I think we're gonna need fully-conscious AI to solve self-driving.

The only way self-driving tech will reach production is if the input space is restricted, which is a significant-but-not-groundbreaking iteration on what we've been doing for decades with airplane autopilots and self-driving monorails. Sure, we can have self-driving cars on specifically designed freeways, but nothing more.



Do you believe human-driven tech is theoretically possible?

Current experiments and production data from human-controlled vehicles have not been encouraging.


I added an edit to my comment: We don't have a computational/theoretical model for human consciousness. This is why it's called The Hard Problem of Consciousness.


That isn't the hard problem. That is simply having a theory of mind. The hard problem is understanding how the physical world gives rise to subjective experience (what causes conscious beings not to be philosophical zombies).


Oops, you're right! I've edited my original comment, thanks.


This is not a really strong argument against self-driving cars. The fact that a problem is NP-hard doesn't make it intractable in practice. Every day we use apps that deal with NP problems (e.g., routing problems, packing problems, etc.). Also, note that there are problems in P whose (large) instances can be harder in practice than (smaller) instances of NP-hard problems.
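
For example, a nearest-neighbour heuristic for the NP-hard travelling salesman problem is a few lines of code and produces usable tours, which is the same trade-off routing apps make every day (a toy sketch; the function name is mine):

    import math

    def nearest_neighbor_tour(points):
        # Greedy heuristic: always hop to the closest unvisited point.
        # No optimality guarantee, but fast and usually "good enough".
        unvisited = set(range(1, len(points)))
        tour = [0]
        while unvisited:
            last = points[tour[-1]]
            nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    print(nearest_neighbor_tour([(0, 0), (5, 1), (1, 1), (4, 4)]))  # [0, 2, 1, 3]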


That's basically what human driving is though, no? We consciously and unconsciously take our attention off the task all the time. It isn't possible to stay fully alert to your surroundings the whole time you drive; at least part of the time we are simply dead reckoning in fairly safe lanes of travel, at constant speed and direction, with minimal pedestrian and cross traffic. Call it "controlled chaos", "luck", or "planning", but there is some amount of unknown in moving a multi-thousand-pound object around, and as speed increases, so do the odds of a mistake, its severity, and the difficulty of correcting it optimally. Mapping the morality of these decisions onto automation is a very interesting and challenging problem.


There are problem classes, and there are problem instances.

What does NP-hardness look like for self-driving car tech? Non-deterministic polynomial in what: the number of objects? The number of lanes? The number of time steps in the planning horizon? The action/observation branching factor? These things are bounded in practice.

Not saying that the computational problems aren't hard. But ending the conversation at "NP-hard" throws away too many nuances.
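
Concretely: a receding-horizon planner that considers b candidate actions over a d-step horizon expands on the order of b^d nodes per tick, which is a fixed constant once b and d are fixed (the numbers below are made up for illustration):

    # Worst-case search-tree size for one planning tick.
    # branching and horizon are bounded by the stack's design, so this
    # is a (large) constant per tick, not a function of input size.
    branching = 7   # illustrative: discretized steering/accel choices
    horizon = 5     # illustrative: planning steps per tick
    print(branching ** horizon)  # 16807 node expansions per tick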


NP in the number of data points received from all its sensors.


Aren't self-driving cars already doing much better than humans on the safety front? That's the standard to beat, not perfection.


1. I seriously doubt self-driving cars will be viable on irregular roads

2. Who do you blame when a faulty algorithm eventually kills a person? Who do you get mad at? When a drunk driver kills your family member, you can go to court and look at their face and look at the faces of their family members. When that happens with a self-driving car you'll just be looking at corporate lawyers who will shrug their shoulders at you and say "lol sorry our dumb machine killed your daughter".


2. Why do you need hatred? Why not appreciate that every self-driving death would be used to improve safety for everyone else, just like what happens with plane crashes, building collapses, etc.? Also, you can't hate anyone if you kill yourself by accident. What if your loved one kills themselves by breaking a road rule?


I think the self-driving initiative is not about beating the human counterpart at launch, but about being at least on par with human performance. That by itself has huge value in efficiency and time saved.

Then, slowly, with many AI-driven vehicles on the road, we can optimize for better performance on safety and other issues.


What if "good enough" is just reduces deaths/injuries by 99%? 90%?

People would still get hurt and die directly due to software that cannot be perfected, but the gains to society as a whole might be worth it.
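
Back-of-the-envelope, using the commonly cited ballpark of roughly 40,000 US road deaths per year (a rough assumption here, purely to show the scale):

    # Rough scale check: deaths remaining at a given reduction rate.
    annual_us_road_deaths = 40_000  # assumed ballpark, not an exact statistic
    for reduction in (0.90, 0.99):
        remaining = annual_us_road_deaths * (1 - reduction)
        print(f"{reduction:.0%} reduction -> ~{remaining:,.0f} deaths/yr remain")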


> Sure, we can have self-driving cars on specifically designed freeways, but nothing more.

But at this point what makes it better than a train?


I'm not necessarily arguing for this, but one viewpoint might be: A vehicle that can autodrive the freeway (i.e. most of the trip) and revert to manual control for the shorter sections before and after the freeway would have a significant convenience benefit over a train.

Take the train and you have no vehicle with which to get from the destination station to your final endpoint. The car also lets you travel at any time instead of being held to a schedule.


Well, fair enough, I suppose.



