
What I've heard out of Elon and engineers on the team is that some of these variations of sensors create ambiguity, especially around faults. So if you have a camera and a radar sensor and they're providing conflicting information, it's much harder to tell which is correct compared to just having redundant camera sensors.
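
To make the fault-attribution point concrete, here is a toy sketch (Python; the sensor names, readings, and threshold are all made up). With two different sensors a conflict is detectable but not attributable, while three redundant sensors of the same kind let a simple vote isolate the bad one:

    from statistics import median

    DISAGREE_M = 2.0  # hypothetical threshold: readings further apart than this conflict

    def find_fault(readings):
        """Return the sensor that disagrees with the consensus, or None if ambiguous."""
        values = list(readings.values())
        if len(values) < 3:
            # Two heterogeneous sensors (e.g. camera vs radar): we can see
            # that they conflict, but we cannot say which one is at fault.
            return None
        consensus = median(values)
        outliers = [name for name, v in readings.items()
                    if abs(v - consensus) > DISAGREE_M]
        return outliers[0] if len(outliers) == 1 else None

    # Camera vs radar range to the lead car: conflict detected, attribution ambiguous.
    print(find_fault({"camera": 38.0, "radar": 52.0}))                         # None
    # Three redundant cameras: the median vote isolates the faulty one.
    print(find_fault({"cam_left": 38.0, "cam_mid": 37.5, "cam_right": 52.0}))  # cam_right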

I will also add from my personal experience: while some filters work best together (like IMU/GNSS), we usually used either lidar or camera, not both. Part of the reason was that combining them started requiring a lot more overhead and cross-sensor experts, and it took away from the actual problems we were trying to solve. While I suppose one could argue this is a cost issue (just hire more engineers!), I do think there's value in simplifying your tech stack whenever possible. The fewer independent parts you have, the faster you can move and the more people can become experts on one thing.
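
On the IMU/GNSS point, here is a toy 1-D sketch of why those two complement each other: the IMU is high-rate but drifts, the GNSS fix is absolute but slow and noisy, and a simple blend corrects each with the other. All rates, noise figures, and the gain are invented for illustration, not taken from any real filter:

    import random

    DT = 0.01          # 100 Hz IMU samples (made-up rate)
    GNSS_EVERY = 100   # one GNSS fix per second
    GAIN = 0.2         # how strongly a GNSS fix pulls the dead-reckoned estimate

    def fuse(imu_accels, gnss_positions):
        pos, vel = 0.0, 0.0
        for i, a in enumerate(imu_accels):
            # Predict: dead-reckon from the IMU (accumulates bias/drift).
            vel += a * DT
            pos += vel * DT
            # Correct: when a GNSS fix arrives, nudge the estimate toward it.
            if i % GNSS_EVERY == 0:
                fix = gnss_positions[i // GNSS_EVERY]
                pos += GAIN * (fix - pos)
        return pos

    # Constant 1 m/s^2 acceleration, IMU with a constant bias, noisy GNSS fixes.
    true_pos = [0.5 * (k * DT) ** 2 for k in range(1000)]
    imu = [1.0 + 0.05] * 1000
    gnss = [true_pos[k * GNSS_EVERY] + random.gauss(0, 0.5) for k in range(10)]
    print("fused:", round(fuse(imu, gnss), 2), "true:", round(true_pos[-1], 2))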

Again Waymo's lead suggests this logic might be wrong but I think there is a solid engineering defense for moving towards just computer vision. Cameras are by far the best sensor, and there are tangible benefits other than just cost.

Counterpoint: Waymo

> solid engineering defense for moving towards just computer vision

COUNTERPOINT: WAYMO


From my previous comment, in case you didn't see it

> Again Waymo's lead suggests this logic might be wrong but I think there is a solid engineering defense for moving towards just computer vision. Cameras are by far the best sensor, and there are tangible benefits other than just cost.


We don't know enough about the internals of either of them to make a judgement. The only one open enough to judge is Comma.ai.

Waymo just reached 20 million public unsupervised rides. When will it be validated enough for Tesla fanboys? (Answer: never)

Fanboy or not, we don't know how much Waymo's model relies on an army of contractors labeling every stop light, sign, and trash can so that, sure, they can be detected with LIDAR rather than cameras. We also don't know much about Tesla's Robotaxi initiative and how much human help they're relying on either.

First of all, their approach does not rely on map-level labeling at runtime. They do that for training, but so does every other player. High-precision maps are used as a form of GPS in their solution; they also use them to detect deltas when road conditions change and alert ops (a rough sketch of that idea follows below).

Second of all, they're using cameras to detect things like signs and trash cans! I don't know where this misconception came from that Waymo only uses lidar, but it demonstrates a lack of familiarity with the problem space and the solutions.
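
To illustrate the map-delta idea in the most general terms (this is not Waymo's pipeline, just the shape of "compare live detections against a prior map and flag what moved"; the feature names and tolerance are invented):

    from math import dist

    TOLERANCE_M = 0.5  # hypothetical: how far a mapped feature may drift before we alert ops

    def map_deltas(prior_map, live_detections):
        """List differences between the stored HD map and what the car currently sees."""
        deltas = []
        for feature, mapped_xy in prior_map.items():
            seen_xy = live_detections.get(feature)
            if seen_xy is None:
                deltas.append(f"{feature}: in map but not observed (occluded or removed?)")
            elif dist(mapped_xy, seen_xy) > TOLERANCE_M:
                deltas.append(f"{feature}: moved {dist(mapped_xy, seen_xy):.1f} m from mapped position")
        return deltas

    prior = {"stop_sign_17": (10.0, 2.0), "lane_edge_3": (0.0, 1.8)}
    live = {"stop_sign_17": (10.1, 2.0), "lane_edge_3": (0.0, 3.5)}  # lane shifted, e.g. by cones
    for d in map_deltas(prior, live):
        print(d)  # -> "lane_edge_3: moved 1.7 m from mapped position"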


Your logic is correct; however, these challenges can be solved, and then you get synergy effects from using different sensors.

I don’t understand how running into difficulties when trying to solve a problem can be interpreted as “[taking] away from the actual problem”.

In our case, if we're spending a lot of time on something that doesn't improve the product, it just takes away from the product. For example, if we put 800 engineering hours into sensor fusion and lidar and the end product doesn't become materially better, we could have put those 800 hours towards something that does make the end product better.

It's not that we ran into problems; it's that the tech didn't deliver what we hoped for, when we could have used the time to build something better.



