> At the current state of my model, the model basically just clones the human driver as well as possible. That means that the amount of brake is higher in curves
I read in another comment that you are still in high school, so maybe the above is because you do not have actual driving experience. But this is not how human drivers drive.
Human drivers brake before curves and usually accelerate through them. This improves stability.
This is something you may want to consider for your next iterations. In any case, congratulations on your impressive work!
While that is how you should do it most of the time (depending on how the radius, gradient, and your visual range change as you progress around the curve), my observation of brake lights suggests that this is rarely practiced in the USA.
The amount of braking that can be seen on multi-lane divided highways is far higher than would be necessary if drivers were paying due attention - or not resting their left foot on the brake pedal, as the case may be.
You are conflating effective driving habits with actual driving habits. I'd say less than 5% of drivers in the States properly brake before curves and accelerate through them. Hell, I'd be surprised if a model trained on human behavior DIDN'T leave its lane when navigating a turn.
good start op. a good next step is to add temporal context (previous-frame information) so the model can resolve ambiguous cases. see e.g. the breakdown of the comma.ai openpilot model here (from their twitter, [0]), or the karpathy talk as well.
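a minimal sketch of the simplest version of this, assuming a keras setup like op's (layer sizes here are placeholders, not openpilot's actual architecture): stack the last few frames along the channel axis so a single conv stack can pick up motion cues between frames.

```python
# hypothetical keras sketch: stack the current frame with the previous
# N-1 frames along the channel axis so the network sees motion, not
# just a single still image
from tensorflow import keras
from tensorflow.keras import layers

N_FRAMES = 4     # current frame + 3 previous ones (arbitrary choice)
H, W = 160, 320  # example input resolution, not op's actual one

inputs = keras.Input(shape=(H, W, 3 * N_FRAMES))  # RGB x N_FRAMES channels
x = layers.Conv2D(24, 5, strides=2, activation="elu")(inputs)
x = layers.Conv2D(36, 5, strides=2, activation="elu")(x)
x = layers.Conv2D(48, 5, strides=2, activation="elu")(x)
x = layers.Flatten()(x)
x = layers.Dense(100, activation="elu")(x)
steering = layers.Dense(1)(x)  # predicted steering angle

model = keras.Model(inputs, steering)
```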
also, when presenting results, prefer to include a longer demo or sub-demos that show both strengths and failure cases, and move them to the top of the post rather than the bottom imo. for a given reader, implementation details are either confusing and uninteresting (if not a subject-matter expert) or predictable and uninteresting (if an expert, you've seen many similar before); the audience for implementation details is the very small number of people who want to sit down and replicate or check your work. but demos / analysis are always novel and interesting to anyone, so lead with that :)
That medium link is awesome. I've been trying to find more links that dive into the inner workings of comma.ai's openpilot. Do you have any more similar articles you can recommend?
for specifics, would recommend simply reading the code. for overall context, geohot has done interviews ([0], [1]) that give an overview of comma/openpilot at a high level.
No, this is the only article about the comma.ai model. Many people in the Discord server, including me, tried to extract the weights from the SNPE model and carry them over to a Keras model.
Seems really scary to exclusively use a neural network for safety-critical tasks like this, without having an explicit method guaranteeing safety.
You can't prove that a trained neural network is always correct, and thus this is likely going to kill someone at some point.
I think you definitely need a LIDAR (or in general something that can give an accurate 3D map of all surroundings), plus some explicitly written code that can be proven never to hit a car coming the opposite direction in the opposite lane (provided it stays in its lane), never to hit stationary obstacles, and never to go off the side of a mountain road.
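To make that concrete, here is a minimal sketch of such an explicit safety layer (every name and number here is hypothetical, just to illustrate the shape of the idea): the network proposes controls, but simple, auditable code gets the final say.

```python
# Hypothetical sketch: explicit, checkable override logic sitting on top
# of a neural network's throttle/brake outputs.

def braking_distance(speed_mps: float, decel_mps2: float = 6.0,
                     reaction_s: float = 0.2) -> float:
    """Worst-case distance needed to stop from the current speed."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

def safe_controls(nn_throttle: float, nn_brake: float,
                  speed_mps: float, nearest_obstacle_m: float):
    """Override the network whenever a ranging sensor (e.g. LIDAR)
    reports an obstacle closer than the stopping distance plus a margin."""
    if nearest_obstacle_m < braking_distance(speed_mps) + 2.0:
        return 0.0, 1.0           # cut throttle, brake fully
    return nn_throttle, nn_brake  # otherwise pass the model's outputs through
```

Reasoning about a dozen lines like these is tractable in a way that reasoning about millions of learned weights is not.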
My cat's collision avoidance can be both outstanding and abysmal. Context: she's running a purely vision-based system at the moment (supported by elastic proximity sensors at the snout and excellent remote sensing for both chemical and acoustic stimuli), and she still bumps against my leg several times a day. And not just once, but repeatedly. Something might be bugged here, as she has no issues with any other static objects and most dynamic ones. I'm not sure about the details, but since she's not getting any explicit upgrades to her high-resolution 3D map, she might be using an advanced SLAM system. The latter is supported by the observation that she has no trouble moving in previously unseen environments, like at the vet.
Acceleration is good, agility as well, and she's still sure-footed, so her movement predictions and their execution seem to match well. The only complaint is the frequent bumps against my leg. Maybe her system is misclassifying something, as she's not having problems otherwise. She doesn't need a LIDAR upgrade, I think.
Edit: thinking about it more, she does show this exact problem with other objects as well. Might need to debug her or get a decision-explaining version of her net installed.
Human drivers have less sensory capability than a machine driver can have, but we compensate with situational awareness from general intelligence.
For a machine, direct sensory knowledge of what's solid and how far away everything is has a lot more added value.
If the machine driver sees something it has little experience with, it can't reason about its properties, but if it knows it's a solid object in its path, it can take action in spite of that.
Don't 1.2M humans die in human-driven car accidents per year? And 20-50M suffer injuries?
Good enough for humans is one thing, but the promise of self-driving automobiles is that they can be better and safer than us. Incorporating all the available sensor data shouldn't be looked down upon.
Also humans rely on hearing and tactile response data as well as vision when driving.
Humans have a deep and subtle mental model of How The World Works. Programs, even ones with 'deep learning', don't have that. Their understanding is actually fairly shallow, which is why it can be very useful to compensate for this using sensor equipment with superhuman capabilities.
Yes, currently those models are just specialised for some particular task. If there really were a model that could understand the world like a human, well, that would be artificial general intelligence.
No. It's intuition when you drive in a snowstorm or heavy rain. A sane driver should pull over immediately and wait.
Source: driving-fail videos and personal experience. I am really scared to drive slowly and safely during a snowstorm; there is a very real risk of being rear-ended by some idiot flying at twice my speed.
In the end those are just mathematical models, not intelligence. Humans act based on their experience, and it is really hard to find training data from a snowstorm.
You do something similar in Udacity's SDC nanodegree, but you use their simulator rather than a real car. Interestingly, through trial and error on my project, the ELU activation function was the only thing that prevented vanishing gradient problems. You use the same activation function and I'm curious why you selected it. I always wonder why it was the only function that worked for me.
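For reference, the two activations in question, as a quick numpy sketch (alpha = 1.0 is the Keras default for ELU; the snippet is just for illustration, not from either project):

```python
# for illustration only: the two activations side by side
import numpy as np

def relu(x):
    return np.maximum(0.0, x)  # hard zero for x < 0, so gradient is zero there

def elu(x, alpha=1.0):
    # smooth negative tail instead of a hard zero; expm1(x) = exp(x) - 1
    return np.where(x > 0, x, alpha * np.expm1(np.minimum(x, 0.0)))
```

The negative tail is the main difference: ReLU outputs exactly zero for negative inputs (so units there stop learning and can "die"), while ELU stays smooth and keeps mean activations closer to zero.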
problem is most likely that your initialization is bad (see e.g. https://openreview.net/pdf?id=H1gsz30cKX for an explanation). make sure to use variance scaling, taking the activation into account (relu cuts variance by half). probably you need to double the variance of your initial kernel weights. make sure the initial prediction of the model before training is the same order of magnitude as a typical target, not saturated to zero or some other extreme value. batchnorm and skip connections can also ease the problems of bad initialization, so worth trying.
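e.g. in keras, something like this (a sketch, not op's actual code):

```python
# sketch: He / variance-scaling initialization in keras, sized for
# relu-family activations (scale=2.0 compensates for relu zeroing
# half of its inputs)
from tensorflow.keras import layers, initializers

he_init = initializers.VarianceScaling(scale=2.0, mode="fan_in",
                                       distribution="truncated_normal")
dense = layers.Dense(100, activation="relu", kernel_initializer=he_init)
# equivalent shorthand: kernel_initializer="he_normal"
```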
Thanks for the answer! I also tried ReLU with no luck. I figured ReLU would work better than ELU, but it didn't for this application. Would you happen to know why ELU is superior to ReLU?
>A few days ago @karpathy presented their workflow with PyTorch and also gave some numbers: to train the Autopilot system with all its neural networks you would have to spend 70,000 hours with a decent GPU, which is around 8 years (depending on which GPU you are using). In total the Autopilot is a system of 48 neural networks. When we compare this to what I will show you, you are gonna see that this is insane.
I'm very confident that it is not insane, for reasons you have yet to discover, and the arrogance of calling it insane inspires fear in me, considering what you appear to be doing on roads with other people.
Your project is very cool, but also very irresponsible. Are you using this on the road with other drivers? For the love of all things good, what are you thinking? Please clarify this. It's one thing to trust your own life to your creations, it's another thing to endanger everyone else's.
Whoa, before we start pulling out the pitchforks, I think this may be terribly distorting OP's words and intent.
- My reading of it was that OP was in awe at the immense scale of engineering that went into making Autopilot production-ready, in contrast to his own project, which gives a perspective on what it takes for a HelloWorld in this space (he comments on its limitations: "That is just lane keeping so far but it does quite a good job at doing it. It is still a bit weak when wanting to predict the angle when there is an intersection and it doesn’t see the next road and say there is an highway exit.")
- The github project looks to be using an existing recording of a car driving from the comma2k19 dataset and predicting the expected vehicular response. No pedestrians endangered.
- Not that this should matter in an ideal world, but it appears the author is a talented young programmer who's still in school. It feels a bit much to admonish a newcomer to the space for perceived arrogance and irresponsibility; and even if there were real critiques to be made here, I'm sure there's a less rude way to make them that wouldn't discourage students from their learning journey.
>The github project looks to be using an existing recording of a car driving from the comma2k19 dataset and predicting the expected vehicular response. No pedestrians endangered.
You sure? Yes, he's using a dataset for training, but nothing in his post indicates he's not running the trained model on real roads. He claims to be testing the trained models:
>I did a few tests where pedestrians were suddenly crossing the road and the model gave its best to not hit the human crossing the road.
How is he running these feedback tests? If the angle of the steering wheel changes the video input that is produced, how can he test that the AI would avoid hitting the human, unless he's testing in real life?
>My reading of it was that OP was in awe at the immense scale of engineering that went into Autopilot
He says how impossibly long Autopilot's models take to train, and then goes on to say "When we compare this to what I will show you, you are gonna see that this is insane." It sounds like we're reading this differently, but to me, that sounds like "what they're doing is so over the top; you can achieve pretty good results in this space at a fraction of that, and I can prove it."
>It feels a bit much to admonish a newcomer to the space for perceived arrogance and irresponsibility
I don't care about what he's pursuing, and if you took it that way, then understand it is not my intention. I care that he is essentially putting a drunk AI driver behind the wheel of a car on roads shared by everyone, and endangering their lives. Apparently that's an unpopular opinion, given some of the responses ("big tech companies do it and bribe politicians to get away with killing people"). But I think it's the responsible opinion.
Again, I am not testing on real roads. The sentence "When we compare this to what I will show you, you are gonna see that this is insane" was meant to say that those numbers from Tesla are insane, and that my project is not. That's what it should mean.
I think it is your use of the word 'insane' that is confusing people; I suggest rewording that passage to make what you mean clearer. Right now, to a non-native English speaker/writer/reader such as me, it reads as though you think that what they are doing is not good and that what you do is better.
I'd argue a self-driving system requires more than 48 neural networks to be able to drive.
As the number of actors in a scene (cars, motorcycles, signs, and pedestrians) increases, I'd be surprised if you didn't need something in the neighborhood of 100+ neural networks, with varying levels of importance, triggering on the car.
one could argue that what the corporations are offering is also very irresponsible.
I have a hard time faulting an individual for taking on the effort while everyone seems to want to cheer on the big dogs in the same field -- the big dogs that have been routinely shown to make irresponsible decisions with regards to pedestrian and driver safety.
imo: until everyone is more regulated with regards to sensor-driven driving, have a ball. The carnage will probably fuel the legislation that's needed to stabilize the field, sadly.
Neat project; at a minimum this should get you hired somewhere fancy. I'd love for all the self-driving car vendors to take a leaf out of your book and keep their software in-house until it agrees with what real drivers do >99.99% of the time, and the remaining 0.01% are cases where the human driver's choice led to an accident of sorts.
That is nothing short of amazing. I sincerely wish you the best of luck. Do get that high school diploma though, and more if you can; it will put you in touch with other smart people. When I had a company in Canada, two people working for me were still in high school, and later in university. One went on to become a multi-millionaire by the ripe old age of 30; the other ended up doing lots of great stuff at Google. You are on the right track :)