That's not exactly what the car in [1] does. They use stochastic optimal control methods, which are more domain specific. They perform forward simulation on a lot of trajectories and effectively pick a good one. They also use localization, so control is based more on current position than on sensor inputs. The machine learning component is the dynamics model identification: determining how the car reacts to control inputs. The model is basically a complicated function with a few inputs and outputs, which tend to be smoothly varying, so ML techniques work very well. This is fairly standard in model predictive control, since empirical motion models tend to outperform physics-based ones.
Edit: looking at the paper, they apparently use many physics-based models of the car as a basis, but then use ML to mix the models together.
Unfortunately I don't know enough to see the difference, or even understand most of your comment :). I only read about ML for fun, I don't do anything with it! If I understand it, the racecar computes potential paths it could take, while the drone looks at what caused it to fall vs. continue flying?
No worries :) the drone basically generates a big dataset of "crashing" and "not crashing" video clips from the camera. It then feeds all that into a convolutional neural net, which (after training is complete) can make obstacle-avoiding control decisions based on the camera feed. This is very "black box" in the sense that it's hard to say exactly how the system is working.
The car, on the other hand, uses hand-written algorithms to forward simulate various controls. Based on the forward simulations, it can pick controls that are predicted to give good results. Forward simulation relies on a model of how the car reacts to any possible control. However, this model is complicated because of the nonlinear dynamics involved (inertia, wheel slip, etc.). Therefore, they use ML techniques to identify the model.
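To make "forward simulate various controls and pick the best" concrete, here's a rough sketch of one common recipe (random-shooting model predictive control). The learned_dynamics and cost functions here are stand-ins, not anything from the paper:

    import numpy as np

    def pick_control(state, learned_dynamics, cost, horizon=20, n_samples=500):
        # Sample candidate control sequences, roll each one forward
        # through the ML-identified dynamics model, and return the
        # first control of the cheapest rollout.
        best_cost, best_u = np.inf, None
        for _ in range(n_samples):
            controls = np.random.uniform(-1.0, 1.0, size=(horizon, 2))  # (steer, throttle)
            s, total = state, 0.0
            for u in controls:
                s = learned_dynamics(s, u)  # learned model, not hand-written physics
                total += cost(s, u)         # e.g. distance from the racing line
            if total < best_cost:
                best_cost, best_u = total, controls[0]
        return best_u

Only the first control actually gets executed; the whole thing reruns at the next timestep with fresh state.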
We write programs that predict how the car will drive given steering inputs. Because we're not sure, we write several programs that give slightly different answers.
Given a driving input, all the programs predict the future: the parent comment called this "forward simulation."
We pick the program that has worked well in the past and do what it says to do - that program steers the car.
We measure what actually happens to the car. We then remember which algorithm actually gave us the right answers (might be different from the one we picked to steer) - next time, we'll trust that one more.
Because it's annoying to keep writing more programs, we figure out what we can tune - like a left / right balance knob on the stereo or a bass / treble knob. In this case, it might be a "ground slipperiness" or friction knob.
So as well as picking the programs, we ask the algorithm to tweak the "friction" knob and try to pick a setting that seems to match reality.
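A toy version in code, where everything (the slip values, the update rules) is invented just to show the shape of the idea:

    import numpy as np

    # Several "programs" that give slightly different answers: each one
    # predicts the car's next speed from the current speed and a throttle
    # input, and they all share a tunable "friction" knob.
    def make_model(slip):
        def predict(speed, throttle, friction):
            return speed + throttle - slip * friction * speed
        return predict

    models = [make_model(s) for s in (0.05, 0.10, 0.20)]
    scores = np.zeros(len(models))  # running prediction error per model
    friction = 0.5                  # the knob we also tune online

    def update(speed, throttle, observed_next):
        # Score every model against what actually happened, with a
        # decaying memory so recent performance counts the most.
        global friction
        preds = np.array([m(speed, throttle, friction) for m in models])
        scores[:] = 0.9 * scores + np.abs(preds - observed_next)

        # Crude knob tuning: try friction a little higher and lower,
        # keep the setting whose best model is closest to reality.
        best = lambda f: min(abs(m(speed, throttle, f) - observed_next)
                             for m in models)
        for candidate in (friction - 0.01, friction + 0.01):
            if best(candidate) < best(friction):
                friction = candidate

    def pick_model():
        # Steer with the program that has worked best in the past.
        return models[int(np.argmin(scores))]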
---
In the flying case:
We make a "black box" full of sheets of numbers and put a picture into one side. Each dot in the picture does some maths with the first sheet of numbers which makes a new "picture" for the next sheet.
We run maths based on remembered numbers and the answer (say 0.0 - 1.0) tells us "safe to fly" or not. Let's say 1.0 is safe (0.0 unsafe, in-between unsure).
Once we figure out that a given picture was safe, we go backwards through the sheets of numbers to apply "back propagation" and change them - we make the "safe" picture output something closer to 1. Perhaps it output 0.50 before; now that same picture outputs 0.51. If the picture was unsafe, we adjust the other way.
We do that LOTS of times. Eventually safe pictures output 0.91 and unsafe ones 0.12 or something. We show the computer a new picture, and we call the answer "safe" (say 0.8-1.0), "unsafe" (0.0-0.2), or "unsure" (0.2-0.8). We fly only towards pictures which are "safe".
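In code, the "sheets of numbers" story looks roughly like this (a PyTorch sketch; the layer sizes, the 64x64 frame size, and the cutoffs are either from above or made up):

    import torch
    import torch.nn as nn

    # The "sheets of numbers": a small convolutional net mapping a camera
    # frame to a single safe-to-fly score between 0.0 and 1.0.
    net = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(32 * 13 * 13, 1),  # final sheet -> one number (for 3x64x64 frames)
        nn.Sigmoid(),                # squash to 0.0 (unsafe) .. 1.0 (safe)
    )
    opt = torch.optim.SGD(net.parameters(), lr=0.01)
    loss_fn = nn.BCELoss()

    def train_step(frames, labels):
        # frames: a batch of pictures; labels: 1.0 = kept flying, 0.0 = crashed
        opt.zero_grad()
        scores = net(frames).squeeze(1)
        loss = loss_fn(scores, labels)
        loss.backward()  # the "go backwards through the sheets" part
        opt.step()       # nudge every sheet of numbers a tiny bit
        return loss.item()

    # After LOTS of train_step calls, fly only toward frames scoring > 0.8.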
Everyone pops champagne. We didn't learn much - only that lots of numbers can solve more wacky problems than before. It's hard to generalise what the computer "learnt" or really understand it.
The bird thing is not as big of a deal as some seem to think. Wind turbines kill a couple hundred thousand birds each year [1]. In contrast, cats kill hundreds of millions of birds each year [2]. People still like cats. I also don't think the noise concern is a reason to discount wind turbines.
Of course, dirty alternatives have their own toll on wildlife.
Also, chickens are birds. Not wild birds, but birds. If the concern for birds is some kind of animal rights perspective, chickens are killed by the billions per year. If people want to prevent needless bird deaths, that seems like the place to start.
I think the main issue for most people is biodiversity and ecological impact rather than sentimentality. Very few wild animals pass away quietly in their sleep.
True! I just read about the "decimation of Aethelwulf" - the ninth-century king of Wessex (in what is now England) gave away a tenth of his lands in order to secure the kingdom during his pilgrimage to Rome.
Cool! I like reading about that stuff, especially since I've also been watching the semi-historical fiction of "Vikings". The show is enjoyable even when it's not historically accurate, and it gets me to read about the things that actually happened.
I watched "The Last Kingdom" from the BBC (which I recommend) and it made me want to know more about 9th-century England. So far the book "Alfred the Great: The Man Who Made England" by Justin Pollard has been fantastic if you're into that kind of thing. It's very well researched and historically accurate but not overly dry; Pollard makes a point of not including too many footnotes, as many of the more academically targeted books do.
Reminds me of when a family friend of mine (a professor of civil engineering) built his own barn on his property. He got permits and did everything by the book, but still got hassled over tons of details. The inspector almost couldn't grasp the idea that the tens of thousands of stainless steel screws he had bought and used significantly exceeded the specs of the nails required by code. He did manage to prevail eventually.
Not sure if this was the issue, but some grades of stainless steel screws can't be used with treated lumber because of an increased corrosion risk. The rule might have been put in place to simplify inspections.
"All stainless steels may not be acceptable for use with preservative treated wood. Testing has shown that Types 304, 305 and 316 stainless steels perform very well with woods that may have excess surface chemicals. Type 316 stainless steel contains slightly more nickel than other grades, plus 2-3% molybdenum, giving it better corrosion resistance in high chloride environments prone to cause pitting such as environments exposed to sea water."
Nails have higher shear strength and screws have higher tensile strength. That's the reason nails are used for framing, and those nails are typically coated in a heat-activated cement to prevent pull-out.
My aunt's also a civil engineer, and she has a much more mundane explanation for why nails are OK and screws are not: if you use nails, someone's already calculated how many you need to ensure the structure isn't going to fall down. The strength of the nails has been calculated, their material properties are known, etc. The engineering work has been done.
If you use screws, you could have an engineer sign off on that structure (and then the inspector would let it pass), but the engineer would need to:
1) Find a data sheet on the screws you're using,
2) Do the calculations to show that you're using enough of them, and in the right places, to ensure the structure will stay standing,
3) Be willing to then sign off on the structure.
Depending on the screw, that data sheet may or may not exist.
I'm surprised your family friend didn't know this, or left that detail out of the story. Maybe he worked in a branch of civil engineering (sidewalks, sewers, etc.) other than structural engineering?
tl;dr: If you can pay an engineer to do the calcs (and sign off on it) to show that your screw-based structure will stay standing, the inspector won't be any problem.
Is that so bad? Granted maybe it's annoying looking at old threads, but as a user, the ability to go through and remove old and potentially regrettable posts is quite welcome.
Maybe a good compromise would be to remove the user information after a certain time period (~2 years). Hashing the username salted with the post title would be a decent way of systematically respecting user privacy while also keeping old threads readable. I wouldn't mind if HN did this.
You'd be surprised how much you could piece together with obfuscated (but still unique) usernames. I'd be in favor of your system if the hash were salted with the article's id, so that the hash of my username in one article was different from the hash of my username in a different article.
One day I'm going to run for office and I'm going to have to get lawyers to scrub HN of all my comments because they have no way for users to manage their content :-)
Yeah that was my reasoning. The salted hash would be an easy way to implement single-thread username consistency.
Edit: for better readability it could be further mapped into a table of human-readable handles, similar to how Google does the "Anonymous Lemur" thing in Google Docs.
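Something like this sketch, where the word lists are obviously made up:

    import hashlib

    ADJECTIVES = ["Anonymous", "Curious", "Quiet", "Rapid"]
    ANIMALS = ["Lemur", "Otter", "Falcon", "Badger"]

    def thread_handle(username, article_id):
        # Salting with the article id gives the same user a consistent
        # handle within a thread but a different one in every thread.
        digest = hashlib.sha256(f"{article_id}:{username}".encode()).digest()
        return f"{ADJECTIVES[digest[0] % len(ADJECTIVES)]} {ANIMALS[digest[1] % len(ANIMALS)]}"

    # Same user + same thread -> same handle; different thread -> (probably) different.
    assert thread_handle("pg", "item1") == thread_handle("pg", "item1")

With real word lists you'd want enough combinations that two users in one thread rarely collide.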
I'm not sure if rights is the correct term to use here. Those are choices made by the host about the experience they want to provide, and by the members who have chosen to participate and accept the host's terms, aren't they? Well, I guess rights of some sort are involved, the commenter ceding copyright or some such to the host.
I'm not a lawyer, and admittedly I haven't taken the time to look up the appropriate legal terms and other minutiae involved. I'm relying on the kindness of my fellow commenters to extend me the benefit of the doubt about what I'm getting at, and to helpfully clarify anything that needs it. Thanks :)
I don't have any right to that when I'm a user. However, I run various sites and know that it's a terrible experience for people coming from Google, so I don't allow it on my own site. Once users submit their content, they lose any right to have it modified or removed.
I agree. Interns also generally receive no extended benefits like 401(k), stock, medical, and so on, so the comparison isn't very fair in terms of total comp.
I mean free car rental. I got one at Oracle. I presume it's common, as it's very hard to get around in the US without a car; there's often not much public transport.
I've had several big-name tech internships, and I got an internship offer from a bank to do tech work. The bank offered around 2/3 of the comp with significantly worse benefits, which surprised me considering that a) they were in NYC, b) they were a big name, and c) they approached me, not the other way around. Not sure if they pay the trader-type interns more...
It's pretty well known that top tech companies pay more than top banks. SWE interns at top tech companies will make more than trading interns at top banks and entry-level SWEs at top tech companies will make more than entry-level traders at top banks.
Is there any way that I can privately chat with you? I will soon have to decide whether to get into HFT or traditional SWE work. I have a good idea of what a traditional SWE career path entails, but HFT is quite opaque (from my vantage point).
Can you respond to this message with an email address or put one in your profile? Feel free to use a temporary email address, but I won't say anything controversial anyway as I'm not interested in losing my job.
I believe most banks pay tech interns the same as trading interns. I've heard that Capital One pays tech interns the most out of the big banks, but I have no personal experience with that one.
The expected value of both strategies is 1 success, but they have different variance. The first strategy has a variance of 0.99, and the second strategy has variance 0. The chance of n out of 100 hits with the first strategy is: binomial(100, n) × 0.01^n × 0.99^(100-n)
edit: asterisks as multiplication signs => italics
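For anyone who wants to check the numbers, scipy will do it (assuming the first strategy is 100 tries at p = 0.01):

    from scipy.stats import binom

    dist = binom(100, 0.01)
    print(dist.mean())  # 1.0  -> same expected number of successes
    print(dist.var())   # 0.99 -> vs. variance 0 for the sure-thing strategy
    print(dist.pmf(0))  # ~0.366, the chance of no hits at all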
I built a desktop with a quad-core i5, 16GB of RAM, a GTX 970, an Intel SSD, and around 2TB of spinning-disk storage. The whole build cost around $700 and it's a very fast machine. I also have 2x 1440p 25" monitors, a mechanical keyboard, and recently added a (normal-looking) gaming mouse. I use it primarily for programming/research and occasionally games/Oculus Rift. It also makes a great web server.
I run Ubuntu with the i3 window manager. I really like this combo after some personalization. It's very stable and lightweight. I also have two Windows versions and a secondary Linux installation, which come in handy.
I also have a 2013 15" rMBP of which I think very highly. I can mount my 2TB of desktop storage and my SSD as network drives on the MacBook for sharing files. I also use SSH to run intensive scripts (sometimes GPU stuff) on the desktop.