_rob's comments

Wow, that suspension is amazing. If you don't have time to read the entire article, you should at least watch the video demonstration.

I wonder why they don't integrate it into ambulances. In that application, neither cost nor weight should matter as much.


I assume he means more physically, and in that sense he is correct. You can pull out your smartphone or tablet basically anywhere and start using it on short notice, something you cannot really do with a desktop or laptop.


You might know more as a pilot, but I was under the impression that birds don't get out of the way of big, loud flying things. I found a source showing that birds don't get out of the way of cars once the cars exceed a certain speed, but I couldn't find anything for airplanes.

http://rspb.royalsocietypublishing.org/content/282/1801/2014...


Can you give me an example?

And why not have every feature that can run locally actually run locally? If it's possible for my computer to understand me entering an appointment, why should that go to an MS server to be stored forever?


How do you think your computer can understand 'entering an appointment'?

There's a lot more that goes into understanding than JUST speech recognition. First of all, speech recognition by itself isn't exactly trivial, and that's become more and more obvious as we've seen even a slight accent trip up the digital assistants on all the major phones. Yes, technically, Dragon NaturallySpeaking existed a decade ago and worked somewhat, but it needed a LOT of training and was dumb as a brick. It doesn't compare.

But beyond that, understanding the meaning of the spoken word is difficult too. Yes, natural language toolkits exist, and they can be very good, but you really need something that a team is actively administering. They can identify pain points and push regular updates to handle things like an odd band name that is ALWAYS misunderstood, or some odd combination of words that confuses a question with a 911 call; otherwise you're just going to end up frustrated.
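To make that brittleness concrete, here's a minimal sketch of keyword-based intent matching (the rules and intent names are made up, not how any shipping assistant actually works). An unanticipated phrasing routes to the wrong intent, and that's exactly the kind of thing a maintaining team has to keep patching:

    # Hypothetical keyword -> intent rules; a real assistant's tables
    # are vastly larger and under constant maintenance.
    INTENT_RULES = [
        ("911", "emergency_call"),
        ("play", "play_music"),
        ("what", "web_search"),
    ]

    def match_intent(utterance):
        text = utterance.lower()
        for keyword, intent in INTENT_RULES:
            if keyword in text:
                return intent
        return "unknown"

    print(match_intent("play something by Sigur Ros"))  # play_music, fine
    print(match_intent("what happened on 911"))         # emergency_call, not web_search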

I should also mention that a digital assistant really needs the power of a full search engine behind it. This allows for auto-correction of mispronounced words, but it also allows near-instant lookups of relevant information. If this were running on your local machine, not only would the processing be slow for some things, it would also be more limited in its ability to fully process all possible meanings, and it would need to be updated CONSTANTLY.

These companies, by putting the language processing in the cloud, are throwing teams and hardware at the problem, and yet they STILL have embarrassing difficulties when it comes to actually understanding you sometimes. Consider that for a moment... hundreds, even thousands, of servers running the latest natural language processing software for many millions of people aren't capable of getting your meaning 100% of the time.

Incidentally, I realize that there are some open source projects out there that do some rudimentary voice recognition and processing; however, they suffer from the same issues addressed above and are MUCH more limited in many, many ways. Many of them still make use of cloud-based services for processing the audio, btw. The one advantage, I will say, is that you have the ability to add your own custom commands and actions, which the major systems obviously don't allow.
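For example, here's a rough sketch using Python's speech_recognition library (the command table is hypothetical). Note that the default recognize_google() backend ships your audio off to Google's web API, while the offline recognize_sphinx() backend keeps things local at a real cost in accuracy:

    import speech_recognition as sr

    # Hypothetical user-defined commands -- the kind of customization
    # the big closed assistants don't expose.
    COMMANDS = {
        "start the coffee": lambda: print("brewing..."),
        "lock the door": lambda: print("locking..."),
    }

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)

    try:
        # Sends the captured audio to Google's web API -- even this
        # "open source" stack leans on the cloud by default;
        # recognize_sphinx(audio) would keep everything local instead.
        text = recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        text = ""  # the recognizer couldn't make sense of the audio

    for phrase, action in COMMANDS.items():
        if phrase in text:
            action()
            break
    else:
        print("no matching command:", text)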

