
From what I can tell, only Apple even wants to try doing any of the processing on-device, including parsing the speech. (This may be out of date at this point, but I haven't heard of Amazon or Google doing on-device processing for Alexa or Assistant.)

So there's no way for them to do anything without sending it off to the datacenter.

> (This may be out of date at this point, but I haven't heard of Amazon or Google doing on-device processing for Alexa or Assistant.)

It was out of date six years ago.

"This breakthrough enabled us to create a next generation Assistant that processes speech on-device at nearly zero latency, with transcription that happens in real-time, even when you have no network connection." - Google, 2019

https://blog.google/products/assistant/next-generation-googl...


Alexa actually had the option to process all requests locally (on at least some hardware) from launch until earlier this year, roughly its first ten years. The stated reason for removing the feature was generative AI.

It's an obvious cost optimization: make the consumer directly cover the cost of inference, and of inference hardware sitting idle, rather than running it in the datacenter.


