Hacker News

I think they do. On one hand, for building complex embedded systems it sure is nice to be able to get an Arm processor, Linux, a high-level language, etc. Getting all that set up takes time, but it's worth it since the rest of the development goes so much faster.

But I still use a ton of AVRs and similar small-memory micros. For each big micro I use, there are a few small micros in support. It's so much easier to do hard real time without an OS, pipelines, etc., when you can just count instructions and know the timing.

As the bigger micros continue to move down-market, the small ones will get even smaller and lower-power, opening up new markets.



There are already quite a few multi-core microcontrollers available; why not use them instead?


Currently I'm working on instrumentation amplifiers, and the analog signal can't travel very far, so each one has its own AVR controlling it. Then there is a high-powered micro that collects and stores all the data.

Or there may not be enough pins. You might use a small micro to watch all the user-interface buttons (think of a keyboard controller) and communicate the state to the large micro.


A stereotypical answer FigBug didn't mention: interrupt latency is much lower, and jitter in the interrupt latency sometimes doesn't even exist.

Also, closely related to the above: simplicity and modularity. You don't want to be the guy debugging why the antilock brakes occasionally miss an encoder transition because the engine-RPM fuel injector routine runs at just perfectly the wrong interrupt rate. It's very much Unix vs. Windows philosophy WRT monolithic-ness.


I think multi-core MCUs have interrupt independence per core.



