
What prevents you from having a build server that uses AMD CPUs? Budget or physical space limitations? Power consumption concerns? Too complicated to have different build and test boxes? Large binaries and debug symbols that would result in no effective gain once network transfer time is accounted for?


In theory, nothing, and that's a good idea that I've considered. My past experience with distributed builds (distcc, icecc) hasn't been that great. They've worked, but they tended to break down or slow down, and I ended up spending more time maintaining my setup than I gained back in compile time. Perhaps things have improved. I still have bad memories of trying to diagnose why the network transfers had slowed to a crawl again (on a local gigabit network).
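
For reference, the kind of setup I mean was roughly the following (the hostnames and job counts here are made up, and icecc is configured differently, with a central scheduler, but the distcc side looked something like this):

  # hypothetical hosts; the /N suffix is how many jobs each machine will accept
  export DISTCC_HOSTS="localhost/4 buildbox/16"
  # then build with the compilers wrapped by distcc
  make -j20 CC="distcc gcc" CXX="distcc g++"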

The other demotivator is noise. I work from home, and the other family members who share my office aren't keen on the sound of a fully spun-up build server. I could run an ethernet cable under the house to another room, maybe. (Relying on wifi bandwidth hasn't worked out very well. If only debuginfo weren't so enormous...)


> network transfers had slowed to a crawl again (on a local gigabit network)

10 gigabit is pretty cheap nowadays. Just in case your problem could simply be solved with higher bandwidth...


It very rarely hit the actual bandwidth limit. It would start out close to it for a while, then drop down. And down. And down. Until it was using like 2% of the full bandwidth, but never completely stalling.
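
If I were debugging it again, I'd probably start by checking whether the raw link degrades the same way over a long run, e.g. with iperf3 on both ends (hostname made up):

  # on the build box
  iperf3 -s
  # on my workstation: push traffic for 5 minutes, report every 10 seconds
  iperf3 -c buildbox -t 300 -i 10

If that stays flat while the build traffic decays, the link itself is fine and the problem is somewhere in the distcc/icecc layer.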



