
How speedy is the Rust tooling itself these days? I remember wishing for an 'optimize nothing' or even 'just interpret' mode. Compile times that noticeably contribute to the feedback loop are a serious killjoy.
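For what it's worth, an 'optimize nothing' mode largely exists already: Cargo's default dev profile builds with optimizations off, and only `--release` turns them on. A minimal sketch (assumes a standard Cargo project in the current directory):

```shell
# Default dev profile: opt-level 0, faster compiles, slower binaries.
cargo build

# Release profile: optimized, noticeably slower to compile.
cargo build --release
```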


Compile times are still a bit much, but there are ways around it:

- The insanely long times are only for the initial build. Further builds are incremental and more tolerable.

- Compilation errors can be checked with 'cargo check', which skips the code generation step of a full build. I find myself running it far more often than actual builds, so it's a significant time saver depending on how frequently you use it.

- You can extend the incremental builds mentioned above using sccache [1]. At a minimum, it lets you share the build cache between all your local projects, which saves time if your projects or other builds share a lot of common libraries (and that's very common in Rust). But sccache can go further, using online build caches (for example, backed by S3) that can be shared between hosts. Finally, sccache also supports distributed builds if you have a few machines sitting idle (like distcc with extra features).

[1]: https://github.com/mozilla/sccache
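The workflow above can be sketched as follows (assumes sccache is installed, e.g. via `cargo install sccache`):

```shell
# Fast feedback loop: type-check only, no code generation.
cargo check

# Route rustc through sccache so compiled artifacts are cached and
# shared across all local projects.
export RUSTC_WRAPPER=sccache
cargo build

# Inspect cache hit/miss statistics.
sccache --show-stats
```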


There have been significant (if not earth-shattering) improvements in the compiler itself. But for me at least, the bigger change has come from better hardware. I now have a processor (Apple M1 Pro) that's 10x faster multi-core and 2x faster single-core than the one I had when I first started programming in Rust (an Intel dual-core in a 2015 MBP), and that seems to have translated almost perfectly into faster compile times.




