
I get that, but the solution is still possible and should be acceptable -- keep a One True Copy of a .multirust directory which you install on all testing servers and dev setups. No network necessary. Enterprise setups already do far more complicated things with PyPI clones and whatnot.

Again, having rustup installed with two preinstalled and pinned compilers should not be much different, acceptability-wise, from having one preinstalled and pinned compiler.
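(For illustration: rustup can register toolchains from local directories without any network access. The names and paths below are made up.)

    # register two pre-approved, locally-unpacked toolchains with rustup
    rustup toolchain link pinned-stable /opt/rust/stable-1.15.0
    rustup toolchain link pinned-nightly /opt/rust/nightly-2017-01-01
    # make the pinned stable the default for everyone
    rustup default pinned-stable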



I humbly suggest that saying something "should be acceptable" when it's been explained at length that, regardless of whether you think it's acceptable, other people do not think so, is exactly the problem yazaddaruvala was describing above. If you are trying to market to a population, you need to take their needs seriously with a good faith effort. Not understanding is fine. Doing a cost-benefit analysis and determining there's just no resources for what's needed is fine. Telling people that their needs are unjustified doesn't come across very well to those that still feel those needs.


I'm saying it should be acceptable within the parameters presented to me so far. Please explain why two compilers with a switching program is bad, whereas one compiler isn't. I am trying to understand the needs and come up with a solution, but so far they have been presented in bits and pieces. I am working off what has been told to me so far and my own experiences in locked-down environments, which admittedly weren't as locked down as the ones you describe. I am also comparing with what other "enterprise-ready" languages require for tooling to try to understand what is "acceptable". You keep telling me that I am not aware of the needs of enterprise users. Fine, I concede that. Please explain them to me. My intention in providing possible "acceptable" solutions is to find out what use case prevents them from working and see how Rust can address that. I am not telling people their needs are unjustified.

...this is also why I asked for this to be discussed further over email; even though the media are similar, I'm able to discuss things more clearly there.


> Please explain why two compilers with a switching program is bad, whereas one compiler isn't.

Having any output that's based on a person remembering to set a configuration is less useful in these situations than having the configuration hard coded. (You don't want someone hunting down the correct config in some company wiki, much less working from memory; at most you want a config copied from a repository of configs.)
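(For what it's worth, rustup can hard code this per project, if I understand its toolchain-file feature correctly: a file checked into the repository, so the choice lives in version control instead of someone's memory. The date below is just an example.)

    # contents of a rust-toolchain file at the project root;
    # rustup consults it automatically when invoked in that directory
    nightly-2017-01-01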

Having a utility download a binary or source from the internet is less useful in these situations than serving it locally. (You can't control that the remote resource is still there, is the same, is secure).

Knowing that the exact same stuff that works in whatever development environment(s) you have (local, shared, both) works the same as when it's pushed to some further resource (automated build server, automated test harness, etc) is important in preventing bugs and problems.

In the environment I had experience with, we had a mainly Perl ecosystem running on CentOS servers. We created RPM packages for every Perl module we pulled in as a dependency for our system or back-end RPC servers. Everything we installed on these boxes got an RPM built if at all possible. We maintained our own yum repository that we installed these packages from. While it was trivial to run cpan to install a module on these systems, that was not deemed acceptable for anything going into production. Rustup would not have been allowed into production here, nor on some of the shared dev and testing boxes we had, since that wouldn't lead to being able to build the exact same binaries easily and definitively. The absolute last thing you want is a problem that loses data/config, and to find that you're not sure how to reproduce the last build environment exactly.
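(The per-module packaging was mostly mechanical; with a tool like cpanspec the loop was roughly the following. The module name is a stand-in.)

    # generate a spec file from a CPAN distribution, then build the RPM;
    # the result gets signed and pushed to the internal yum repository
    cpanspec Some-Module-1.23.tar.gz
    rpmbuild -ba perl-Some-Module.spec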

Does rust make assurance that things should build correctly with later versions? Yes. Does that really matter when you're talking about hundreds of thousands to millions of dollars? Not without a contract. That's one reason why enterprises use distributions that provide this feature, and build their own packages and deploy them where those distributions fall short.


> Having a utility download a binary or source from the internet is less useful in these situations than serving it locally

I already addressed that; I'm using rustup as a toolchain manager, not for downloading. You can have a .multirust directory that everyone uses and use rustup to switch. You could alternatively set rustup to download from an internal server that contains a reduced set of preapproved binaries only. I'm assuming that the external network is turned off here; it usually is in these situations. If you want a tool that manages toolchains but does not contain code that accesses the internet, that's a more stringent requirement that rustup doesn't satisfy; though I do hope Rust's distro packages work with update-alternatives.
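(Roughly like this, assuming I'm remembering the environment variable right; the mirror host is made up.)

    # point rustup at an internal mirror instead of the public dist server
    export RUSTUP_DIST_SERVER=https://rust-mirror.corp.example
    rustup toolchain install 1.15.0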

You can also create an rpm for your specific two-compiler setup, of course. That's annoying though.

> Having any output that's based on a person remembering to set a configuration is less useful in these situations than having the configuration hard coded

Set default to stable, and `alias fuzz='rustup run nightly cargo afl'` :)

Again, this is for tooling, you can easily paper over the fact that the tool uses a different compiler.

Rust does have a tool (crater) that runs releases on the entire public ecosystem before releasing. There is an intention to make it possible for private code to take part in this too and provide feedback. But you're right, it is still unreasonable to expect enterprise users to trust that updates will work :) Any stability guarantee isn't enough, since bugs happen.


> I already addressed that; I'm using rustup as a toolchain manager, not for downloading.

I missed that, but see no problem with that.

> You could alternatively set rustup to download from an internal server that contains a reduced set of preapproved binaries only.

It's better, but less ideal from a system maintainer's point of view than distro packages, because multiple package systems (which is essentially what rustup is in that use) is more work. It may be the best overall solution depending on how well multiple rust distro packages can be made to play together. Distro packages to provide the binaries and rustup (or something) to just switch between local binaries would be ideal for the system maintainer that wants control. I understand it's less ideal for the rust developer (as in someone that works on the rust ecosystem, as opposed to someone that works in the rust ecosystem), because the goals are different.

> If you want a tool that manages toolchains but does not contain code that accesses the internet, that's a more stringent requirement that rustup doesn't satisfy

Less that it can't (but that is a reality some places), more that it definitely won't, and someone exploring in it won't make it do so accidentally. Don't let the new dev accidentally muck up the toolchain.

> You can also create an rpm for your specific two-compiler setup, of course. That's annoying though.

Annoying for those wanting to get new rust versions out to people, and annoying for devs that want the latest and greatest right away, but only slightly annoying, and easily amortized, for those that need to support the environment (ops, devops, sysadmin, whatever you want to name them).

> Set default to stable, and `alias fuzz='rustup run nightly cargo afl'` :)

If I was tasked with making sure we had some testing system in place that ran some extensive code tests in QA or something, or for a CI system, I wouldn't rely on that. It works, it's simple, but when it breaks, the parts I have to look at to figure out why and how are all over the place, and deep.

Did someone change rustup on the system?

Did someone change the nightly that's used on the system?

Did someone muck up the .multirust?

If any of those happened, what was the state of the prior version? What nightly was used, what did the .multirust look like, did a new nightly catch a whole bunch more stuff that we care about, but aren't ready to deal with right now and is causing our CI system problems?

Theoretically I would build a $orgname-rust-afl RPM, and it would have a $orgname-rust-nightly RPM dependency. $orgname-rust-afl would provide a script that's in the path, called rust-afl-fuzz, which runs against the rust-nightly compiler (directly, without rustup if possible; fewer components to break) to do the fuzzing. RPM installs are logged, the RPM itself is fully documented on exactly what it is and does, and after doing so once, all devs can easily add the repo to their own dev boxes and get the same setup definitively, and changing the RPM is fairly easy after it's been created. DEB packages shouldn't be much different, and I don't expect other distros to be either.
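(A rough sketch of what the wrapper package's spec might look like; every name and detail here is illustrative.)

    Name:           orgname-rust-afl
    Version:        1.0
    Release:        1%{?dist}
    Summary:        Pinned afl fuzzing wrapper for internal Rust code
    License:        Proprietary
    BuildArch:      noarch
    Source0:        rust-afl-fuzz
    Requires:       orgname-rust-nightly

    %description
    Installs rust-afl-fuzz, which invokes the pinned nightly compiler directly.

    %install
    install -D -m 0755 %{SOURCE0} %{buildroot}/usr/bin/rust-afl-fuzz

    %files
    /usr/bin/rust-afl-fuzz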

What did I get out of this? Peace of mind that, almost no matter what happened if my pager went off at 9 PM (or 3 AM!), weird system interactions, automatic updating, stupid config changes, etc. weren't likely the cause of the problem, and if worse came to worst, I could redeploy a physical box using kickstart and our repos within an hour or two. When you have a pager strapped to you for a week or two at a time, that stuff matters a lot.

To achieve this we went so far as to develop a list of packages that yum wasn't allowed to automatically update (any service that box was responsible for) while everything else auto-updated. Available updates for those packages were automatically reported to a mailing list so someone could go handle them manually: remove one of the redundant servers from the load balancing system, update and restart the service (if not the server), re-join it to the load balancer, and then move on to the next server, for zero-downtime updates.
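(Mechanically, the hold-back list was just yum's exclude directive plus our own reporting; something like this, with invented package names.)

    # /etc/yum.conf -- keep the services this box runs out of auto-updates
    exclude=orgname-rpc-server* orgname-webapp*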

The yum stuff was handled through a yum-autoupdate postrun script (yum-autoupdate being a CentOS-specific package providing a cron job). Yum-autoupdate didn't support this feature, so we added it as a feature of that configuration, made our own RPM to supersede CentOS's version of the package, then created another RPM for the actual postrun script to be dropped in place, and added them to our default install script (kickstart). We were able to drop support for our version of yum-autoupdate when CentOS accepted our patch.

All that's really just a long-winded story to illustrate that sysadmins like their sleep. If your tool reduces the perceived reliability of our systems, expect some pushback. If your tooling works well with our engineering practices, or at least we can adapt it easily enough, expect us to fight for you. :)

Rustup is great, but when I was at this job, I would have had little-to-no use for it (besides maybe looking at how it works to figure out how to make an RPM, if one didn't exist to use or use as a base for our own). I know, because that's the situation perlbrew was in with us.

> Again, this is for tooling, you can easily paper over the fact that the tool uses a different compiler.

Sure, but in this scenario papering over is less important than being easily discoverable and extremely stable.


> Distro packages to provide the binaries and rustup (or something) to just switch between local binaries would be ideal for the system maintainer that wants control

I think in this case update-alternatives or some such would be better? Not sure if it can be made to work with a complicated setup that includes cross toolchains. But I agree, in essence.
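(Something like this, assuming the distro packages install each toolchain under its own prefix; the paths are made up.)

    # register both locally-installed compilers, then pick one system-wide
    update-alternatives --install /usr/bin/rustc rustc /opt/rust-stable/bin/rustc 20
    update-alternatives --install /usr/bin/rustc rustc /opt/rust-nightly/bin/rustc 10
    update-alternatives --config rustc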

But anyway, the local rustup repo thing was just an alternate solution; I prefer distributing the .multirust directory.

> Don't let the new dev accidentally muck up the toolchain.

> ...

> If I was tasked with making sure we had some testing system in place that ran some extensive code tests in QA or something, or for a CI system, I wouldn't rely on that. It works, it's simple, but when it breaks, the parts I have to look at to figure out why and how are all over the place, and deep.

Abstracting over rustup fixes this too. Keep .multirust sudo-controlled and readonly, don't make rustup directly usable, and just allow the tooling aliases.
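(A minimal sketch of that lockdown, assuming RUSTUP_HOME is the right knob; the paths and the alias are illustrative.)

    # shared, root-owned toolchain store; devs get read/execute only
    sudo chown -R root:root /opt/multirust
    sudo chmod -R a+rX,go-w /opt/multirust
    # everyone points at the shared store, and only blessed aliases touch rustup
    export RUSTUP_HOME=/opt/multirust
    alias fuzz='rustup run nightly cargo afl'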

> directly, without rustup if possible; fewer components to break

Yeah, this is possible. It's pretty trivial to integrate afl-rs directly into a compiler build, perhaps as an option. You can then just build the stable branch of Rust (so that you get backported patches) and use it.

> Rustup is great, but when I was at this job, I would have had little-to-no use for it

Right; as you've said, there are other options out there to handle this issue :) Which is good enough.

When you do care about reproducibility but don't want to repackage Rust, rustup with a shared readonly multirust dir can be good enough. Otherwise, if you're willing to repackage rust, that works too :)


Sure, and to be clear, rustup works perfectly fine for my current needs. I just play around with it a bit when I have time, and even if I were to use Rust in production, I would use rustup in my current environment (where the dev team consists of me, myself, and I ;) ). Almost all the benefits of controlling the packaging go right out the window when there are very few devs involved and they aren't expected to change.



