I was curious how large these static binaries would be since the last announcement. Honestly, 100 MB is quite a bit larger than I was expecting. Probably fine for CI business apps and such, but it kinda rules out a lot of Linux util type things.
I seem to have missed the part where he explains why he's targeting Alpine Linux for his Swift app. Alpine isn't mainstream (the way Ubuntu or RHEL are) and stands out mostly for being compact and clutter-free.
So... indeed... if you somehow end up with a 100 MB package for what is essentially just 250 kB, I'd say something did not go right. Feels very square-peg-in-round-hole.
If I had to guess, Alpine is a very popular choice for building container images in Docker/Kubernetes/whatever the new hotness is since I last worked with containers. Mostly because the aforementioned small size and low overhead add up if you’re at any sort of scale (even one instance on top of your desktop OS).
If you’re wanting to containerise the program, maybe it’s less resource-intensive to add those things to Alpine than to run another distro with more support? Obviously only speculation.
Doesn't Kubernetes deduplicate layers by hash? I thought the key to minimizing overhead was standardizing on a limited set of images across everything you'll be running on the same host.
Just from reading HN it seems like Alpine had a brief fad a few years ago but never got much traction.
The underlying container runtime (usually containerd) will dedupe shared layers, but there are a lot of things you don't get to directly control, like third-party apps, and the bottleneck when spinning up new nodes is real. Plus, there are environments where there isn't much caching, like CI.
The Alpine Linux docker image is 7.8 MiB to Ubuntu's 76 MiB, so they get a savings of about 68 MiB in image size by using Alpine Linux over Ubuntu.
From the stats in TFA it looks like about 43 MiB of the file size is the Swift runtime itself, which would need to be installed in any OS. This leaves ~57 MiB extra in their static binary approach vs what they'd get out of dynamic linking.
68 MiB (saved by using alpine) - 57 MiB (lost to static linking) = 11 MiB (net gains from Alpine), so their Alpine Linux solution is actually about 10% smaller than the equivalent that uses an Ubuntu image.
Is that worth the extra work they put into it? It probably depends on the application.
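For what it's worth, the back-of-envelope comparison above can be checked mechanically. A throwaway sketch in Go; every number here is an estimate quoted in the thread, not a measurement:

```go
package main

import "fmt"

// All numbers are the MiB estimates quoted in the thread, not measurements.
const (
	alpineImage  = 7.8   // Alpine base image
	ubuntuImage  = 76.0  // Ubuntu base image
	staticBinary = 100.0 // static Swift binary from the article
	swiftRuntime = 43.0  // share of that binary that is the Swift runtime
)

// netSavings returns the net size advantage (in MiB) of the
// Alpine-plus-static-binary approach over an Ubuntu-plus-dynamic-linking one.
func netSavings() float64 {
	imageSavings := ubuntuImage - alpineImage // ~68 MiB saved by the smaller base
	staticCost := staticBinary - swiftRuntime // ~57 MiB lost to static linking
	return imageSavings - staticCost          // ~11 MiB net
}

func main() {
	net := netSavings()
	alpineTotal := alpineImage + staticBinary // ~108 MiB shipped in total
	fmt.Printf("net savings: %.1f MiB on a ~%.0f MiB image (roughly 10%%)\n",
		net, alpineTotal)
}
```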
Depending on how builds and deployments are done, there is a high likelihood that the lower userspace layers are much slower moving than the upper application layers. After your second build, dynamic linking wins, and with every build after that it pulls even further ahead.
Smart builds can make application deltas really small. I helped design a system where our several-hundred-MB monolith could be hot-patched with a layer of a few thousand kilobytes, and most builds were 10-20 MB. Obviously this wouldn’t have worked for a statically linked app.
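The layering scheme described above might look something like this hypothetical Dockerfile (image names, package choices, and paths are illustrative, not from the article):

```dockerfile
# Slow-moving lower layers: base OS plus shared runtime dependencies.
# These rarely change, so they stay cached on every node between deploys.
FROM ubuntu:24.04
RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates \
 && rm -rf /var/lib/apt/lists/*

# Fast-moving upper layer: only the (dynamically linked) app binary.
# A new deploy re-pulls just this layer -- megabytes, not hundreds of MB.
COPY build/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

Because layers are content-addressed, every image built this way shares the cached base layers, and only the final `COPY` layer differs between builds.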
Because, as much as it is great, musl isn't a drop-in replacement for glibc and requires extra work from maintainers to get software working on it (see PyTorch, the AWS CLI, etc.).
My smallest Go CLI is 1.6M. This is probably about as small as you can get in Go and still do something useful. Some of my other (larger) Go CLIs range from ~2.5M to ~6.5M. Go is not known for producing small binaries.
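For comparison, the usual first step for shrinking a Go binary is stripping the symbol table and DWARF debug info at link time. A minimal baseline sketch (exact sizes vary by platform and Go version, so no numbers are claimed here):

```go
// main.go -- a minimal baseline for measuring Go binary size.
//
// Build normally:               go build -o hello main.go
// Strip symbols and debug info: go build -ldflags="-s -w" -o hello main.go
//
// -s drops the symbol table and -w drops DWARF debug info; together they
// typically cut a noticeable chunk off the binary, but the result is still
// megabyte-scale because the Go runtime is statically linked in.
package main

import "fmt"

func greeting() string {
	return "hello, world"
}

func main() {
	fmt.Println(greeting())
}
```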
For fun, I made an executable hello world with SBCL and it came out weighing 40MB (no compression, of course); this also includes Unicode data and a complete compiler. Something's wrong here.
I thought Swift compiles to machine code? Can't it eliminate the unused stdlib code?
Well, I guess not, given how large the statically linked binaries are; but failing to strip unused code seems like a bigger contributor to these very large binaries than the stdlib being monolithic? (Not entirely sure what that means in this context.)