I would encourage you to write about it as well. It seems interesting and unconventional.
I used to tinker a lot with my systems, but as I've gotten older and my time has become more limited, I've abandoned a lot of it and now favor "getting things done". Though I still tinker with my systems and have my own workflow and setup, it is no longer at the level of recompiling the kernel with my specific optimizations, if that makes sense. I am now paid to "tinker" with my clients' systems, but I stay away from the unconventional there if I can.
I did reach a point where describing systems is useful, at least as a way of documenting them. I keep circling around NixOS but haven't taken the plunge yet. Containerfiles feel like an easier approach, but they (at least Docker's) seem designed around describing application environments as opposed to full system environments. So your approach is intriguing.
> It feels like containerfiles are an easier approach but they (at least docker does) sort of feel designed around describing application environments as opposed to full system environments.
They absolutely are! I actually originally just wanted a base container image for running services on my hosts that (a) I could produce a full source code listing for and (b) gave me full visibility over the BoM, and realized I could just ‘FROM scratch’ & pull in Gentoo's stage3 to basically achieve that. That also happens to be the first thing you do in a new Gentoo chroot, and I realized that pretty much every step in the Gentoo install handbook that you run after (installing software, building the kernel, setting up users, etc.) could also be run in the container. What are containers if not “portable executable chroots”, after all?

My first version of this build system was literally to copy / from the container to a mounted disk I manually formatted. Writing to disk is actually the most unnatural part of this whole setup, since no one really has a good solution for doing it without using the kernel; I used to format and mount devices directly in a privileged container, but now I just boot a QEMU VM in an unprivileged container and do it in an initramfs, since I was already building those manually too.

I found while iterating on this that all of the advantages you get from Containerfiles (portability, repeatability, caching, minimal host runtime, etc.) naturally translated over to the OS builder project, and since I like deploying services as containers anyway, there's a high degree of reuse going on vs. needing separate tools and paradigms everywhere.
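The core of the trick can be sketched as a Containerfile; the stage3 tarball name and the packages below are illustrative placeholders, not the actual build:

```dockerfile
# Start from an empty image and unpack a Gentoo stage3 tarball --
# the same first step as a manual chroot install. ADD auto-extracts
# local tar archives, so this populates the root filesystem directly.
FROM scratch AS stage3
ADD stage3-amd64-openrc.tar.xz /

# Every later handbook step -- installing software, building the
# kernel, adding users -- becomes an ordinary cached layer.
FROM stage3 AS system
RUN emerge-webrsync && emerge --quiet sys-kernel/gentoo-sources
RUN useradd -m admin
```

From there, the resulting image is the full OS tree, and getting it onto a disk is the only step that has to leave the container world.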
I’ll definitely write it up and post it to HN at some point, trying to compact the whole project in just that blurb felt painful.
Not what was mentioned by parent but I've been working on an embedded Linux build system that uses rootfs from container images: https://makrocosm.github.io/makrocosm/
The example project uses Alpine base container images, but I'm using a Debian base container for something else I'm working on.
Honestly this is just sorta a Tuesday for an advanced Gentoo user? There are lots of ways to do this documented on the Gentoo wiki. Ask in IRC or on the Forum if you can't find it. "Catalyst" is the method used by the internal build systems to produce images, for instance https://wiki.gentoo.org/wiki/Catalyst.
You point out a question that I spent months thinking about. I personally love Postgres; heck, I initially even had a version that would talk the Postgres wire protocol but with SQLite-only syntax. But then somebody pointed out my WordPress demo to me, and it was obvious that I had to support the MySQL protocol. It's just a protocol; the underlying technology will stay independent from what I choose.
I always run my agents in a container with the source code directory mounted. That way I can be reasonably confident letting them work without fearing destructive actions to my system. And I'm a git reset away from restoring the source code.
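For example, something along these lines (the image name is a placeholder, not a specific agent's invocation):

```shell
# Throwaway container: only the project directory is mounted, so the
# agent cannot touch the rest of the host filesystem.
docker run --rm -it \
  -v "$PWD":/workspace \
  -w /workspace \
  my-agent-image

# If it mangles tracked files anyway, recovery is one command:
git reset --hard
```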
I'm working on a multisig file authentication solution based on minisign. Does anyone know the dev's response regarding minisign's listed vulnerabilities? If I'm not mistaken, the authors' responses are not included in the vulnerability descriptions.
Because the authors found out about it by chance on Hacker News.
That said, these issues are not a big deal.
The first one concerns someone manually reading a signature with cat (which is completely untrusted at that stage, since nothing has been verified), then using the actual tool meant to parse it, and ignoring that tool’s output. cat is a different tool from minisign.
If you manually cat a file, it can contain arbitrary characters, not just in the specific location this report focuses on, but anywhere in the file.
The second issue is about trusting an untrusted signer who could include control characters in a comment.
In that case, a malicious signer could just make the signed file itself malicious as well, so you shouldn’t trust them in the first place.
Still, it’s worth fixing. In the Zig implementation of minisign, these characters are escaped when printed. In the C implementation, invalid strings are now rejected at load time.
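A minimal sketch of the escaping idea in Python (the function name is mine; the actual Zig and C implementations differ):

```python
def escape_untrusted(text: str) -> str:
    """Make an untrusted comment safe to print: replace every
    non-printable character (including ANSI escape bytes) with a
    visible \\xNN sequence, so a malicious signer can't drive the
    reader's terminal."""
    return "".join(
        ch if ch.isprintable() else f"\\x{ord(ch):02x}"
        for ch in text
    )

# An embedded ESC byte is neutralized instead of being interpreted:
print(escape_untrusted("untrusted comment: \x1b[2Jhi"))
# prints: untrusted comment: \x1b[2Jhi
```

The C approach of rejecting invalid strings at load time is even stricter: instead of rendering them safely, the file never parses at all.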
I finished the library providing all the features of a multisig file signing scheme. With that, it was easy to develop a CLI tool, and now I'm looking at developing the server component.
Looking forward to sharing a complete solution: Git-backed, decentralized, no account creation needed (auth by key pair), open source, and self-hostable!
>with swarm and traefik, I can define url rewrite rules as container labels. Is something equivalent available?
Yep, you define the mapping between the domain name and the internal container port as `x-ports: app.example.com:8000/https` in the compose file. Or you can specify a custom Caddy config for the service as `x-caddy: Caddyfile`, which lets you customise it however you like. See https://uncloud.run/docs/concepts/ingress/publishing-service...
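Based on the syntax above (service and image names are placeholders; see the linked docs for the exact schema), a compose file might look like:

```yaml
services:
  app:
    image: ghcr.io/example/app:latest
    # Map app.example.com to container port 8000, served over HTTPS.
    x-ports: app.example.com:8000/https

  custom:
    image: ghcr.io/example/custom:latest
    # Or point at a hand-written Caddy config instead.
    x-caddy: Caddyfile
```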
>if I deploy 2 compose 'stacks', do all containers have access to all other containers, even in the other stack?
Yes, there is no network isolation between containers from different services/stacks at the moment. Here is an open discussion on stack/namespace/environment/project concepts and isolation: https://github.com/psviderski/uncloud/discussions/94.
What's your use case and how would you want this to behave?
I like that I can put the containers to be exposed on the traefik-public network, and keep others like databases unreachable from traefik. This organisation of networks is very useful: it lets me make containers reachable across stacks, but also keep some containers in a stack reachable only from other containers on the same network in that same stack.
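A sketch of that layout as a compose file (service and image names are illustrative):

```yaml
networks:
  traefik-public:
    external: true   # shared with the traefik stack
  backend:
    internal: true   # only reachable from containers on this network

services:
  app:
    image: ghcr.io/example/app:latest
    networks: [traefik-public, backend]  # exposed via traefik
  db:
    image: postgres:16
    networks: [backend]                  # unreachable from traefik
```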
Secrets -- yes, it's being tracked here: https://github.com/psviderski/uncloud/issues/75 Compose configs are already supported and can be used to inject secrets as well, but there's no encryption at rest in that case, so it might not be ideal for everyone.
Speaking of Swarm and your experience with it: in your opinion, is there anything that Swarm lacks or makes difficult, that tools like Uncloud could conceptually "fix"?
Swarm is not far from my dream deploy solution, but here are some points that could be better, some of which are already better in Uncloud, I think:
- energy in the community is low, it's hard to find an active discussion channel of swarm users
- swarm does not support the complete compose file format, which is really annoying
- sometimes, deploys fail for unclear reasons (e.g. "network not found", but why, when it's defined in the compose file?) and work on the next try. This has never led to problems, but it doesn't feel right
- working with authenticated/custom registries is somewhat cumbersome
- having to work with registries to get the same image deployed on all nodes is sometimes annoying. It would be cool to have images spread across nodes.
- there's no contact between devs and users. I've just discovered uncloud and I've had more contact with its devs here than in years of using swarm!
- the firewalling is not always clear/clean
- logs accessibility (service vs. container) and container identification: when a container fails to start, it's sometimes harder than it needs to be to debug (especially when it's because the image is not available)
This is based on the Chromium Embedded Framework. I've always been surprised this kind of framework was not encouraged for Firefox by Mozilla (I've read they were even against it).