There are a couple of smaller "cloud" providers that don't ([1]). I suspect that's simply because they lack the infrastructure for traffic shaping, so they usually fall back on a "fair use policy" (whatever that means). But yes, ingress and egress traffic is essentially offered for free.
[1]: Another one is gridscale.io, also from Germany
Your face only authenticates you to your device, and only because that's what you chose. If you don't want that (e.g. your identical twin sister loves pranking you), you can just use a different authenticator. The remote web site deliberately has no idea your face was involved; it just knows your identity was verified on its behalf by the hardware storing your private key.
I can have a near-infinite number of passwords, but I only have one face and ten fingers. When all of my fingerprints are compromised and the system only allows fingerprint login, now what do I do?
> Can’t I just do all what docker-compose does with a Makefile
There's a lot of value in being able to run the same unified set of Docker Compose commands and have things work the same way everywhere, driven by the same YAML configuration.
But technically, yes, you could replicate that behavior. However, Docker Compose does a bunch of pretty nice things: when you run docker-compose up it intelligently recreates only the containers whose configuration changed and leaves the others untouched. Then there's the whole concept of override files, etc.
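To make those two points concrete, here's a rough sketch (the service names and images are made up) of a base file plus an override file that Compose merges automatically:

```yaml
# docker-compose.yml -- base file (illustrative services)
version: "3.8"
services:
  web:
    build: .
    ports:
      - "8000:8000"
  db:
    image: postgres:13
```

```yaml
# docker-compose.override.yml -- picked up automatically by `docker-compose up`
version: "3.8"
services:
  web:
    volumes:
      - .:/app          # mount source code for local development
    environment:
      - DEBUG=1
```

Change only the db section and re-run docker-compose up, and Compose recreates just the db container while web keeps running.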
It would take a fair amount of scripting to emulate all of that, along with getting signal handling right and cleanly piping the output of multiple services into one terminal. Even after years of development, Docker Compose still has issues with that.
Run docker-compose with --verbose one day just to see what really happens under the hood; Compose is doing a lot. I wrote about this a while back, and the post includes example output of running a single-service app with --verbose: https://nickjanetakis.com/blog/docker-tip-60-what-really-hap...
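If you want to try it yourself, something like this is enough (this is the Compose v1 CLI, where --verbose is a global flag that goes before the sub-command):

```sh
# The extra output is largely the Docker API calls Compose issues on your behalf
docker-compose --verbose up 2>&1 | less
```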
The benefit is standardization. If every team you join has a `docker-compose.yml` to run a local development environment, you know exactly what it is, how to add things to it, etc. If every team you join has a custom solution for an extremely lightweight development environment, then you need to be brought up to speed every single time.
It's a similar argument to "Why do you need Makefiles when you can just write a bunch of bash scripts plus a meta-script that decides which ones to run based on the stat results of a list of files?"
The recommended approach for managing groups of related containers with podman seems to be pods (hence the name), and those can be driven from a Makefile as you describe.
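For what it's worth, a bare-bones version of that might look like this (the pod, image, and target names are made up, and it skips all the recreate-on-change logic Compose gives you; note that make recipes need tab indentation):

```makefile
# Illustrative Makefile for driving a podman pod by hand
POD = myapp

.PHONY: up down logs

up:
	podman pod create --name $(POD) -p 8000:8000 || true
	podman run -d --pod $(POD) --name $(POD)-db docker.io/library/postgres:13
	podman run -d --pod $(POD) --name $(POD)-web localhost/myapp:latest

down:
	podman pod stop $(POD)
	podman pod rm $(POD)

logs:
	podman logs -f $(POD)-web
```

Containers in the same pod share a network namespace, so ports are published when the pod is created rather than per container.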
This is like asking "can't I just use a cellar and ice instead of a fridge?"
Yes, through a lot of scripting, you can emulate docker-compose. But compose actually handles a lot of things, to the point that manual scripting doesn't really make any sense.
I don't understand how people manage multiple related containers -without- compose. It makes things so much easier.