I think we all understand the usefulness of a road-warrior-style VPN. But it's not at all clear what k8s is adding here?
Anyway, on the topic of scalable UDP services, does anyone have any experience of load balancing a UDP service? Because UDP is connectionless there's no obvious way to make UDP packets "sticky". Are there any established practices that could help scale this k8s Wireguard service to 2 or more containers?
Load balancing UDP isn't too difficult. However, that's not the hard part here; the hard part is ensuring the routing happens correctly.
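For the stickiness question specifically, the usual trick is to hash on the client's source address so each client stays pinned to one backend. A toy sketch of the idea (the backend addresses are made up):

    import socket
    import threading
    import zlib

    # Made-up backend wireguard pods; in k8s you'd discover these via endpoints.
    BACKENDS = [("10.42.0.10", 51820), ("10.42.0.11", 51820)]

    front = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    front.bind(("0.0.0.0", 51820))

    sessions = {}  # client address -> socket connected to its pinned backend

    def pump_replies(client, upstream):
        # Relay backend replies to the original client.
        while True:
            front.sendto(upstream.recv(65535), client)

    while True:
        data, client = front.recvfrom(65535)
        upstream = sessions.get(client)
        if upstream is None:
            # A deterministic hash of the source IP pins this client to one backend.
            backend = BACKENDS[zlib.crc32(client[0].encode()) % len(BACKENDS)]
            upstream = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            upstream.connect(backend)
            sessions[client] = upstream
            threading.Thread(target=pump_replies, args=(client, upstream), daemon=True).start()
        upstream.send(data)

If memory serves, a k8s Service with sessionAffinity: ClientIP gives you roughly this for free, but as I said, affinity alone doesn't solve the return-path problem.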
A client currently has to hard-code its IP address, which means that if it can connect to more than one node, it's unclear which path a response from a server should take to get back to that client. Each VPN instance could run NAT, but then users would never be able to talk to each other.
Wireguard makes this significantly harder than, say, IPsec. WG has nothing to indicate when a client connects, and there is no dead peer detection, so you cannot tell when a client disconnects. I.e., scripting something to update a global routing table recording which server has which client is near impossible.
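To make that concrete: about the only signal WG exposes is each peer's last-handshake timestamp, so the best a sync script can do is poll it. A rough sketch (interface name assumed):

    import subprocess
    import time

    IFACE = "wg0"   # assumed interface name
    STALE = 180     # an active peer re-handshakes roughly every two minutes

    def recently_active_peers():
        # `wg show <iface> latest-handshakes` prints "<pubkey>\t<unix-ts>" per peer.
        out = subprocess.run(["wg", "show", IFACE, "latest-handshakes"],
                             capture_output=True, text=True, check=True).stdout
        now = time.time()
        return {pubkey for pubkey, ts in (line.split() for line in out.splitlines())
                if int(ts) and now - int(ts) < STALE}

    # A real version would push this set into shared routing state, but note
    # that it still cannot distinguish an idle client from a disconnected one.
    print(recently_active_peers())

And even that only tells you a peer was alive recently, not where it will send its next packet.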
I use wireguard daily for personal stuff. However, I can't think of how I would make it work in an active-active situation besides NAT, which I don't want.
Well, if you make it a DaemonSet, you could technically use the container as the network interface of other containers throughout the whole cluster. That said, I'm very happy that his example k8s deployment uses secrets.
I didn't know Ubuntu 20.04 backported WG into its 5.4 kernel. I spent a few hours yesterday fixing a node after breaking ZFS, because I'd upgraded to 5.6 for WG support. I feel rather silly now...
That's an interesting idea about using a unified network interface. Do you know how you might then get the right packets to the right containers/processes? Does that even matter with Wireguard?
Wireguard, inspired by Mosh, handles reconnections especially well. I guess TCP flows tunneled through UDP might be reset depending on which server (behind the load balancer) is handling them?
As for what k8s adds here, I don't know, but this thing does add one interesting fact to k8s knowledge: it can be useful to run a container that doesn't contain any process doing useful work ;)
Yes! When you think of Wireguard and Kubernetes, you should think of Lucas! He spends a lot of his free time experimenting with the combination of these two technologies. At KubeCon EU Barcelona, he gave a talk about cross-cluster networking using Wireguard: https://www.youtube.com/watch?v=iPz_DAOOCKA
A few people seem to be confused about why K8s is needed when you can just run this on the OS itself. I think they miss the point that this is not a guide to setting up Wireguard using K8s, but to setting up Wireguard if a K8s environment is all you have or want.
As the author notes: "you can run a road-warrior-style Wireguard server in K8s without making changes to the node."
Which makes this guide ideal for me. I run a lightweight K8s flavor (K3s, https://k3s.io/) as "configuration management" on my home server and home automation Raspberry Pis, because I don't want to mess with OS/userland configuration or the associated tools (Puppet, Ansible, hacked-together scripts, etc.), or to maintain any OS state manually.
For my setup I just flash K3s to disk or an SD card and let it join the cluster. Everything else is configured in Kubernetes and stored nicely as configuration files on my laptop, so I have an overview of everything and can modify/rebuild whenever I want.
You say you don't want to use Puppet or Ansible, but you are basically using Kubernetes manifests for the exact same reason: configuration management. I know it can be funny, and I totally support it, but I thought it should be pointed out anyway.
The problem I have with traditional configuration management is that in the end, even if it's declarative, you are still modifying an imperative OS/userland, so it will collect state at some point. Undoing changes with those tools is not trivial: you have to actively reverse them in your configuration, which turns nice CM code into a mess. Want to try out something quickly? You'd better not be afraid of it messing up your OS/userland, because there is no simple undo.
So since I'm doing isolation in containers/Docker already it's a small step to a lightweight Kubernetes. What Kubernetes gives me on top of that is that I can consider everything below the application layer as a declarative API.
That's maybe the theory, but in reality the best you can hope for from Ansible is idempotence between playbook runs, and there are no guarantees even there. Only in very simple setups can things be fully declarative in their totality.
I don't have much Puppet experience, but I can't count the times I've had to add steps to playbooks just to determine values used in one of the following steps; the only other option was to write a snowflake Ansible module. The individual steps/plays might be declarative, but the playbooks are not.
They look declarative but every Ansible playbook I have ever read or written has involved some imperative code. And even if you only use it in a declarative fashion, it doesn't change the fact that it's very much a step-by-step ordered list of things to install.
The declarative syntax is certainly a step up from shell scripts, but it's not as pure as K8s.
Ansible is only declarative at the action level; at the playbook level it's imperative. You can install and remove the same package within a playbook, and the outcome will depend on the order.
Puppet is fully declarative, but for me it lacks an easy way to undo changes. It would be nice if it worked like Terraform, which keeps a 'state' of all the changes it has made, so that when you remove a resource from your config it can 'undo' the change.
I still use Puppet (mostly with Bolt nowadays) for systems that don't fit Kubernetes, but there are fewer and fewer of them.
My main annoyance with Tailscale is the reliance on Google. I need to refresh my memory, but I think this makes a VLAN shared with other people impossible.
Honestly, that's the least of Tailscale's problems and potential catastrophes. You must have 1000% confidence in the security of their servers: if the public keys published and hosted on their servers have been tampered with, then the entire network is compromised. Also, if their service is down, you will be unable to connect to your network even if it is completely fine and working.
Tailscale is open source, it should be possible to set up your own server.
The hosted Tailscale product is meant for GSuite customers who want a peer-to-peer VPN with corporate SSO. Yes, you have to trust them: SSO login is inherently centralized. My company uses it, and it works great.
I am not really sure you understand how it works. There is no hosted vs. self-hosted version of it. You must connect your "open source" client/agent through the coordination servers hosted by them, which publish your public key to the other devices in your network, and you cannot skip their service. So Tailscale is effectively as open source as any commercial "open source" VPN client: it's entirely useless when not used with their commercial service, and users have zero control over the software except through their servers. The "open source" thing is great from a marketing and business perspective, because you get the open-source marketing and community benefits from unsuspecting users and enthusiast pros without giving away literally anything.
Exchange your keys ahead of time, preferably offline, and just run wireguard yourself. You may need a service discovery solution depending on your networking situation.
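If it helps, the key exchange part is trivial to script with the stock wg tool; a sketch (the peer names and addresses are made up):

    import subprocess

    def genkeypair():
        # `wg genkey` emits a private key; `wg pubkey` derives the public half from stdin.
        priv = subprocess.run(["wg", "genkey"], capture_output=True,
                              text=True, check=True).stdout.strip()
        pub = subprocess.run(["wg", "pubkey"], input=priv, capture_output=True,
                             text=True, check=True).stdout.strip()
        return priv, pub

    # Made-up peers; hand each machine the others' *public* keys out of band.
    for name, addr in [("laptop", "10.99.0.2/32"), ("phone", "10.99.0.3/32")]:
        priv, pub = genkeypair()
        print(f"# {name}: keep the private key on the device only")
        print(f"[Peer]\nPublicKey = {pub}\nAllowedIPs = {addr}\n")

Each device keeps its own private key and gets a [Peer] stanza for everyone else.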
You mean... like tailscale does? (e.g. They have devices registered with a name and you can access them. They're all given static IPs so an internal DNS server could simply resolve their names... kind of like service discovery)
Right, and Tailscale is a fine product for a variety of cases, but there are cases where Tailscale may not be a fit for you, whether due to the GSuite integration, different privacy constraints, or just not wanting to trust someone else with your VPN.
ZeroTier doesn't use wireguard though, which makes a difference. I have a private mesh of my family's computers on different networks, and tailscale/wireguard was blazingly fast. I ended up using ZeroTier anyway, because it had an Android client, and availability was more important to me than speed at this point.
For me, WireGuard isn't really a viable option because I want functional mDNS name resolution.
As a test, I did set up a vxlan tunnel through a wireguard tunnel (Linux to Linux) to prove that it's possible to get that working. However, I can't do that on something like a mobile Android client.
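For anyone curious, the Linux side is just a vxlan interface pinned to the wg device. Roughly (the VNI and addresses are made up):

    import subprocess

    # vxlan0 rides inside the existing wg0 tunnel, between the two tunnel IPs.
    for cmd in [
        "ip link add vxlan0 type vxlan id 42 local 10.0.0.1 remote 10.0.0.2 dstport 4789 dev wg0",
        "ip addr add 192.168.77.1/24 dev vxlan0",
        "ip link set vxlan0 up",
    ]:
        subprocess.run(cmd.split(), check=True)

That gives you an L2 segment over the tunnel, so multicast (and hence mDNS) works across it.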
This example uses k3s, which is the k8s distribution by the Rancher guys. Really cool distro - simple UX. It runs equally well on a Raspberry Pi or in the cloud.
The packages installed in the builder are essentially never used at runtime, since the builder itself never runs: it produces the files to install into the final container during the build phase, and then gets thrown away.
Well, there's road-warrior, and then there's road-warrior.
I've been trying out Glorytun; it does multi-path VPN with a wire format relatively similar to WireGuard's. Being mostly indoors due to the microbial boogaloo, I've not been trying it with the most interesting applications.
I looked into aggregating DSL connections in the past; a few years ago, I think, you had to get your own router for that as well as a VPS. OVH launched a service, "over the box", that does just that: they provide a router and a VPS where a VPN runs. They claimed you'd get total bandwidth equal to the sum of all connections, and I think the connections aren't required to be similar in bandwidth.