I wanted a way to spin up WireGuard on a VPS and shut it down when I didn't need it. This repo lets you do that, re-importing the server keys and client configs so you don't need to reconfigure all your clients when you bring the server back up again. That means you could cron it to shut down while you're asleep, etc.
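To illustrate the core trick (reusing a persisted server key rather than generating a new one, so existing client configs keep working), here's a rough Go sketch of the idea. It's not the repo's actual implementation, and the paths and interface name are just placeholders:

```go
// Sketch only: reuse a persisted WireGuard server key across rebuilds so
// client configs stay valid. Paths/interface name are hypothetical.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func serverKey(path string) (string, error) {
	// Reuse the key if we already have one from a previous run.
	if data, err := os.ReadFile(path); err == nil {
		return strings.TrimSpace(string(data)), nil
	}
	// Otherwise generate a fresh one with the wg CLI and persist it.
	out, err := exec.Command("wg", "genkey").Output()
	if err != nil {
		return "", err
	}
	key := strings.TrimSpace(string(out))
	return key, os.WriteFile(path, []byte(key+"\n"), 0600)
}

func main() {
	key, err := serverKey("/etc/wireguard/server.key")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	conf := fmt.Sprintf("[Interface]\nPrivateKey = %s\nAddress = 10.0.0.1/24\nListenPort = 51820\n", key)
	if err := os.WriteFile("/etc/wireguard/wg0.conf", []byte(conf), 0600); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Bring the interface up; a cron job could run `wg-quick down wg0`
	// (or destroy the VPS entirely) overnight.
	if err := exec.Command("wg-quick", "up", "wg0").Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```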
There are potentially quite a few benefits of being able to spin up clusters on demand [1]:
* Fully reproducible cluster builds and deployments.
* The type of cluster can be an implementation detail, making it easy to move between e.g. Minikube, Kops, EKS, etc. After all, K8s is just a runtime.
* Developers can create temporary dev environments or replicas of other clusters
* Promote code through multiple environments, from local Minikube clusters to the cloud
* Version your applications and dependent infrastructure code together
* Simplify upgrades by launching a brand new cluster, migrating traffic and tearing the old one down (blue/green)
* Test in-place upgrades by launching a replica of an existing cluster to test the upgrade before repeating it in production
* Increase agility by making it easier to rearchitect your systems - if your cluster is a pet, modifying the overall architecture can be painful
* Frequently test your disaster recovery processes as a by-product for no extra effort (sans data)
Sugarkube [1] is an open source alternative to using Ansible for working with Kubernetes clusters. It's focussed squarely on Kubernetes and, being written in Go, runs much faster than Ansible, while still supporting templating & hierarchical variables. It can optionally create clusters (using Minikube, Kops or EKS) as well as install and delete your applications.
When Sugarkube installs your apps, it'll create all the necessary cloud infrastructure you define. It also respects dependencies, so e.g. your Wordpress sites will only be installed once there is a database to back them. You can also choose to only install a subset of your applications (e.g. so you don't need to bother installing the monitoring stack if you just want to work on Jenkins).
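To make the dependency handling concrete: conceptually, installing apps in dependency order is a topological sort over the app graph. The Go sketch below is not Sugarkube's actual code or config format - the app names and data structure are made up purely to show the idea:

```go
// Minimal sketch: install apps so that dependencies always come first,
// e.g. "wordpress" only after "mysql". Names are made up, not Sugarkube's API.
package main

import "fmt"

// deps maps each app to the apps it depends on.
var deps = map[string][]string{
	"mysql":     {},
	"wordpress": {"mysql"},
	"ingress":   {},
	"jenkins":   {"ingress"},
}

// installOrder returns apps ordered so dependencies precede dependents
// (a depth-first topological sort; assumes the graph has no cycles).
func installOrder(deps map[string][]string) []string {
	var order []string
	done := map[string]bool{}
	var visit func(app string)
	visit = func(app string) {
		if done[app] {
			return
		}
		done[app] = true
		for _, d := range deps[app] {
			visit(d)
		}
		order = append(order, app)
	}
	for app := range deps {
		visit(app)
	}
	return order
}

func main() {
	for _, app := range installOrder(deps) {
		fmt.Println("install", app) // in reality: run helm/terraform/etc. for this app
	}
}
```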
Sugarkube can also tear everything down again leaving you with the same clean slate that you started with, so you can use it to spin up and tear down clusters either locally or remotely to your heart's content.
Check out the sample project [2] and accompanying tutorials [3] which demonstrate running a web cluster for a couple of Wordpress sites, and an Ops cluster containing Jenkins, monitoring and Keycloak. Sugarkube supports several advanced use cases, including creating a Kops cluster, installing Keycloak into it, then reconfiguring the Kops cluster to authenticate against the Keycloak instance running inside the cluster. That's all with one command (after you've created a workspace).
I'm happy to answer any questions since I wrote it :-)
Nice project!
Please correct me if I'm wrong, but it looks like Sugarkube is mainly focused on Minikube or the cloud (e.g. Kops etc.), right?
We chose Ansible as our primary focus is on-prem, setting up Linux VM parameters for all machines in a cluster, and Ansible sounds like a fit.
On-prem "consumable" clusters are what we also do (that's why the project accepts any size, from a 1-VM cluster to clusters of thousands of VMs).
We should look into Sugarkube for AWS cluster activities - thanks for sharing!
Sugarkube orchestrates other tools. In a case like this, where you have to actually SSH into machines to install kubeadm, Ansible may be a better choice for actually creating the clusters (since there's probably no single binary that does that).
However, Sugarkube can also be used as a ready-made release pipeline for applications. In general, using Sugarkube to release applications gives you a standardised way of installing apps onto different clusters (local/on-prem/cloud), keeping your options open for the future. So you could use it to promote applications through various dev/testing/staging clusters to production, some of which may be on-prem and some on AWS.
If you're looking to create a common set of applications in your project (monitoring, CI/CD, ingresses, etc.), using Sugarkube would allow users to install them into clusters regardless of how they were created (either by your project via kubeadm, or by Kops/EKS with or without Sugarkube, etc), so your applications may be more generally useful to more people.
Hi, I'm not sure if you saw my comment below, but this is 100% the use case Sugarkube [1] was designed for. Depending on where you are in setting things up, it might save you time to give it a try. There are some non-trivial tutorials [2] and sample projects you can use to kickstart your development. It currently only works with EKS, Kops and Minikube though, so it wouldn't be suitable if you're using something else to create your K8s cluster.
A major problem I've seen using Kubernetes is it's difficult to bring up an entire cluster from scratch, play with it and tear it down. So I wrote Sugarkube [1] which lets you launch an ephemeral cluster, install all your applications into it (including creating any dependent cloud infrastructure) and tear it down again. This means each dev can have their own isolated K8s cluster that's in parity with production. And you can release the same code through different environments. In fact your prod environments can become ephemeral too - instead of doing complicated in-place upgrades you could spin up a sister cluster, direct traffic to it and then tear down the original. Or you could spin up a cluster to test the upgrade before repeating it in your live environment.
Based on my own experience I believe ephemeral clusters can solve a huge number of problems dev teams face using Kubernetes.
Sugarkube currently supports launching clusters using Minikube, EKS and Kops, and we'll be adding provisioners for GKE and Azure in the future. Sugarkube also works with existing clusters, so you can use it to deploy your applications. It's a sane way of managing how to ship infrastructure code (e.g. Terraform configs) with the applications that need it.
Sugarkube also supports parameterising applications differently per environment - you could almost view it as something like Ansible, but written with Kubernetes in mind. And it's in Go, so it's way faster than Ansible (an early POC was actually written in Ansible and it was very slow).
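To give a concrete (if simplified) picture of per-environment parameterisation: it boils down to layered settings where more specific levels override more general ones. This Go sketch uses made-up keys and flat string values, and is not Sugarkube's actual merge logic:

```go
// Minimal sketch of hierarchical config: later (more specific) layers
// override earlier (more general) ones. Keys/values are hypothetical.
package main

import "fmt"

// merge applies layers in order, with later layers overriding earlier ones.
func merge(layers ...map[string]string) map[string]string {
	result := map[string]string{}
	for _, layer := range layers {
		for k, v := range layer {
			result[k] = v
		}
	}
	return result
}

func main() {
	defaults := map[string]string{"replicas": "1", "domain": "example.local", "logLevel": "info"}
	dev := map[string]string{"domain": "dev.example.com"}
	prod := map[string]string{"replicas": "3", "domain": "example.com", "logLevel": "warn"}

	fmt.Println(merge(defaults, dev))  // dev inherits defaults, overrides the domain
	fmt.Println(merge(defaults, prod)) // prod overrides replicas, domain and log level
}
```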
I've just finished intro tutorials for deploying Wordpress to Minikube [2] and EKS [3]. I'd be keen to hear feedback. We just tagged our first proper release earlier this week and it's ready to try now.
I definitely agree with you - my team switched to an "ephemeral cluster" model which allows us to very quickly spin up an entirely new cluster and drain traffic to it as needed.
It's something that we've ended up implementing on our own with a lot of Terraform, but that's had its own obstacles and is something of a small maintenance burden. I'll be taking a look at sugarkube!
No, more than that. To Sugarkube, the actual type of cluster (Minikube, EKS, Kops) is just an implementation detail. Actually, when you think about it, even the region you deploy in or your cloud provider are really just implementation details. Provided your applications are portable, you could create a local Minikube cluster that just includes the subset of stuff from your cluster that you care about for whatever you're working on. E.g. imagine you're in an ops team and the ops cluster runs monitoring stuff and Jenkins; you could create a local cluster just running Jenkins and its dependencies (e.g. Tiller, cert-manager, etc.). This is possible because Sugarkube understands your application's dependencies [1].
But the cool thing is that you could then go to the cloud whenever you wanted to. If you have a hard dependency on some cloud infrastructure, you could just use Sugarkube to spin up an isolated dev cluster on EKS, for example. The EKS dev cluster could look the same as your prod cluster in terms of versions of software installed, but use fewer, smaller EC2s, and again, perhaps run just a subset of the applications in your overall ops cluster.
Once you've developed in isolation, you could then deploy to a staging cluster which could again have been brought up - either just for this task, or more likely at the start of the day/week. Finally you could then promote your updated version of Jenkins into your prod cluster. For major upgrades to the prod cluster you could use Sugarkube to spin up a brand new sister cluster and then start to gradually migrate prod traffic over to it. Once all the traffic is going to your new cluster, just tear down the old one. If there's a problem, back out by sending all traffic back to the old cluster. Of course this last ability depends on something at the perimeter like Akamai (I think AWS have something similar?), and is a lot easier if your state is outside the cluster (e.g. in hosted databases, S3, etc.) but it'd be doable.
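As a toy illustration of that cut-over, the loop below gradually shifts a traffic weight to the new cluster and backs out if health checks fail. setWeights and newClusterHealthy are placeholders for whatever your edge (weighted DNS, Akamai, a load balancer, etc.) actually exposes - it's a sketch of the process, not a real integration:

```go
// Toy blue/green cut-over: shift traffic to the new cluster in steps,
// backing out if it looks unhealthy. The edge integration is stubbed out.
package main

import (
	"fmt"
	"time"
)

// setWeights would update weighted DNS / CDN / LB rules in a real setup.
func setWeights(oldPct, newPct int) { fmt.Printf("old=%d%% new=%d%%\n", oldPct, newPct) }

// newClusterHealthy would probe /healthz, error rates, etc. in a real setup.
func newClusterHealthy() bool { return true }

func main() {
	for newPct := 10; newPct <= 100; newPct += 10 {
		setWeights(100-newPct, newPct)
		time.Sleep(1 * time.Second) // in reality: wait out DNS TTLs / metrics windows
		if !newClusterHealthy() {
			setWeights(100, 0) // back out: send everything to the old cluster
			return
		}
	}
	// All traffic is now on the new cluster; safe to tear down the old one.
}
```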
On projects I've worked on I've seen so many problems that basically came down to long-lived clusters where someone set them up a year ago and left/forgot how they worked. Or where upgrades weren't done because performing them was a nightmare, etc. 100% automation of ephemeral clusters just solves all those problems.
SEEKING WORK | Remote Part-time/Adhoc | Golang/Python/Java/devops
I'm seeking some adhoc/short-term Golang/Python/devops (AWS/K8s) contracts over the summer while I work on other projects. Perhaps I can help with code reviews, architectural advice, as a sounding board, small scripts, etc. I've spent 4 years working as a big data engineer (ETL + analysis with Hadoop/Java, Spark/Scala, Google Cloud/AWS), another 4 years in devops (AWS/Jenkins/Terraform) and several years as a full-stack web developer (Python/Django). I've most recently led a devops/SRE team building a K8s platform on AWS.
That's one problem Sugarkube [1] aims to solve. But it goes further and allows you to spin up K8s clusters from scratch and install all your stuff onto them. It can bootstrap an AWS account first (e.g. to create S3 buckets for Kops/Terraform), handles templating files, and gives you hierarchical configuration. It's still under development but should be very flexible once it's at MVP in a month or so.
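As a rough illustration of the file-templating part, here's what rendering a manifest from per-environment values might look like using Go's standard text/template - the template and values are made up, and this isn't how Sugarkube does it internally:

```go
// Sketch: render a manifest from per-environment values using text/template.
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Name }}
spec:
  replicas: {{ .Replicas }}
`

func main() {
	tmpl := template.Must(template.New("deployment").Parse(manifest))
	// In a real pipeline these values would come from the hierarchical
	// config for whichever environment you're targeting.
	values := map[string]interface{}{"Name": "jenkins", "Replicas": 2}
	if err := tmpl.Execute(os.Stdout, values); err != nil {
		panic(err)
	}
}
```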
The aim of all this is to allow you to avoid tricky in-place cluster upgrades and to just spin up a new cluster, direct traffic to it and tear down the old one. An extra benefit is that it would allow you to give each dev their own k8s cluster in parity with your live envs, but they could select only a subset of charts.
Check out the sample project [2] for more of an idea about how to use it (but the actual sample is probably broken since it's under heavy development at the moment).