>Well, hosting a blog from home is probably not a great idea from a practical perspective.
I'm not a hater, but maybe he perceives it as not practical here because of all the fun, unnecessary complexity. Keeping something like this going for more than a couple of years would require complex sysadmin maintenance for updates (which is fun till it isn't).
But if you just install nginx from your system repositories, put your Hugo-generated .html and media files in your www dir, and forward port 80 on your router to your server's LAN IP on port 80, it's going to keep working, without input and without security issues, until your distro stops working. For most people, most of the time, it works great. And it's okay if it doesn't work some of the time.
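A minimal sketch of what the nginx side of that might look like, assuming the Hugo output has been copied to /var/www/blog (the path and domain are just placeholders):

```nginx
# /etc/nginx/conf.d/blog.conf -- illustrative; paths and domain are placeholders
server {
    listen 80;
    server_name blog.example.com;

    root /var/www/blog;        # Hugo's generated public/ copied here
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```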
Hosting your blog from home, in whatever room, is a great idea and very practical.
(Author here) I mostly worry about security for this. If you have nothing private on your network it's probably fine, but if you have, say, a NAS that isn't using proper authentication (pretty common), an OS or nginx vulnerability could end up exposing stuff.
Of course there are much simpler ways to lock things down also :)
Obviously your server would be on a DMZ vlan, probably on its own. Set it to automatically take security updates every night and aside from some zero days I'm not sure what security issues you'd have.
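On a Debian/Ubuntu box (an assumption; other distros have their own equivalents), the nightly-security-updates piece is roughly the unattended-upgrades package plus a couple of apt settings:

```
# sudo apt install unattended-upgrades
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```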
Then why are you intentionally adding another dozen attack surfaces and bleeding edge stuff constantly full of exploits? nginx remote exploits that matter are a once in a decade thing. Your setup is incomparably more insecure than nginx and a port forward.
I've been running a static webserver from my home for more than 20 years now. By avoiding dynamic languages, databases, and buzzwords, I've never been hacked. Never had any issue.
Thank you for a great article! I recently took the plunge of building-and-hosting a blog too - but, due to security concerns, I took the entirely opposite approach of making it fully cloud-based (Git repos for infra and for content -> AWS CodePipeline, Hugo during CodeBuild -> S3 and CloudFront). This was sadly ironic since I'd mostly wanted to blog about my experiences with homelabbing, but I didn't trust myself to open a port to the outside world. Thanks to your blog I might finally learn Kubernetes and use a Cloudflare tunnel to implement a similar truly-selfhosted blog!
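For anyone curious what that pipeline roughly looks like, here's a sketch of a CodeBuild buildspec along those lines, not the actual config; the bucket name, distribution ID, and Hugo install step are placeholders:

```yaml
# buildspec.yml -- illustrative sketch
version: 0.2
phases:
  install:
    commands:
      - apt-get update && apt-get install -y hugo   # or pull a prebuilt Hugo binary
  build:
    commands:
      - hugo --minify
  post_build:
    commands:
      - aws s3 sync public/ s3://my-blog-bucket --delete
      - aws cloudfront create-invalidation --distribution-id EXAMPLE123 --paths "/*"
```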
I've done something similar to the author but with only ufw and port forwarding.
My closet server is set up with a cron job that runs daily and updates my domain's dns on Cloudflare to my currently allocated dynamic ip.
Port forwarding sends the 80/443 requests to my closet server.
Closet server only accepts 80/443 requests from Cloudflare's published ip addresses via ufw rules so that all traffic must pass through Cloudflare to be accepted.
Nginx on closet server routes it to the appropriate internal port for that service.
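A rough sketch of the two moving parts, assuming a Cloudflare API token plus the zone and record IDs are already at hand (all values are placeholders, not my exact setup):

```sh
#!/bin/sh
# Daily cron job: point the Cloudflare A record at the current public IP.
IP=$(curl -s https://api.ipify.org)
curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"blog.example.com\",\"content\":\"$IP\",\"proxied\":true}"

# ufw: only accept 80/443 from Cloudflare's published ranges.
for range in $(curl -s https://www.cloudflare.com/ips-v4); do
  ufw allow from "$range" to any port 80,443 proto tcp
done
```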
Maybe someone has broken into my home network, but I hope this solution works relatively well!
I would say you don't really need Kubernetes for this sort of setup (I already was running all the K8s stuff which is why I went with it, but docker compose or even just running things in systemd without containers would work too).
I think the main thing is to have some sort of network isolation (like a separate VLAN or a server that blocks outbound traffic) between stuff that's exposed to the internet and stuff that's private on the network.
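For the Docker Compose route, a minimal sketch of serving a static site (image, ports, and paths are just illustrative):

```yaml
# docker-compose.yml -- illustrative static-blog sketch
services:
  blog:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      - ./public:/usr/share/nginx/html:ro   # Hugo output mounted read-only
    restart: unless-stopped
```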
I have one small VPS with access to a WireGuard network, and a rule on it to forward certain traffic to a virtual machine running on my desktop. Fairly easy to set up tbh (and I add/remove devices constantly). I am not a networking person and my understanding of iptables is shaky, but I also ran a similar setup with Nginx. Could also use Tailscale, but I found the WireGuard CLI very easy. Straightforward to add more networks and isolate stuff from each other (tbh, I only run one network that doesn't isolate my web-facing stuff from other stuff I run privately... as I said, I am not a networking guy, so I have no idea how bad of an idea this is, given that the only way in is traffic on certain ports being forwarded).
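Roughly how the forwarding piece can work on the VPS side; this is a sketch under the assumption that the home VM is 10.0.0.2 on the tunnel and the public interface is eth0, not necessarily the exact rules described above:

```sh
# On the VPS: DNAT public port 443 across the WireGuard tunnel to the home VM.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING  -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2:443
iptables -t nat -A POSTROUTING -o wg0  -p tcp -d 10.0.0.2 --dport 443 -j MASQUERADE

# wg0.conf on the VPS just lists the VM as a peer, e.g.
#   [Peer]
#   PublicKey  = <vm-public-key>
#   AllowedIPs = 10.0.0.2/32
```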
Huh - I'm using Wireguard as my VPN into my home network (the only port that I have opened to the outside world), but I didn't know that you could also use it to route incoming requests to a certain VM. There's always something else to learn! Thank you :)
Ah, I see - I misread and got the impression that `cloudflared` could only connect to Kubernetes pods, but I see from reading the docs[1] that it can connect to traditional apps-on-ports as well. I'll have a poke around - thanks again!
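For the apps-on-ports case, the cloudflared config is just ingress rules mapping hostnames to local ports; a sketch (tunnel ID, hostname, and port are placeholders):

```yaml
# ~/.cloudflared/config.yml -- illustrative
tunnel: <tunnel-id>
credentials-file: /home/user/.cloudflared/<tunnel-id>.json
ingress:
  - hostname: blog.example.com
    service: http://localhost:8080
  - service: http_status:404   # required catch-all rule
```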
I wouldn't host anything I cared about at home, because residential broadband is unreliable as heck, with low caps on egress that I'm already hitting half of in an average month thanks to screen sharing and video conferencing while working from home. I don't want Spectrum throttling me so that suddenly I can't work any more because someone found my blog and put it on Hacker News.
If all you're ever doing is hosting a static blog, then yeah, you don't need something like Kubernetes; however, if you want to host other services (e.g., database, authentication, comments, etc) then it quickly behooves you to build on some higher-level platform or else you'll end up building your own (poorly) and missing out on all of the experience, documentation, and tooling that are publicly available for Kubernetes, Docker, etc.
Sorry, but we did all of that 20 years ago, and those approaches remain a lot simpler for basic use cases. You know, like serving a database-backed site at 100k dynamic pages/hour. (That's what I was doing 20 years ago, how about you?)
With, of course, a giant exception for two cases. The first is, as with the author, if the purpose of using Kubernetes is to learn how to use Kubernetes. The second is if you are aware of the tradeoffs and have specific reason to say that Kubernetes really is appropriate for your circumstance.
And if you think that Kubernetes is always the right approach, then you clearly are NOT aware of the tradeoffs.
> Sorry, but we did all of that 20 years ago, and those approaches remain a lot simpler for basic use cases.
Yeah, we did, and they were complicated then and they're complicated now. The difference is some of us have decades of experience which make them feel simpler.
> And if you think that Kubernetes is always the right approach, then you clearly are NOT aware of the tradeoffs.
I was very explicit that Kubernetes isn't always the right approach. Allow me to quote myself: "If all you're ever doing is hosting a static blog, then yeah, you don't need something like Kubernetes". My point is that things like Kubernetes and Docker Compose are increasingly viable defaults for nontrivial cases. In other words, if you know you're going to have a bunch of services to manage, it's a lot easier for most people (i.e., those lacking decades of experience) to manage them with the aforementioned tools rather than trying to build an equivalent "platform" from scratch.
Kubernetes wraps several layers of abstraction around the old way of doing things. There is no world in which the operation of the site is in any way clarified by involving Kubernetes. Kubernetes also adds performance overhead. There are things which are easier with Kubernetes. But only after you've climbed a long learning curve.
From what I've seen, even people without "decades of experience" find the non-abstracted system easier to understand and reason about than the Kubernetes version. And the difference in ease is dramatic.
Examples where it makes sense include:
1. You need to deploy multiple independent systems with similar configurations. (Even then, consider Ansible.)
2. You have to deploy different clusters of connected systems with related, but different, components.
3. You need to scale up and down what needs to be deployed. (Except don't try to use autoscale. That only works in marketing blurbs.)
4. Somebody else has set it up and you never need to actually understand it. (Good luck if you need to debug.)
But there is no reason to introduce Kubernetes because you need a database, a few webservers, front end proxy, failover, etc. And if you are using Kubernetes for that, odds are that you'll save yourself a world of headaches (and potential security holes!) by migrating away. No matter how much the "chief architect" may claim otherwise. I've seen how this plays out in practice.
I think we have starkly different experiences then. I've been part of several orgs that started out doing it "the old way", and anything to do with servers was so difficult that it was relegated to a team of experts. Then we introduced Kubernetes, and while there was a learning curve, the development teams were able to own significantly more of the infrastructure, with the infrastructure experts owning management of the core and a few convenience operators. Stories like this are echoed all over; it's virtually the DevOps narrative: container orchestration enables development teams to own their own infrastructure, with a dedicated team maintaining a common platform.
Moreover, for single-server use cases, Docker Compose eschews a lot of the complexity that Kubernetes brings for HA purposes while also masking over much of the complexity of managing bare servers.
The Kubernetes (and other container orchestrators) learning curve is substantial, but it masks a lot of incidental complexity that operators would otherwise need to understand to do it themselves.
The fact that you're talking about organizations large enough to have multiple development teams which serve their stuff separately puts you squarely in my case #2.
A 50 person company with <10 developers has very different problems.
No, but the operation of Kubernetes is clarified by actually running Kubernetes. Hacking, or "homelab"-ing, is doing it for the pleasure or the edification of it.
Running Wordpress on a single server is relatively painless. Document the setup and configure automatic backups of the relevant configs, www folder, and database.
I have another server ssh in every night and drop everything into a folder in my Dropbox account. That replicates to computers that have incremental backups and less granular off-site backups.
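A minimal sketch of that kind of nightly pull, run from cron on the backup machine; hostnames, paths, and the database name are placeholders:

```sh
#!/bin/sh
# Nightly pull-style backup: dump the database and tar the site over ssh,
# landing in a Dropbox-synced folder so it replicates from there.
STAMP=$(date +%F)
DEST="$HOME/Dropbox/blog-backups"
ssh blog-host "mysqldump --single-transaction wordpress | gzip" > "$DEST/db-$STAMP.sql.gz"
ssh blog-host "tar czf - /var/www /etc/nginx" > "$DEST/site-$STAMP.tar.gz"
```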
> ... and missing out on all of the experience, documentation, and tooling that are publicly available for Kubernetes, Docker, etc.
After working with Docker and Kubernetes for some years, I consider that a huge win. Heck, even a cgi-bin script hacked together in Perl is better documented and more reliable!
Running a server to do database, authentication, comments, etc. is relatively simple. If you're trying to do scaling or whatever, sure. But to do all of that on a single vanilla server is pretty trivial, especially if you're using a CMS or some other software on it.