I recently switched from TrueNAS to Synology for my NAS. TrueNAS had served me well, but I no longer had the time to manage it effectively.
While I was at it, I decided to overhaul my entire home lab, which had gotten pretty messy over the years. As part of this overhaul, I will be discarding my old TrueNAS device due to its high power consumption and bulk. I will keep a NUC and a NUC alternative with slightly lower specs, but with 2+ LAN ports.
With this configuration, my plan is to use Proxmox on the NUC as the primary system and use the second NUC as a backup. The backup NUC, however, has a dedicated connection to the Synology NAS via multiple LAN ports, so it would be ideal for storage-intensive tasks.
My primary use case will be running containers and a few VMs for services like Git, Pi-hole, backups and more. Although my Synology NAS supports running containers and VMs, I prefer to keep things separate. I’ve already taken care of my infrastructure needs and won’t be hosting pfSense or similar services.
Since I haven’t looked into best practices lately, I’m very interested in learning new technologies like Ansible for automation.
I’m especially interested in understanding how to automate installs and updates while working with containers and VMs. I am considering whether to stay with Proxmox or go for a simpler distribution like Debian, Fedora or others.
Thanks for your insights!
Ansible is definitely worth exploring. As usual, there are a million tutorials on YouTube that will walk you through a quick deployment and some basic playbooks, but if you want a fuller understanding and deeper overview, I’d look at Techworld with Nana’s Ansible course. She’s extremely thorough, and she has a bunch of other videos that might give you some ideas on how to clean up your lab, especially since you’re starting fresh.
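To give you a taste of what a playbook looks like, here’s a minimal sketch that patches Debian-based hosts; the `homelab` inventory group is a made-up name, so adjust it to your own inventory:

```yaml
# update.yml -- minimal patching playbook (a sketch, not battle-tested)
- name: Patch Debian-based hosts
  hosts: homelab          # hypothetical inventory group
  become: true
  tasks:
    - name: Upgrade all apt packages
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist

    - name: Check whether a reboot is required
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_required

    - name: Reboot if the upgrade asked for it
      ansible.builtin.reboot:
      when: reboot_required.stat.exists
```

Run it with `ansible-playbook -i inventory.ini update.yml` and you’ve automated the boring part of updates; a cron job or systemd timer can take it from there.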
Just wanted to add that you can get Jeff Geerling’s book “Ansible for DevOps” for free right now:
Thanks! How steep would you say the learning curve for Ansible is? I can dedicate a week just to it, since I have a little bit of time at the moment. My goal is to automate everything as much as possible and make the whole system low maintenance. I don’t have the time anymore to fix stuff or maintain the whole thing multiple times a month.
I know self-hosting always comes with the risk of everything crashing down, but that’s a risk I’m taking and trying to mitigate as much as possible with best practices etc.
Little clusters of NUCs have become a really common way to run small Kubernetes clusters at home. I recently rebuilt mine (still using a bulky, power-hungry box like the one you’re tossing) and have been very happy with it. Everything is really stable, containers that misbehave are automatically destroyed and replaced, and updates are a breeze because everything lives in code/git.
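For what it’s worth, the destroy-and-replace behavior mostly comes down to liveness probes on your Deployments. A rough sketch (the app name, image and port are just examples):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami                  # hypothetical example app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami:latest
          ports:
            - containerPort: 80
          livenessProbe:        # kubelet kills and restarts the container when this fails
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
```

Check that in alongside the rest of your manifests and the cluster keeps itself honest.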
What would be the benefit of running k8s at home, apart from getting hands-on experience with it, compared to docker-compose on one or two nodes? Or Docker Swarm? Unless there is a big load of self-hosted services, which I get, plus the auto-healing from k8s as the orchestrator.
Just curious, not taking a swing. Thanks!
K8s really shines when you start hosting more stuff, even on a single node. I definitely recommend giving k3s a try. I wouldn’t recommend it for only a couple of services though.
Is it overkill? Yes, applying docker-compose manually also works. But then you still have to make your reverse proxy, your certificates and all your services work together. You can write Ansible for it, but then you end up with a lot of custom code to maintain and you still don’t get all the nice features.
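To make that concrete, here’s roughly what the manual glue looks like with compose plus Traefik; the hostnames, email and cert resolver name are assumptions you’d swap for your own:

```yaml
# docker-compose.yml -- sketch of the proxy/cert wiring you maintain by hand
services:
  traefik:
    image: traefik:v2.11
    command:
      - --providers.docker=true
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=you@example.com   # placeholder
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.tlschallenge=true
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt

  pihole:
    image: pihole/pihole:latest
    labels:
      - traefik.enable=true
      - traefik.http.routers.pihole.rule=Host(`pihole.example.com`)  # placeholder host
      - traefik.http.routers.pihole.entrypoints=websecure
      - traefik.http.routers.pihole.tls.certresolver=le
      - traefik.http.services.pihole.loadbalancer.server.port=80
```

Every new service means another block of labels like that, which is exactly the per-service glue an ingress controller handles for you in k8s.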
For me the killer feature was Flux. Your code, configs and even secrets live in git and get auto-deployed and auto-healed. It also has other features, such as operators that fetch Helm charts from other repos and apply your config to them.
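As an example of the Helm side, this is roughly what a Flux HelmRepository plus HelmRelease pair looks like; the Grafana chart and the values override are just illustrations, and the API versions depend on your Flux release:

```yaml
# A HelmRepository source plus a HelmRelease with your own values overlaid.
# API versions here match recent Flux v2 releases; run `flux check` on yours.
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: grafana
  namespace: flux-system
spec:
  interval: 1h
  url: https://grafana.github.io/helm-charts
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: grafana
  namespace: monitoring
spec:
  interval: 30m
  chart:
    spec:
      chart: grafana
      sourceRef:
        kind: HelmRepository
        name: grafana
        namespace: flux-system
  values:
    adminUser: admin            # example override, lives in git with everything else
```

Commit that to the repo Flux watches and the operator installs and upgrades the chart for you, config included.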
Thanks for the reply! Flux is pretty good. I’m using ArgoCD, but both basically follow GitOps principles.
I might give k3s a look and see how it all works together.