My Homelab

At the moment I have five servers running, three of which make up my main PVE cluster, one single-board computer that monitors all of my essential services as well as my physical hosts, and my edge router, which runs OPNsense. I plan to migrate the router to VyOS, since it is based on Debian and has better drivers for Mellanox NICs.

My beloved rack cabinet

From top to bottom I have:

  • Three ISR4331 routers and two 3750-X switches for CCNA and CCNP practice
  • A 48-port 3750-X (hidden under the ISR4331s) and an N9K-C93180LC-EX that serves as my top-of-rack (ToR) switch.
  • An R420 for testing
  • An R430 that runs my OPNsense firewall; it handles all of my WireGuard and site-to-site VPNs
  • Then I have an R630, an R720 and an R740 with an MD1200 attached; together they form my PVE cluster
  • Under these I have an R720 that previously ran TrueNAS but is now decommissioned.
  • Lastly I have a 4U case with a PC inside and an R620 that is looking for a new home.

I also have a NEC OPS SBC that monitors my network's core functionality. It keeps an eye on my hosts, on various temperatures via my custom API, and on my switches and routers via SNMP, among other things.
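As a rough illustration of the SNMP side, here is a minimal polling sketch in Python, assuming the classic synchronous pysnmp hlapi; the device address, community string and the sysUpTime OID are placeholders, not the actual monitoring code running on the SBC.

```python
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, getCmd,
)

def poll_sysuptime(host: str, community: str = "public") -> str:
    """Fetch sysUpTime (1.3.6.1.2.1.1.3.0) from a device over SNMPv2c."""
    error_indication, error_status, _, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),   # mpModel=1 selects SNMPv2c
            UdpTransportTarget((host, 161), timeout=2, retries=1),
            ContextData(),
            ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0")),
        )
    )
    if error_indication:
        raise RuntimeError(f"SNMP error: {error_indication}")
    if error_status:
        raise RuntimeError(f"SNMP error status: {error_status.prettyPrint()}")
    # Each varBind pretty-prints as "<OID> = <value>".
    return " = ".join(x.prettyPrint() for x in var_binds[0])

if __name__ == "__main__":
    # 192.0.2.10 is a placeholder address for one of the switches.
    print(poll_sysuptime("192.0.2.10"))
```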

It sits on an independent power rail with a custom UPS built from 18650 cells. It also has redundant network links, one wired and one wireless, so if my home network goes down I can still receive notifications through a separate 4G access point, which in turn runs on its own custom 18650-based UPS.
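The failover idea boils down to "probe the primary path, alert over whatever link still works". The sketch below is a hypothetical version of that check, not my actual setup: the gateway address and webhook URL are placeholders, and it assumes a Linux ping binary plus the requests library.

```python
import subprocess

import requests  # third-party HTTP client, used for a hypothetical webhook

PRIMARY_GATEWAY = "192.168.1.1"                       # placeholder: wired home-network gateway
FALLBACK_WEBHOOK = "https://example.invalid/notify"   # hypothetical endpoint reachable via the 4G AP

def link_is_up(host: str) -> bool:
    """Return True if the host answers a single ICMP echo within 2 seconds."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

if __name__ == "__main__":
    if not link_is_up(PRIMARY_GATEWAY):
        # Wired path looks dead; the 4G access point still provides a route out.
        requests.post(FALLBACK_WEBHOOK, json={"alert": "home network unreachable"}, timeout=10)
```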

OPS NEC

My top-of-rack switch is (or will be) a Nexus N9K-C93180LC-EX. I chose this particular one because it has multiple 40 Gb/s ports and even four 100 Gb/s ports.

It is also the route reflector for my MP-BGP setup. It connects to all of my routers via OSPF so that I have full network connectivity.

I’m in the process of testing VTEPs on Proxmox, since by default it doesn’t recognize virtual bridges that have VNI (VXLAN) interfaces attached to them, which means those bridges can’t be assigned to VMs. A sketch of the underlying interface plumbing follows below.
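On a plain Linux host, a VTEP is just a VXLAN interface enslaved to a bridge. The Python sketch below drives iproute2 to build that pairing; the VNI, the VTEP source address and the interface names are placeholders, and on Proxmox this would normally be expressed in /etc/network/interfaces or the SDN layer rather than a script.

```python
import subprocess

# Placeholder values: VNI 100 and this node's VTEP source address.
VNI = 100
LOCAL_VTEP_IP = "10.0.0.1"

def run(cmd: list[str]) -> None:
    """Run an iproute2 command and fail loudly if it errors (requires root)."""
    subprocess.run(cmd, check=True)

def create_vni_bridge(vni: int, local_ip: str) -> None:
    """Create a VXLAN interface for one VNI and enslave it to a Linux bridge.

    The resulting bridge (vmbr100 here) is exactly the kind of VNI-backed
    bridge that Proxmox won't offer to VMs out of the box.
    """
    vxlan, bridge = f"vxlan{vni}", f"vmbr{vni}"
    run(["ip", "link", "add", vxlan, "type", "vxlan", "id", str(vni),
         "local", local_ip, "dstport", "4789", "nolearning"])
    run(["ip", "link", "add", bridge, "type", "bridge"])
    run(["ip", "link", "set", vxlan, "master", bridge])
    run(["ip", "link", "set", vxlan, "up"])
    run(["ip", "link", "set", bridge, "up"])

if __name__ == "__main__":
    create_vni_bridge(VNI, LOCAL_VTEP_IP)
```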

There is also a fairly new addition in the form of a Dell OptiPlex 5050 that has been repurposed as a switch supporting VXLAN with MP-BGP EVPN. I did this because I didn’t have any other hardware supporting this control-plane/data-plane combination, and alternatives like additional Nexus switches are too expensive for a student to buy every couple of months.


The software side of my homelab is, in my opinion, even more interesting.

I have one production Kubernetes cluster consisting of 3 servers and 6 agents for HA, one staging cluster (1 server / 3 agents) for testing new things (the Gateway API at the moment, to be precise), and an offsite production cluster (1 server / 3 agents). The latter serves my other domain.
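As a quick sanity check of that layout, a short script like the following can list which nodes act as servers and which as agents. It is only a sketch: it assumes the official Kubernetes Python client, a local kubeconfig for the cluster, and the standard node-role.kubernetes.io/control-plane label.

```python
from kubernetes import client, config  # third-party: official Kubernetes Python client

def summarize_nodes() -> None:
    """Print each node, its server/agent role, and its Ready condition."""
    config.load_kube_config()  # assumes a kubeconfig pointing at the cluster
    for node in client.CoreV1Api().list_node().items:
        labels = node.metadata.labels or {}
        role = "server" if "node-role.kubernetes.io/control-plane" in labels else "agent"
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"), "Unknown"
        )
        print(f"{node.metadata.name}: role={role}, ready={ready}")

if __name__ == "__main__":
    summarize_nodes()
```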


Another software element worth mentioning is a distributed MariaDB setup using Galera, with HAProxy as the load balancer. The load balancers themselves are highly available: there are three replicas running Keepalived with a script that checks whether HAProxy is healthy (see the sketch below).
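The check script is conceptually tiny: exit 0 while HAProxy responds, non-zero otherwise, and Keepalived's vrrp_script mechanism moves the VIP away from a node whose check keeps failing. Below is a hedged Python version of such a check, not the one actually in use; the admin-socket path is an assumption (a common Debian default).

```python
#!/usr/bin/env python3
"""Exit 0 if HAProxy looks healthy, non-zero otherwise.

Intended to be called from a Keepalived track/vrrp_script.
"""
import socket
import sys

# Assumed path of HAProxy's local stats/admin socket on each LB node.
HAPROXY_SOCKET = "/run/haproxy/admin.sock"

def haproxy_alive(path: str = HAPROXY_SOCKET) -> bool:
    """Ask HAProxy for 'show info' over its UNIX admin socket."""
    try:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.settimeout(2)
            sock.connect(path)
            sock.sendall(b"show info\n")
            # A healthy HAProxy reports its version in the 'show info' output.
            return b"Version" in sock.recv(4096)
    except OSError:
        return False

if __name__ == "__main__":
    sys.exit(0 if haproxy_alive() else 1)
```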

Additionally, I use Patroni for a highly available PostgreSQL database backing Matrix Synapse, Gatus and a few other services.
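Patroni exposes a small REST API (port 8008 by default) that reports each node's role, which is also what load-balancer health checks typically key off. The sketch below, with placeholder node addresses, simply asks each node whether it is currently the primary; newer Patroni releases answer on /primary, older ones on /master.

```python
import requests  # third-party HTTP client

# Placeholder addresses for the Patroni REST API on each database node.
PATRONI_NODES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

def find_primary(nodes: list[str], port: int = 8008) -> str | None:
    """Return the first node whose Patroni REST API reports it as the leader.

    Patroni answers GET /primary with HTTP 200 only on the current leader,
    which is how an HAProxy httpchk can steer writes to the right node.
    """
    for node in nodes:
        try:
            resp = requests.get(f"http://{node}:{port}/primary", timeout=2)
        except requests.RequestException:
            continue
        if resp.status_code == 200:
            return node
    return None

if __name__ == "__main__":
    print("current primary:", find_primary(PATRONI_NODES))
```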


I also have Proxmox Backup Server running, pushing and pulling backups to and from a remote offsite PBS instance.

Other services worth mentioning that I use on a daily basis, and that are hosted on virtual machines instead of containers, are Immich (Google Photos alternative), Mailcow (self-hosted mail server), Active Directory (management of family users and printers), Seafile for me and my family (OneDrive alternative) and Wazuh (Windows vulnerability monitoring).


On my production “MU3S” Kubernetes cluster I run a stateless Unbound deployment as my authoritative DNS server, scaled to at least 5 replicas, Vaultwarden (password manager), BookStack (documentation), Paperless-ngx (document organizer), Navidrome (self-hosted Spotify), and a custom-written API server that handles events from sensors around my house over WiFi, from my servers, and via SNMP (a sketch of such an endpoint follows below). There are more services running, but the ones above are the most important to me.
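To give an idea of what a sensor-event API like that can look like, here is a minimal Flask-style sketch; the route names, payload fields and in-memory storage are illustrative assumptions, not the actual implementation running in the cluster.

```python
from flask import Flask, jsonify, request  # third-party: Flask web framework

app = Flask(__name__)

# Latest reading per sensor, kept in memory for the sketch; a real deployment
# would persist these and expose them to the monitoring SBC.
latest_readings: dict[str, dict] = {}

@app.post("/api/v1/events")
def ingest_event():
    """Accept a JSON event from a WiFi sensor, e.g. {"sensor": "rack-temp", "value": 24.5}."""
    event = request.get_json(silent=True) or {}
    sensor = event.get("sensor")
    if not sensor or "value" not in event:
        return jsonify(error="expected 'sensor' and 'value' fields"), 400
    latest_readings[sensor] = event
    return jsonify(status="accepted"), 202

@app.get("/api/v1/readings/<sensor>")
def get_reading(sensor: str):
    """Expose the latest reading so a monitoring host can poll it."""
    reading = latest_readings.get(sensor)
    return (jsonify(reading), 200) if reading else (jsonify(error="unknown sensor"), 404)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```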

My second production “FUMOS” cluster runs services very similar to those on my first production cluster. The biggest difference is that I often use it as a semi-testbed once my workloads work on the staging cluster.

Lastly, there is my staging “T300” cluster. I use it to test potentially service-breaking changes as well as new services. Right now I am experimenting with the Gateway API as a replacement for the Ingress controller.