After years of self-hosting on a VPS in a datacenter, I’ve decided to move my services back home. But instead of just porting services over, I’m using this as an opportunity to migrate to a more flexible and robust setup.
I will deploy services on a single mini PC. Since I need to be able to experiment and learn without disrupting my services, I must be able to spin up Virtual Machines (VMs). Let’s explore how I deployed Proxmox Virtual Environment on a safe host for my specific needs as a homelabber, and how I automated as much of it as possible. In a follow-up post we will explore how to spin up and configure VMs in a reproducible way on that setup.
Objectives
After realizing that my good old Raspberry Pi 4 was too slow to let me back up or restore on an encrypted disk, I bought a Minisforum UM880 Plus. At €600 it was not extremely expensive, but I don’t intend to spend more on hardware in the foreseeable future and I want to make the most of what I have right now.
I love to experiment and would like to do it safely without putting my production set-up at risk. Those are self-hosted services mostly for my personal usage, so I can afford occasional downtime, but I don’t want to have to rebuild everything if my experiments go wrong. I also don’t want to experiment by spinning up VMs at a cloud provider, because I will not know what I’m doing while learning, and cloud providers can get expensive very quickly.
One of my main objectives as I write these lines is to get up to speed with Kubernetes. I want to stay on a single-node k3s deployment while I get comfortable with operating services on a Kubernetes cluster, but I know I will want to explore deployments with several nodes, and eventually create a full blown k8s cluster based on Talos Linux.
Threat model
My server is in my living room. The most prominent threat in my model is a burglary. If my server gets stolen I will lose access to my infrastructure and my data. I also don’t want my data to leak in the wild if the burglars put their hands on the disk in my server.
I need to have disk encryption and solid backups to keep my data safe
The second biggest threat is hardware failure. All devices can fail, but I’m fairly certain this is particularly true of a €600 mini PC that was not necessarily designed to serve as a home server.
I need to have a setup that can be automatically installed and configured
I am also a team of only one, I am fallible, and I don’t have peers to review my exact set-up. To mitigate this risk I have a group of friends called the Infra Nerds Club, whom I regularly ask for advice.
I need to have a versioned set-up that can easily be rolled back
My ISP-provided router supports WireGuard, so even when I’m out, I can join the local network of my server. But my server could be shut down because of a power outage or another reason. I might be at work or even on holidays when it happens, and even WireGuard can’t solve that.
I need a KVM on my local network so I can send Wake-on-LAN packets to my server
Unsurprisingly with my objectives and hardware constraints, I need to be able to spin up VMs to play with. The only realistic option on the table for a hobby homelabber is Proxmox Virtual Environment.
It is also important to highlight that if my server gets stolen or fails, I will not be able to spin up a hypervisor on a baremetal server right away, and VPS providers would likely not let me configure a bridged network like I will do below.
The hypervisor and virtual machines I will deploy are just meant to give me flexibility. I consider the hypervisor and Virtual Machines as disposable, so I will not perform backups of the VMs themselves. I will however perform backups of the data and configuration of the services running on it.
One of my goals is to be able to quickly move my infrastructure to a cloud provider if something happened to my baremetal server, and back to a new baremetal server after it’s been delivered.
Implementing it
I will deploy a Proxmox hypervisor on the physical server in my living room. On that hypervisor, I want to be able to statically declare what VMs must be spun up, how they should be configured, how the services (e.g. k3s) are deployed on those VMs, and what DNS records must be set to reach those services. There isn’t a single unified tool to do all of this, so I will have to rely on opentofu, cloud-init and ansible.
In this post I will only focus on deploying a rock stable Proxmox hypervisor on my server, but it’s worth having a glimpse at how I will manage it.
Opentofu, and Terraform, the mother project it originated from, are often described as Infrastructure as Code (IaC) tools. In other words, they let you describe in a text file what VMs you want to create on your infrastructure. With cloud-init, you can add a basic configuration for your VMs, such as user credentials, ssh keys to trust, and network configuration.
A typical opentofu snippet to spin up a VM with a Debian OS pre-configured with cloud-init looks like this. We will explain how to actually use opentofu and cloud-init to spin up VMs in a further blog post.
```hcl
resource "proxmox_virtual_environment_vm" "k3s-main" {
  description = "Production k3s' main VM"
  tags        = ["production", "k3s", "debian"]
  node_name   = "proximighty"

  # … (CPU, memory, network and cloud-init blocks elided)
  disk {
    file_id = proxmox_virtual_environment_download_file.debian_cloud_image.id
    # …
  }
}
```
Opentofu runs from my laptop. It reads the `.tf` files and will talk to the Proxmox host to spin up VMs and their basic configuration. It can also talk to my registrar (Cloudflare for now) to add new DNS records if I ask it to.
Having a VM pre-configured with network, users and trusted ssh keys is very useful to hook in the second configuration tool: ansible.
An ansible playbook is a text file describing the desired state of a server, often without describing how it must be achieved. For example, instead of describing “Open the file `/etc/hosts` and add the line `192.168.1.200 myhost.example.com`”, you describe “the line `192.168.1.200 myhost.example.com` must be present in the file `/etc/hosts`”.

It can sound the same, but it’s not: running the first description twice would result in the same line being added twice to the `/etc/hosts` file. Running the second description twice would result in having the desired line only once. A typical playbook will look like this.
```yaml
- name: Set the timezone to UTC
  community.general.timezone:
    name: Etc/UTC

- name: Install bridge utils
  ansible.builtin.apt:
    name: bridge-utils

- name: Override Debian's default network configuration
  ansible.builtin.copy:
    src: interfaces
    dest: /etc/network/interfaces

- name: Create a bridge interface vmbr0 and give it a static IP
  ansible.builtin.copy:
    src: vmbr0
    dest: /etc/network/interfaces.d/vmbr0

- name: Ensure enp2s0 doesn't have an IP
  ansible.builtin.copy:
    src: enp2s0
    dest: /etc/network/interfaces.d/enp2s0

- name: Add local IP to the hosts file
  ansible.builtin.lineinfile:
    path: /etc/hosts
    line: 192.168.1.200 proximighty.ergaster.org proximighty
```
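The idempotency point above is easy to demonstrate with plain shell. This is a hypothetical stand-in for `/etc/hosts` using temporary files, where `ensure_line` plays the role of ansible’s `lineinfile`:

```shell
A=$(mktemp); B=$(mktemp)
LINE="192.168.1.200 myhost.example.com"

# Imperative: "add the line" — running it twice duplicates the line
echo "$LINE" >> "$A"
echo "$LINE" >> "$A"
grep -c myhost "$A"    # prints 2

# Declarative: "the line must be present" — running it twice changes nothing
ensure_line() { grep -qxF "$1" "$2" || echo "$1" >> "$2"; }
ensure_line "$LINE" "$B"
ensure_line "$LINE" "$B"
grep -c myhost "$B"    # prints 1
```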
Ansible also runs from my laptop. It reads the playbook’s `.yaml` files, and uses ssh to log into the target machine and apply the playbook’s configuration.
Installing an encrypted Debian
Proxmox is based on Debian and can be installed in three different ways:
- Via the official installer
- By creating an automated installer
- By installing it on top of an existing Debian install
The second option sounds very appealing, but there is a major issue: the Proxmox (automated) installer doesn’t support setting up disk encryption. The simplest way to have disk encryption on the host is to install Debian first, and to install Proxmox on top.
I could automate the Debian install using preseeding. Preseed files contain the answers to the questions asked by the Debian installer. A colleague who wrote a preseed file for Debian 8 told me he hasn’t had to update it since. After writing my own preseed file, I could add it to the Debian netinst USB disk and Debian would be installed automatically without human intervention.
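For illustration, a preseed file is just a list of answers keyed by installer question. A few hedged lines (these are real debian-installer keys, but the values are examples, not my actual file):

```
d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/xkb-keymap select us
# "crypto" answers the partitioning question with encrypted LVM
d-i partman-auto/method string crypto
```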
But preseed files can only customize the basic install of Debian. To install additional packages (like Proxmox) and configure my machine I need to rely on an ansible playbook.
It is also worth noting that if my server got stolen or if its hardware failed, I wouldn’t be able to replace it with a baremetal server right away. I would have to choose a cloud provider and spin up VMs that roughly correspond to the ones I had running on my Proxmox.
I shouldn’t have to perform regular reinstalls of Debian for the Proxmox host, and I need to write an ansible playbook to configure it properly anyway. I would spend a lot of time automating the Debian install, but I wouldn’t save a lot of time in doing so.
I grabbed a Debian netinstall and performed a regular install with disk encryption, with the following specificities:
- I used the full disk with LVM, and set up disk encryption.
- I didn’t let the installer fill my disk with random data because it takes a lot of time and doesn’t match my threat model.
- I did set a root password. Proxmox is very root centric, and while there are workarounds to use a non-root user, it gets tedious very fast for little extra security.
- At the package selection step, I disabled everything but `SSH Server` and `standard system utilities`. I need both to be able to ssh into my server and let ansible control my Proxmox host.
The disk is encrypted with a password. The server will prompt me for the password when it (re)starts and will not be able to boot if I don’t type it.
My server is connected to a KVM, so I can enter the disk encryption password when the server reboots. If you don’t have one, you can install and configure Dropbear to unlock the disk over ssh, or create a magic usb stick that LUKS will read to decrypt the disk.
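If you go the Dropbear route, the gist of it looks like this. This is a sketch, assuming Debian 12’s `dropbear-initramfs` package and my server’s IP; adapt the key path to your own setup:

```shell
# Install Dropbear support in the initramfs
apt install dropbear-initramfs
# Authorize your laptop's public key to log into the initramfs
cat your-laptop-key.pub >> /etc/dropbear/initramfs/authorized_keys
update-initramfs -u
# At the next reboot, unlock the disk remotely with:
#   ssh root@192.168.1.200 cryptroot-unlock
```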
Now, I need to interact with my server over ssh. After installing Debian and unlocking the disk, I need to configure the ssh server to let me temporarily log in as root with a password, so I can copy my public key. Via my KVM, I update the `/etc/ssh/sshd_config` as follows

```
PermitRootLogin yes
```

And I restart the sshd so I can log in as root

```shell
# systemctl restart ssh
```

On my laptop, I copy my public key to the server with

```shell
$ ssh-copy-id root@192.168.1.200
```

And finally I reverse the change on my server by editing `/etc/ssh/sshd_config` again so I can only log in by ssh key

```
PermitRootLogin prohibit-password
```

One restart of the sshd later, my server is safe again.
Installing and configuring Proxmox
Installing Proxmox
Installing Proxmox on top of an existing Debian is a well supported and documented process. I followed these steps on my encrypted Debian until the Proxmox VE package install and… my machine didn’t boot anymore. It was very confusing at first, because there was no error. I was prompted for my disk encryption password, the disk was successfully unlocked, and then nothing. The system just didn’t boot, was unreachable via SSH, and didn’t display anything via the KVM.
Figuring out why installing Proxmox bricks my Debian
I was extremely surprised that installing a vanilla Proxmox on a freshly installed, pristine Debian would completely brick the system!
I initially thought that the issue was the Proxmox kernel that didn’t support disk encryption, or that didn’t support my hardware well. After a few reinstalls and rebooting between the install steps, I figured out that booting on the Proxmox kernel without Proxmox VE installed worked perfectly fine. Even from an encrypted disk.
So I installed Proxmox, and asked the computer to tell me what it does when it boots. To do so, I waited for the GRUB screen to appear, and pressed e to get access to the boot command editor.
I replaced the `quiet` boot parameter by `noquiet`, and pressed Ctrl + x to save my changes and boot with this altered command. I could see that the machine was stuck on `Job networking.service/start running`.
Looking up `Proxmox job networking start running` yielded good results on the Proxmox forums. In this thread and that one, users say ntp is causing issues. But I didn’t have ntp, ntpsec-ntpdate or any related package installed!
After a few reinstalls and a bit of trial and error, I figured out that my machine wouldn’t boot after installing Proxmox VE unless I set up a static IP configuration for it. Configuring a static IP for the machine after a fresh reinstall fixed the issue.
Setting up a bridge network
I only have a single physical network interface card, `enp2s0`, on my host, but I will have several guest VMs. Each VM needs to be able to use my host’s network card as if it were its own. I’m writing a more detailed post about how this works, but the gist of it is that you need to create a virtual network interface called a bridge, `vmbr0`. The bridge will be connected both to the host’s physical network card and to the VMs’ network interfaces.

The physical network card no longer operates at the IP level: it merely serves as a packet sender and receiver. So I need to remove the default IP configuration on `enp2s0`, and configure `vmbr0` to have an IP the host will be able to use instead.
Since I installed a minimal Debian, I need to install the required tools to create bridged networks

```shell
# apt install bridge-utils
```
Then, let’s clean up the default network configuration in `/etc/network/interfaces` to only keep the loopback interface

```
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback
```
Let’s add a file in `/etc/network/interfaces.d/` for `enp2s0` to be brought up but not try to get an IP.
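The file itself is not shown here, so here is a minimal sketch of what `/etc/network/interfaces.d/enp2s0` can contain (the `manual` method brings the interface up without assigning it an IP):

```
auto enp2s0
iface enp2s0 inet manual
```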
And now let’s create and configure `vmbr0` to have a static IP.
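A sketch of what `/etc/network/interfaces.d/vmbr0` can look like, assuming my usual static IP and a router at 192.168.1.1 (adapt the address and gateway to your network):

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.200/24
    gateway 192.168.1.1
    # enslave the physical NIC to the bridge
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0
```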
I can finally restart the network to ensure everything is configured properly

```shell
# systemctl restart networking
```
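After the restart, a couple of read-only iproute2 commands make for a quick sanity check (the interface names are mine):

```shell
# vmbr0 should hold the static IP, enp2s0 should have none
ip -br addr show
# enp2s0 should be listed as a port of vmbr0
bridge link show
```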
That was a lot of work, and I’m not sure I will remember how to perform all of these steps if I need to rebuild a Proxmox host. Let’s use ansible to automate everything I did after installing a clean Debian!
Installing ansible
The ansible documentation lists several ways to install ansible. I didn’t find anything related to homebrew in it, but the package is still present and seems up to date. Since I regularly upgrade the packages installed with homebrew, I decided it was the simplest way to keep an up to date ansible on my laptop, and installed it with

```shell
$ brew install ansible
```
Writing the playbook
I created a `~/Projects/infra` folder that will contain everything related to my homelab. In this directory, I created two subdirectories: one called `opentofu` that we will use later to spin up VMs, and one called `ansible` that will contain my playbooks.
I want to keep the ansible playbooks for my infrastructure in a single place. At the root of my ansible repository, I have created two folders: `inventory` and `proximighty` (the name of the Proxmox host).
In the inventory folder I can list all my hosts and organize them how I want. I created a `production` file that contains the following

```
[proximighty]
192.168.1.200 ansible_ssh_user=root
```

Since I don’t have a local DNS set-up and I’m not too keen on using my public domain name for my internal network, I’ll stick to the host IP. I’ve put it under the `[proximighty]` group so I can easily refer to it later in ansible, and specified that ansible must ssh as root into the machine to perform operations.
I then create a `proximighty` folder under `ansible` where I will describe everything that must be done on a fresh Debian to get it to the desired state. I create a `configure.yaml` file that will be the root of my playbook.
```shell
$ cd ~/Projects/infra/ansible
$ tree
.
├── inventory
│   └── production
└── proximighty
    └── configure.yaml
```
In the `configure.yaml` file I describe the rough steps. In my case, I want to do two things:

- Configure the host. That means setting the timezone to UTC, and installing the `kitty-terminfo` package so I can use kitty with my server.
- Install Proxmox.
The basic structure looks like this

```yaml
- name: Configure the host
  hosts: proximighty
  tasks:
    - name: Set timezone to UTC
      community.general.timezone:
        name: Etc/UTC

    - name: Install kitty files
      ansible.builtin.apt:
        name: kitty-terminfo

    # ???
```
I left question marks at the end of the file, because there are quite a few steps to install Proxmox, including reboots. To keep the playbook readable, I will isolate these steps into their own module. Ansible calls this kind of module a role. Let’s go to the `proximighty` folder and create a `roles` folder in it. Inside it we can create a `proxmox` folder that will contain all the instructions to install Proxmox.
```shell
$ cd ~/Projects/infra/ansible
$ tree
.
├── inventory
│   └── production
└── proximighty
    ├── configure.yaml
    └── roles
        └── proxmox
```
The entry point of a role is a `main.yaml` file nested inside a `tasks` folder, so let’s create the relevant file structure
```shell
$ cd ~/Projects/infra/ansible/proximighty
$ tree
.
├── configure.yaml
└── roles
    └── proxmox
        └── tasks
            └── main.yaml
```
Finally we can open the `main.yaml` file and start describing the steps necessary to install Proxmox! The file starts with `---` and will then contain the various steps. Let’s start by ensuring that the `bridge-utils` package is present, so we can set up a bridged network
```yaml
---
- name: Install bridge utils
  ansible.builtin.apt:
    name: bridge-utils
```
Then we will fiddle with the network files. We want to:

- Override the default configuration in `/etc/network/interfaces`
- Create a bridge interface `vmbr0` described by a file in `/etc/network/interfaces.d/vmbr0`
- Configure `enp2s0` with a file in `/etc/network/interfaces.d/enp2s0` so it doesn’t try to get its own IP
- Add a local IP into `/etc/hosts`
- Restart the network
Let’s describe that in ansible terms

```yaml
- name: Remove Debian's default network configuration
  ansible.builtin.copy:
    src: interfaces
    dest: /etc/network/interfaces

- name: Create a bridge interface vmbr0 and give it a static IP
  ansible.builtin.copy:
    src: vmbr0
    dest: /etc/network/interfaces.d/vmbr0

- name: Ensure enp2s0 doesn't have an IP
  ansible.builtin.copy:
    src: enp2s0
    dest: /etc/network/interfaces.d/enp2s0

- name: Add local IP to the hosts file
  ansible.builtin.lineinfile:
    path: /etc/hosts
    line: 192.168.1.200 proximighty.ergaster.org proximighty

- name: Restart the networking service
  ansible.builtin.systemd_service:
    name: networking
    state: restarted
```
We’re asking ansible to copy files over to the server, but we didn’t tell it where to take the source files from. By default, ansible looks up files in a `files` folder at the root of the role. Let’s create the relevant files then:
```shell
$ cd ~/Projects/infra/ansible/proximighty
$ tree
.
├── configure.yaml
└── roles
    └── proxmox
        ├── files
        │   ├── interfaces
        │   ├── enp2s0
        │   └── vmbr0
        └── tasks
            └── main.yaml
```
The content of the files is the same as in the previous section. Now, to install Proxmox we need to add the Proxmox apt repositories to our apt list. For apt to trust them, we need to add the Proxmox signing key, and we need the `gpg` package to be able to manipulate it. So let’s add those steps in our `proxmox/tasks/main.yaml` file
```yaml
- name: Ensure gpg is installed
  ansible.builtin.apt:
    name: gpg

- name: Add the Proxmox key
  ansible.builtin.apt_key:
    url: https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg
    keyring: /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

- name: Add pve-no-subscription repository
  ansible.builtin.apt_repository:
    repo: "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription"
    filename: pve-no-subscription
```
Finally we can update all the packages, ensure no package related to ntp is present, and reboot. Let’s add those instructions to `proxmox/tasks/main.yaml`

```yaml
- name: Update all packages
  ansible.builtin.apt:
    update_cache: true
    upgrade: dist
  notify: Reboot

- name: Ensure ntp and related packages are absent
  ansible.builtin.apt:
    name:
      - ntp
      - ntpsec-ntpdate
    state: absent
  notify: Reboot

- name: Reboot after upgrading packages
  ansible.builtin.meta: flush_handlers
```
You probably noticed the `notify: Reboot` that appears twice. It could look like the machine is going to reboot twice, but this is not the case. It means each step will notify the `Reboot` handler, but handlers are only called at the end of the play… unless they are flushed before that. We explicitly flush the handlers with `ansible.builtin.meta: flush_handlers`, so the reboot will only happen here.
We called a handler, but we didn’t define it anywhere. Like for files, ansible has a default place to look for handlers: the `handlers` directory at the root of the role. Let’s create the relevant files.
```shell
$ cd ~/Projects/infra/ansible/proximighty
$ tree
.
├── configure.yaml
└── roles
    └── proxmox
        ├── files
        │   ├── interfaces
        │   ├── enp2s0
        │   ├── vmbr0
        │   └── storage.cfg
        ├── handlers
        │   └── main.yaml
        └── tasks
            └── main.yaml
```
And let’s add the `Reboot` handler in there

```yaml
---
- name: Reboot
  ansible.builtin.reboot:
```
We can now finalize the install by:

- Installing the Proxmox kernel and rebooting
- Installing Proxmox VE and dependencies
- Removing the Debian kernel and os-prober
- Removing the pve-enterprise repository that Proxmox automatically installed
- Rebooting one last time
Let’s append those steps to the `proxmox/tasks/main.yaml` file

```yaml
- name: Install Proxmox VE Kernel
  ansible.builtin.apt:
    name: proxmox-default-kernel
  notify: Reboot

- name: Reboot after installing Proxmox VE Kernel
  ansible.builtin.meta: flush_handlers

- name: Install Proxmox VE and dependencies
  ansible.builtin.apt:
    name:
      - proxmox-ve
      - postfix
      - open-iscsi
      - chrony

- name: Remove the Debian kernel and os-prober
  ansible.builtin.apt:
    name:
      - linux-image-amd64
      - os-prober
    state: absent
  notify: Update GRUB

- name: Remove pve-enterprise repository
  ansible.builtin.apt_repository:
    repo: deb https://enterprise.proxmox.com/debian/pve {{ debian_version }} pve-enterprise
    state: absent
  notify: Reboot

- name: Reboot after installing Proxmox VE and removing old kernels
  ansible.builtin.meta: flush_handlers
```
You might notice the extra `Update GRUB` handler, which we also need to add to our handlers

```yaml
- name: Update GRUB
  ansible.builtin.command: update-grub
```
Finally, we can wrap it all together by calling this proxmox role from our main `configure.yaml` file

```yaml
- name: Configure the host
  hosts: proximighty
  tasks:
    - name: Set timezone to UTC
      community.general.timezone:
        name: Etc/UTC

    - name: Install kitty files
      ansible.builtin.apt:
        name: kitty-terminfo

    - name: Install Proxmox
      ansible.builtin.import_role:
        name: proxmox
```
It’s now time to execute that playbook!
Executing the playbook
After writing this playbook, it’s time to execute it! To do so, we need to be able to ssh as root on the Debian host that will get Proxmox installed, with a ssh key and not a password.
As a quick test, running `ssh root@192.168.1.200` should log me in without prompting me for a password or a fingerprint verification.
From my laptop, I go to the `ansible` directory, from which I can run a command to invoke the `configure.yaml` playbook with the `production` inventory like so
```shell
$ cd ~/Projects/infra/ansible
$ ansible-playbook -i inventory/production proximighty/configure.yaml
```
Ansible will install everything and occasionally reboot the server when needed. Since my server has an encrypted disk, I need to monitor what’s happening via my KVM and unlock the disk with my encryption passphrase when prompted.
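As a side note, when iterating on a playbook like this one, ansible can report what it would change without applying anything. Not every module fully supports check mode, but it makes a useful first pass:

```shell
# Dry run: report the changes the playbook would make, with diffs
$ ansible-playbook -i inventory/production proximighty/configure.yaml --check --diff
```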
I now have an ansible playbook I can use to quickly spin up a new Proxmox host on a fresh Debian with an encrypted disk! This is a solid foundation for a flexible homelab. I will be able to spin up a long-lived VM for my main k3s node. I will be able to spin up additional k3s workers if need be, or an entirely different cluster to play with, all while keeping my production reasonably isolated and stable.
We’ll see in another blog post how to use opentofu, cloud-init and ansible to spin up new VMs on that Proxmox host!
Massive thanks to my colleagues and friends Half-Shot, Davide, and Ark for their insights!