This post describes how to set up a fully encrypted Proxmox VE 6 host with a ZFS root and how to unlock it remotely using the dropbear SSH server, while keeping systemd-boot and thus also the pve tooling intact. (I’m not sure whether the pve tooling still works if you replace systemd-boot with grub, which seems to be the common solution for this kind of setup; maybe it does.)
Update: This post has been translated into Czech and published on abclinuxu.cz.
Overview
We are going to do the following:
Install Proxmox VE 6 on our machine
Minimally configure the installation
Encrypt the installation:
- Remove a disk from the ZFS pool
- Encrypt the disk with LUKS
- Add it back to the ZFS pool
- Repeat until all disks are encrypted
Set up Dropbear and systemd-boot to enable remote unlocking
Prerequisites
There really is only one prerequisite apart from having a machine you want to install Proxmox onto: you need a second hard drive, which we will set up in a ZFS RAID1 configuration. Even if you don’t want your root devices mirrored in the end, you will still need a second drive to use as a temporary mirror for the root device; otherwise you’d have to install and set up an encrypted Debian first and then install Proxmox on top of that.
Apart from that, I’ll assume that you are fairly familiar with how full disk encryption works on Linux systems; if not, you might want to read up on that before you start messing around with any hardware. Please don’t try this on a production system if you don’t know exactly what you’re doing.
Installing Proxmox VE 6
The only thing you have to make sure of is to set up the ZFS RAID1 during the installation. The rest should be pretty much straightforward.
Minimal post-installation
For some odd reason, PATH in a regular shell differs from PATH in the JavaScript terminal of the web interface. You might want to take care of that:
echo "export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" >> ~/.bashrc
Remove the subscription popup notice (source):
sed -i.bak "s/data.status !== 'Active'/false/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js && systemctl restart pveproxy.service
Set up the community repositories:
rm /etc/apt/sources.list.d/pve-enterprise.list
echo 'deb http://download.proxmox.com/debian/pve buster pve-no-subscription' > /etc/apt/sources.list.d/pve-community.list
Update the host:
apt update
apt upgrade
Encrypt your installation
This is partly taken over from this wonderful post. (The GRUB_ENABLE_CRYPTODISK option mentioned in the forum post does not apply here, since the boot partition is not encrypted. If you want that level of security, this is probably not the right guide for you. Also, from my understanding, encrypting the boot partition means you can’t use dropbear to unlock the system remotely, since nothing has booted at that point. It is a pretty nice way to set up fully encrypted laptops though, so you should definitely look into it if you haven’t already!)
Right after the installation the host should look similar to this (lsblk):
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 465.3G 0 part
sdb 8:16 0 931.5G 0 disk
sdc 8:32 0 931.5G 0 disk
sdd 8:48 0 465.8G 0 disk
├─sdd1 8:49 0 1007K 0 part
├─sdd2 8:50 0 512M 0 part
└─sdd3 8:51 0 465.3G 0 part
The third partition of both hard drives contains our installation; the first and second are the boot and EFI partitions.
zpool status should return something like this:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ata-Samsung_SSD_850_EVO_500GB_XXXXXXXXXXXXXXX-part3 ONLINE 0 0 0
ata-WDC_WDS500G2B0A-XXXXXX_XXXXXXXXXXXX-part3 ONLINE 0 0 0
You might want to install cryptsetup at this point:
apt install cryptsetup
Detach the first drive’s partition from rpool, encrypt it with LUKS, open it as /dev/mapper/cryptrpool1 and attach it back to rpool:
zpool detach rpool ata-Samsung_SSD_850_EVO_500GB_XXXXXXXXXXXXXXX-part3
cryptsetup luksFormat /dev/disk/by-id/ata-Samsung_SSD_850_EVO_500GB_XXXXXXXXXXXXXXX-part3
cryptsetup luksOpen /dev/disk/by-id/ata-Samsung_SSD_850_EVO_500GB_XXXXXXXXXXXXXXX-part3 cryptrpool1
zpool attach rpool ata-WDC_WDS500G2B0A-XXXXXX_XXXXXXXXXXXX-part3 cryptrpool1
Wait until the scan line of zpool status shows that the drive has been resilvered successfully. You should see something similar to this:
scan: resilvered 1022M in 0 days 00:00:04 with 0 errors on Wed Aug 21 17:27:55 2019
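If you don’t feel like re-running zpool status by hand, a small polling loop does the job as well (just a sketch; the exact wording of the status line may differ between ZFS versions):
# poll until the resilver is finished; the scan line contains
# "resilver in progress" while ZFS is still copying data
while zpool status rpool | grep -q "resilver in progress"; do
    sleep 10
done
zpool status rpool | grep "scan:"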
Now repeat this step with the other drive:
zpool detach rpool ata-WDC_WDS500G2B0A-XXXXXX_XXXXXXXXXXXX-part3
cryptsetup luksFormat /dev/disk/by-id/ata-WDC_WDS500G2B0A-XXXXXX_XXXXXXXXXXXX-part3
cryptsetup luksOpen /dev/disk/by-id/ata-WDC_WDS500G2B0A-XXXXXX_XXXXXXXXXXXX-part3 cryptrpool2
zpool attach rpool cryptrpool1 cryptrpool2
At this point lsblk should output something like this:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 465.3G 0 part
└─cryptrpool1 253:0 0 465.3G 0 crypt
sdb 8:16 0 931.5G 0 disk
sdc 8:32 0 931.5G 0 disk
sdd 8:48 0 465.8G 0 disk
├─sdd1 8:49 0 1007K 0 part
├─sdd2 8:50 0 512M 0 part
└─sdd3 8:51 0 465.3G 0 part
└─cryptrpool2 253:1 0 465.3G 0 crypt
And zpool status should return something like this:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
cryptrpool1 ONLINE 0 0 0
cryptrpool2 ONLINE 0 0 0
Next we want to set up /etc/crypttab. Use blkid to get the PARTUUIDs of the encrypted partitions on both hard drives:
blkid -s PARTUUID -o value /dev/disk/by-id/ata-Samsung_SSD_850_EVO_500GB_XXXXXXXXXXXXXXX-part3
blkid -s PARTUUID -o value /dev/disk/by-id/ata-WDC_WDS500G2B0A-XXXXXX_XXXXXXXXXXXX-part3
Then add them to /etc/crypttab:
root@caliban:~# cat /etc/crypttab
# <target name> <source device> <key file> <options>
cryptrpool1 PARTUUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX none luks,discard,initramfs
cryptrpool2 PARTUUID=YYYYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY none luks,discard,initramfs
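As a quick sanity check before rebuilding the initramfs, the target names in the first column have to match the mapping names we used with luksOpen:
ls /dev/mapper/                 # should list cryptrpool1 and cryptrpool2
cryptsetup status cryptrpool1   # shows the underlying device of the mapping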
Then update the initramfs and make sure it is put on the boot partition (this is where we deviate from the forum post I’ve linked above):
update-initramfs -u -k all
pve-efiboot-tool refresh
In case you’re wondering at this point: yes, I’m also getting the cryptsetup error messages when running update-initramfs; it still works, though:
cryptsetup: ERROR: Couldn't resolve device rpool/ROOT/pve-1
cryptsetup: WARNING: Couldn't determine root device
Now you should be able to reboot and unlock the ZFS partitions by entering the passphrase.
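After the reboot you can double-check that the pool really sits on top of the LUKS mappings now, for example:
zpool status rpool    # the mirror members should be cryptrpool1 and cryptrpool2
lsblk                 # the -part3 partitions should each have a child of type "crypt"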
Setting up Dropbear to remotely unlock the partition
Now to the fun part! Since we aren’t using grub here, we have to take a few steps that differ from what we usually do in this kind of setup.
Here are a few interesting links you might want to look into as well:
- This nicely explains how to use the keys Dropbear already generates on install instead of recreating them.
- The freedesktop page on systemd-boot
- This little article on setting up archlinux with dropbear does not fully apply to our Proxmox case, but it gives enough information on how to tell systemd-boot to tell the kernel to start with the options we want (unlike the article states, we need to use the udev name when assigning the IP, and I was getting error messages when supplying nameserver IPs).
First install dropbear and busybox:
apt install dropbear busybox
Enable busybox in /etc/initramfs-tools/initramfs.conf:
root@caliban:~# cat /etc/initramfs-tools/initramfs.conf | grep ^BUSYBOX
BUSYBOX=y
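If you’d rather not edit the file by hand, something like this should do it (a sketch, assuming the BUSYBOX line already exists, possibly commented out or set to another value):
# set BUSYBOX=y, keeping a backup of the original file
sed -i.bak 's/^#\?BUSYBOX=.*/BUSYBOX=y/' /etc/initramfs-tools/initramfs.conf
grep ^BUSYBOX /etc/initramfs-tools/initramfs.conf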
Then convert the dropbear keys:
cd /etc/dropbear-initramfs/
/usr/lib/dropbear/dropbearconvert dropbear openssh dropbear_rsa_host_key id_rsa
dropbearkey -y -f dropbear_rsa_host_key | grep "^ssh-rsa " > id_rsa.pub
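You can also note the fingerprint of the converted key now, so you can recognize the dropbear host key on the first connect (id_rsa.pub is a regular OpenSSH public key):
ssh-keygen -lf /etc/dropbear-initramfs/id_rsa.pub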
And add your public key to the authorized keys:
vi /etc/dropbear-initramfs/authorized_keys
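Or non-interactively, something like this works as well (assuming your public key lives at ~/.ssh/id_rsa.pub on your workstation; adjust the path and the host to your setup):
# run this on your workstation, not on the Proxmox host
ssh root@1.2.3.4 'cat >> /etc/dropbear-initramfs/authorized_keys' < ~/.ssh/id_rsa.pub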
Make sure dropbear starts by toggling the NO_START value in /etc/default/dropbear:
root@caliban:~# cat /etc/default/dropbear | grep ^NO_START
NO_START=0
Finally, configure dropbear to use a port other than 22 in order to avoid the MITM warning (dropbear and the regular sshd present different host keys on the same address), by changing the DROPBEAR_OPTIONS value in /etc/dropbear-initramfs/config:
root@caliban:~# cat /etc/dropbear-initramfs/config | grep ^DROPBEAR_OPTIONS
DROPBEAR_OPTIONS="-p 12345"
You can then set up two entries in your ~/.ssh/config:
$ cat ~/.ssh/config
Host *
    ServerAliveInterval 120

Host unlock_caliban
    Hostname 1.2.3.4
    User root
    Port 12345

Host caliban
    Hostname 1.2.3.4
    Port 22
At this point I noticed that only the third partitions of the two hard drives, the ones holding the rpool, were mounted. When I mounted one of the boot partitions, I found systemd-boot configuration files on it, but they seemed to be autogenerated by Proxmox whenever pve-efiboot-tool refresh was run. So I looked into /usr/sbin/pve-efiboot-tool and followed the code until I ended up in /etc/kernel/postinst.d/zz-pve-efiboot, which contains the code that generates the systemd-boot configuration files:
# [...]
for kver in ${BOOT_KVERS}; do
    linux_image="/boot/vmlinuz-${kver}"
    initrd="/boot/initrd.img-${kver}"
    if [ ! -f "${linux_image}" ]; then
        warn "No linux-image ${linux_image} found - skipping"
        continue
    fi
    if [ ! -f "${initrd}" ]; then
        warn "No initrd-image ${initrd} found - skipping"
        continue
    fi
    warn " Copying kernel and creating boot-entry for ${kver}"
    KERNEL_ESP_DIR="${PMX_ESP_DIR}/${kver}"
    KERNEL_LIVE_DIR="${esp}/${KERNEL_ESP_DIR}"
    mkdir -p "${KERNEL_LIVE_DIR}"
    cp -u --preserve=timestamps "${linux_image}" "${KERNEL_LIVE_DIR}/"
    cp -u --preserve=timestamps "${initrd}" "${KERNEL_LIVE_DIR}/"
    # create loader entry
    cat > "${esp}/loader/entries/proxmox-${kver}.conf" <<- EOF
title ${LOADER_TITLE}
version ${kver}
options ${CMDLINE}
linux /${KERNEL_ESP_DIR}/vmlinuz-${kver}
initrd /${KERNEL_ESP_DIR}/initrd.img-${kver}
EOF
done
# [...]
For us, the cat part is especially interesting: the CMDLINE variable in the line beginning with “options” contains the boot options for the Linux kernel. This variable is assigned in the same file:
# [...]
if [ -f /etc/kernel/cmdline ]; then
    CMDLINE="$(cat /etc/kernel/cmdline)"
else
    warn "No /etc/kernel/cmdline found - falling back to /proc/cmdline"
    CMDLINE="$(cat /proc/cmdline)"
fi
# [...]
Apparently /etc/kernel/cmdline is the place where Proxmox stores its boot options. The file contains one single line:
root=ZFS=rpool/ROOT/pve-1 boot=zfs
After finding the /etc/kernel/cmdline file, I did a bit of searching, and according to the Proxmox documentation it is indeed the appropriate file to change in this case.
Now that we have identified the file we can use to configure our kernel options, there are two things we want to add:
First, we want to make sure the network interface comes up so that we can ssh into the initramfs; we will use the ip option for that. It uses the following format (look here for further reading):
ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>:<dns0-ip>:<dns1-ip>:<ntp0-ip>:
I omitted everything after autoconf; something like this works for me (see the note below the full cmdline for looking up the udev device name):
ip=1.2.3.4::1.2.3.1:255.255.255.0:caliban:enpXsY:none:
Second, we have to tell the kernel which devices are the crypto devices we want to unlock, which is done using the cryptdevice option (here we have to supply the PARTUUIDs of both of our hard drives):
cryptdevice=UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX cryptdevice=UUID=YYYYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY
The whole content of /etc/kernel/cmdline then looks like this:
ip=1.2.3.4::1.2.3.1:255.255.255.0:caliban:enpXsY:none: cryptdevice=UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX cryptdevice=UUID=YYYYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY root=ZFS=rpool/ROOT/pve-1 boot=zfs
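If you are unsure which udev name your network interface will get (the enpXsY placeholder above), you can look it up on the running system:
ip -o link show        # lists the interfaces with their udev names, e.g. enp3s0
ls /sys/class/net/     # same names, without the details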
The last thing to do is to update the initramfs again and refresh the boot partitions:
update-initramfs -u -k all
pve-efiboot-tool refresh
Now you should be able to reboot your machine and ssh into the busybox on the port you just configured for dropbear. From there you can unlock the drives by running something like this (you’ll have to enter it twice since there are two encrypted drives):
echo -n "password" > /lib/cryptsetup/passfifo
Or:
/lib/cryptsetup/askpass "password: " > /lib/cryptsetup/passfifo
Or you can use the cryptroot-unlock script that is already preinstalled, which will also prompt you for the password twice, once per encrypted drive.
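Put together, a remote unlock then looks roughly like this (assuming the ~/.ssh/config entries from above):
# from your workstation
ssh unlock_caliban
# inside the busybox shell on the host
cryptroot-unlock
# enter the passphrase for each of the two drives, then the boot continues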
If you’re lazy, you can also put the following script into /etc/initramfs-tools/hooks and make it executable. I basically merged the above example of using /lib/cryptsetup/askpass with a version of an unlock script I had lying around; it looks like it might have been from this gist. It asks you for a passphrase and then uses echo to write it into /lib/cryptsetup/passfifo twice (since I use two hard drives) with a one-second delay in between, then kills the session so the system can come up. (I noticed that /etc/motd, which contains instructions on how to unlock your drive, is not displayed in the busybox session. You probably shouldn’t use it, but it seems to work for me.)
#!/bin/sh

PREREQ="dropbear"

prereqs() {
    echo "$PREREQ"
}

case "$1" in
prereqs)
    prereqs
    exit 0
    ;;
esac

. "${CONFDIR}/initramfs.conf"
. /usr/share/initramfs-tools/hook-functions

if [ "${DROPBEAR}" != "n" ] && [ -r "/etc/crypttab" ] ; then
    # drop a small unlock helper into the initramfs
    cat > "${DESTDIR}/bin/unlock" << EOF
#!/bin/sh

unlock_devices() {
    pw="\$(/lib/cryptsetup/askpass "password: ")"
    echo -n \$pw > /lib/cryptsetup/passfifo
    sleep 1
    echo -n \$pw > /lib/cryptsetup/passfifo
}

if unlock_devices; then
    # kill \`ps | grep cryptroot | grep -v "grep" | awk '{print \$1}'\`
    # the following line kills the remote shell right after the passphrase has
    # been entered.
    kill -9 \`ps | grep "\-sh" | grep -v "grep" | awk '{print \$1}'\`
    exit 0
fi

exit 1
EOF
    chmod 755 "${DESTDIR}/bin/unlock"

    mkdir -p "${DESTDIR}/lib/unlock"
    cat > "${DESTDIR}/lib/unlock/plymouth" << EOF
#!/bin/sh
[ "\$1" == "--ping" ] && exit 1
/bin/plymouth "\$@"
EOF
    chmod 755 "${DESTDIR}/lib/unlock/plymouth"

    echo To unlock root-partition run "unlock" >> ${DESTDIR}/etc/motd
fi
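Keep in mind that the hook only ends up in the initramfs after another rebuild, so assuming you saved it as, say, /etc/initramfs-tools/hooks/unlock:
chmod +x /etc/initramfs-tools/hooks/unlock   # the file name is just an example
update-initramfs -u -k all
pve-efiboot-tool refresh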
That’s pretty much all of it, you can now start enjoying remote reboots on your freshly encrypted Proxmox host.