In the last post, I wrote about how to convert a ’normal’ NixOS system to one that is managed by a flake. This one builds on top of that, or rather scales out of managing single hosts, and dives into how to do remote management and deployment for multiple systems.
Introduction
Before taking a peek into deployments, let’s set a bit of a frame of reference first:
When talking about deployments, a lot of people think of quite complex setups and the use of services such as autoscaling, instance creation, etc. The setup I’m going to describe here is fairly simple in comparison.
My starting point was just a bunch of hosts, some of them physical, some virtual (as in VM, not as in container) and all of them - at least so far - manually managed and upgraded. Boiling it down, I had access to all of the hosts using ssh.
This is where I started to look into deployment tools for NixOS, and since NixOS is built on top of an amazing package manager, there are quite a few of them out there: here is a list of tools on the awesome-nix page on github (lollipop seems to be missing at the moment).
Under the hood, most of these tools seem to first build the system configuration somewhere, then copy the closure of the configuration over to the specific host it is intended for and finally switch over to it. Before continuing here, you might want to take a peek into how deployment on NixOS works under the hood; here are a few very interesting pointers:
There is a nice talk (+transcript) by Vaibhav Sagar from 2018’s linux.conf.au about how to do deployment with NixOS, as well as another nice blog post on how to deploy NixOS using basically just bash. Another very nice post by Chris Martin goes into detail on how this can be done on AWS using bash and Haskell. Last but definitely not least, there is a very nice video as well as a couple of accompanying blog articles by Solène Rapenne on how she implemented deployment on NixOS using a pull instead of a push semantic.
Deploying NixOS using Colmena
When I got started with deploying on NixOS, I initially looked at some of the deployment tools available, and after reading through various bits and pieces of documentation and blog posts I decided to give colmena a try.
Adding all Hosts to the flake
To get started, we first need all of the hosts we want to deploy to be managed inside a single flake. This is fairly straightforward and was already described in the last post. After adding all of your hosts to the flake, it should look somewhat like this:
{
  description = "Oblivious Infrastructure";
  inputs = {
    flake-utils.url = "github:numtide/flake-utils";
    nix.url = "github:NixOS/nix/2.5.1";
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.05";
    nixos-hardware.url = "github:NixOS/nixos-hardware";
  };
  outputs = { self, nixpkgs, nix, ... }: {
    nixosConfigurations = {
      host0 = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [ ./hosts/host0/configuration.nix ];
      };
      host1 = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [ ./hosts/host1/configuration.nix ];
      };
      ...
    };
  };
}
Setting up Host Access
In order to access and manage all systems, you need to create a keypair that has access to the root user on the remote machines. Since colmena does not seem to support the option of using a specific ssh key for a host, I opted for simply defining everything host-specific outside of colmena, in the ssh config instead.
At this point I’d like to mention that apparently colmena (the same applies to deploy-rs) does not seem to support password-protected ssh keys; if you try using one, you’ll see output like this during the deployment step:
[ERROR] Failed to complete requested operation - Last 1 lines of logs:
[ERROR] failure) Child process exited with error code: 1
[ERROR] Failed to push system closure to azwraith - Last 5 lines of logs:
[ERROR] created)
[ERROR] state) Running
[ERROR] stderr) root@1.2.3.4: Permission denied (publickey,password,keyboard-interactive).
[ERROR] stderr) error: cannot connect to 'root@colmena.host'
[ERROR] failure) Child process exited with error code: 1
[ERROR] -----
[ERROR] Operation failed with error: Child process exited with error code: 1
Also, colmena does not seem to come with a way to define which key is supposed to be used when accessing a node: while there are the options deployment.targetHost, deployment.targetPort and deployment.targetUser, there is no option such as deployment.sshKey¹.
So basically you’re supposed to just have a key lying around on your system, which gives you passwordless root-level access to all of your infrastructure and is at the same time the default method of accessing all of your nodes via ssh from a terminal. Sounds like a great idea; after all, what could possibly go wrong?
Luckily, there is actually a fairly easy workaround which solves all of the little problems I just mentioned. I’m not sure how many people are using it at the moment - I haven’t really read about it anywhere else - but then again, I might just not have stumbled over it so far:
First (on the host you want to use to manage your infrastructure) we need to create an ssh key with a password, e.g. ~/.ssh/colmena.key, using ssh-keygen.
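That can look something like this (ed25519 is simply my key type of choice here; make sure to actually set a passphrase when prompted, since that is the whole point of this workaround):
$ ssh-keygen -t ed25519 -f ~/.ssh/colmena.key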
Next we add the password-protected ssh key to the ssh-agent. This way, unlocking the key will be managed by the ssh-agent instead of colmena, so whenever we need to unlock the key we get a GUI or CLI popup and can simply enter the password. We even get the nice little timeframe wherein the key stays unlocked, so the password does not have to be entered every single time:
$ ssh-add ~/.ssh/colmena.key
$ ssh-add -L
ssh-ed25519 ... user@host
Next, create an additional colmena entry for the hosts you want to manage in ~/.ssh/config. In my case I now have 3 entries per host (for the unlock.host entry check out my earlier post, which explains remote decryption):
Host host
    Hostname 1.2.3.4
    User user

Host unlock.host
    Hostname 1.2.3.4
    User root
    Port 2222

Host colmena.host
    Hostname 1.2.3.4
    User root
    IdentityFile ~/.ssh/colmena.key
You can put the ssh config in a file in your flake repo, which you then include by adding ‘Include /path/to/file’ at the top of your .ssh/config file. This only works at the top of the config, not in between:
# oblivious hosts:
Include /home/user/git/infrastructure/.ssh/config
Then create a small module on the managed hosts that enables everything colmena needs to work, e.g. modules/colmena/default.nix:
{
  # https://github.com/NixOS/nix/issues/2330#issuecomment-451650296
  nix.settings.trusted-users = [ "root" "@wheel" ];
  users.users.root.openssh.authorizedKeys.keys = [
    "ssh-ed25519 ..."
    "ssh-rsa ..."
  ];
}
Finally, import the module into all of your hosts’ configurations and update them all manually one last time.
Getting started with Colmena
In order to use colmena, the flake.nix has to be slightly modified: under outputs, add a colmena section parallel to nixosConfigurations containing the hosts’ information, like this:
...
outputs = { self, nixpkgs, nix, ... }: {
  colmena = {
    meta = {
      nixpkgs = import nixpkgs {
        system = "x86_64-linux";
      };
    };
    host0 = {
      deployment = {
        targetHost = "colmena.host0"; # <- defined in ~/.ssh/config
      };
      imports = [ ./hosts/host0/configuration.nix ];
    };
    ...
  };
  nixosConfigurations = {
    ...
  };
};
Note the use of the colmena.host0 entry we defined earlier, so we’ll automagically use the correct user and ssh key when connecting. colmena comes with a targetPort option, but since we are basically defining everything specific to the host access inside of the ssh config, I’d advise against using it here: targetPort is preferred over the ssh config if the option is set. (Using the option directly after adding a new host, combined with the copy-and-paste mistake of not adjusting the targetPort option, can lead to very annoying node-breaking deployments when you’re accessing hosts behind a NAT.)
Update the flake git repos on all the hosts using nixos-rebuild --flake in order to enable the root ssh access.
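Assuming the repo lives in /etc/nixos on the hosts (as it does later in this post), that boils down to something like:
$ cd /etc/nixos && sudo git pull
$ sudo nixos-rebuild switch --flake /etc/nixos/#host0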
Next, check out the repository containing the managed host(s) and grab a version of colmena, e.g. using nix shell github:zhaofengli/colmena.
Then on the management host grab and enter a copy of the flake repo and check if everything works by using colmena exec² to execute some commands on the remote hosts.
Depending on whether or not the ssh key is still unlocked you should either get
a pinentry popup, or everything should just work:
[user@host:~/git/infrastructure]$ colmena exec -v -- 'hostname'
[INFO ] Using flake: git+file:///home/user/git/infrastructure
[INFO ] Enumerating nodes...
copying path '/nix/store/39i5vz1nf5l6q3mxj61yv2ymfms3xhid-source' from 'https://cache.nixos.org'...
copying path '/nix/store/f1zgyzaq53q962w78gv54rxmmisfqbrk-source' from 'https://cache.nixos.org'...
[INFO ] Selected all 2 nodes.
host0 |
host1 |
host0 | host0
host0 | Succeeded
host1 | host1
host1 | Succeeded
| All done!
Voila, we just executed some commands on a couple of remote hosts in parallel!
From here on, managing the hosts is fairly straightforward. To build the system configurations you can use the build command:
[user@host:~/git/infrastructure]$ colmena build
[INFO ] Using flake: git+file:///home/user/git/infrastructure
[INFO ] Enumerating nodes...
[INFO ] Selected all 2 nodes.
✅ 39s All done!
(...) ✅ 15s Evaluated host0 and host1
host0 ✅ 22s Built "/nix/store/y2w31fwaazjxxq94rcc28qi10xmc0rgz-nixos-system-host0-21.11pre-git
host1 ✅ 24s Built "/nix/store/v3ba77v92hbf1grr3kz649ykrlfrrglm-nixos-system-host1-21.11pre-git
Deploying to the hosts is done with the apply command:
[user@host:~/git/infrastructure]$ colmena apply
[INFO ] Using flake: git+file:///home/user/git/infrastructure
[INFO ] Enumerating nodes...
[INFO ] Selected all 2 nodes.
✅ 28s All done!
(...) ✅ 14s Evaluated host0 and host1
host0 ✅ 0s Built "/nix/store/y2w31fwaazjxxq94rcc28qi10xmc0rgz-nixos-system-host0-21.11pre-git"
host1 ✅ 0s Built "/nix/store/v3ba77v92hbf1grr3kz649ykrlfrrglm-nixos-system-host1-21.11pre-git"
host0 ✅ 3s Pushed system closure
host1 ✅ 9s Pushed system closure
host0 ✅ 5s Activation successful
host1 ✅ 5s Activation successful
After deploying, we now have new system state on the managed hosts. However, their respective /etc/nixos directories are still on some random version of the repo. We can easily solve this using the exec command: on the management host we push the repo and then use colmena to make sure the revision on all of the hosts is always the same³:
[user@host:~/git/infrastructure]$ colmena exec -- 'cd /etc/nixos && git pull'
[INFO ] Using flake: git+file:///home/user/git/infrastructure
[INFO ] Enumerating nodes...
[INFO ] Selected all 2 nodes.
✅ 1s All done!
host0 ✅ 1s Succeeded
host1 ✅ 1s Succeeded
One noteworthy point is that when using colmena, we seem to lose the ability to update a system using conventional methods:
[user@host:~]$ sudo nixos-rebuild switch --flake /etc/nixos/#host
error: infinite recursion encountered
at /nix/store/n9dk853bl3ny2mqv4pjh7440jglm8wz8-source/lib/modules.nix:512:28:
511| builtins.addErrorContext (context name)
512| (args.${name} or config._module.args.${name})
| ^
513| ) (lib.functionArgs f);
(use '--show-trace' to show detailed location information)
In order to update a host locally, we need to set the allowLocalDeployment option in flake.nix (optionally it might make sense to set targetHost to null):
...
outputs = { self, nixpkgs, nix, ... }: {
  colmena = {
    meta = {
      nixpkgs = import nixpkgs {
        system = "x86_64-linux";
      };
    };
    host0 = {
      deployment = {
        targetHost = "colmena.host0"; # <- optionally set this to 'null'
        allowLocalDeployment = true;
      };
      imports = [ ./hosts/host0/configuration.nix ];
    };
    ...
  };
  ...
};
...
Then on the host, we can simply grab a copy of the flake repository, build a system configuration using something like colmena build --on host and finally update the system by using the apply-local command:
[user@host]$ colmena apply-local --sudo
[INFO ] Using flake: git+file:///home/user/git/infrastructure
host | Evaluating host
host |
host | Evaluated host
host | Building host
host | /nix/store/i8vasqriqb184gdc0hnyy38yr9ii2y9w-nixos-system-host-23.05pre-git
host | Built "/nix/store/i8vasqriqb184gdc0hnyy38yr9ii2y9w-nixos-system-host-23.05pre-git"
host | Pushing system closure
host | Pushed system closure
host | No pre-activation keys to upload
host | Activating system profile
[sudo] password for user:
host | updating GRUB 2 menu...
host | activating the configuration...
host | setting up /etc...
host | reloading user units for user...
host | setting up tmpfiles
host | Activation successful
host | No post-activation keys to upload
| All done!
At this point we can basically manage, update and deploy hosts as we want.
Cleaning up the flake
Now that everything works, we can start cleaning up the flake a bit. Ever since adding the colmena section, we have had to reference the hosts’ configurations in two places, so let’s include a little let statement and store the information we need in hostConfigs:
{
  description = "Oblivious Infrastructure";
  inputs = {
    flake-utils.url = "github:numtide/flake-utils";
    nix.url = "github:NixOS/nix/2.5.1";
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.05";
    nixos-hardware.url = "github:NixOS/nixos-hardware";
  };
  outputs = { self, nixpkgs, nix, ... }:
    let
      hostConfigs = {
        host0 = [ ./hosts/host0/configuration.nix ];
        host1 = [ ./hosts/host1/configuration.nix ];
      };
    in
    {
      colmena = {
        meta = {
          nixpkgs = import nixpkgs {
            system = "x86_64-linux";
          };
        };
        host0 = {
          deployment = {
            targetHost = "colmena.host0";
          };
          imports = [] ++ hostConfigs.host0;
        };
        host1 = {
          deployment = {
            targetHost = "colmena.host1";
          };
          imports = [] ++ hostConfigs.host1;
        };
      };
      nixosConfigurations = {
        host0 = nixpkgs.lib.nixosSystem {
          system = "x86_64-linux";
          modules = [] ++ hostConfigs.host0;
        };
        host1 = nixpkgs.lib.nixosSystem {
          system = "x86_64-linux";
          modules = [] ++ hostConfigs.host1;
        };
      };
    };
}
Adding a Development Shell into the Mix
A very nice feature when it comes to flakes is the development shell: simply define a shell, then type nix develop (or even let direnv do all the work) and you end up with a very nice working environment.
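As a sidenote, if you want to go the direnv route: assuming the nix-direnv integration is installed, an .envrc in the repo root containing a single line is all it takes, and after a one-time direnv allow the shell is entered automatically whenever you cd into the repo:
use flake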
Let’s add a development shell to our flake. In order to do that, we can add a devShell section to our outputs. There is a nice article about how to do this here. I decided to keep the shell definition in a separate shell.nix file. Add the following to flake.nix:
{
  description = "Oblivious Infrastructure";
  inputs = { ... };
  outputs = { self, nixpkgs, nix, ... }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
      hostConfigs = { ... };
    in
    {
      devShell.x86_64-linux = import ./shell.nix { inherit pkgs; };
      colmena = { ... };
      nixosConfigurations = { ... };
    };
}
Next create a shell.nix file inside the flake repo:
{ pkgs ? import <nixpkgs> {} }:
with pkgs;
mkShell {
  buildInputs = [ colmena ];
  shellHook = ''
    export PS1='\n\[\033[1;34m\][oblivious]\$\[\033[0m\] '
    echo "<some ASCII art>" | base64 -d
    sync () {
      # local state
      # update all the inputs:
      for input in $(cat flake.nix | awk '/inputs = {/,/};/' | grep '.url' | cut -d '.' -f 1 | tr -d ' ')
      do
        nix flake lock --update-input $input
      done
      # commit and push any local changes:
      if [[ `git status --porcelain` ]]
      then
        cm="."
        read -p "Commit message: " commit_message
        if ! [[ "$commit_message" == "" ]]
        then
          cm="$commit_message"
        else
          cm="."
        fi
        git add .
        git commit -m "$cm"
        git push
      fi
      # get the channel we are following in our flake
      nixpkgsVersion=$(cat flake.nix | grep nixpkgs.url | cut -d '"' -f 2 | cut -d '/' -f 3)
      # remote state
      echo '[INFO ] pulling repo on remote hosts'
      colmena exec -v -- 'cd /etc/nixos && old_rev=$(git rev-parse --short HEAD) && git pull | grep "Already up to date" || echo "$old_rev -> $(git rev-parse --short HEAD)"'
      echo '[INFO ] updating nix channel on remote hosts'
      colmena exec -v -- "nix-channel --add https://nixos.org/channels/$nixpkgsVersion nixos && nix-channel --update"
      # check that the channel revision on the nodes matches the flake.lock revision
      channelSyncSuccessful=$(echo $(
        colmena exec -v -- 'echo nixos channel revision:$(cat /nix/var/nix/profiles/per-user/root/channels/nixos/.git-revision)' 2>&1 | grep 'nixos channel revision' | cut -d ':' -f 2
        cat flake.lock | jq ".nodes.$(cat flake.lock | jq '.nodes.root.inputs.nixpkgs').locked.rev" | tr -d '"'
      ) | tr ' ' '\n' | uniq | wc -l
      )
      if ! [[ "$channelSyncSuccessful" == "1" ]]
      then
        echo "[WARNING!] - Channels on local repo and remote nodes not synced!"
        echo "[WARNING!] - If you're trying to e.g. manually start nixos-containers YMMV"
        # exit 1
      fi
    }
    alias build="sync && colmena build"
    alias deploy="sync && colmena apply"
  '';
}
As you can see, I added a sync function, which makes sure the /etc/nixos directory always reflects the most recent git commit on all nodes and also updates the nixos channels on the remote nodes⁴.
I also added two little aliases, build and deploy, to make life a bit easier.
As a sidenote, colmena doesn’t seem to like it when nodes with a set targetHost are not reachable. A viable solution to this would be to extract the machine addresses from e.g. the hostConfigs section of the flake, check for their availability and, if not all hosts are reachable, use colmena’s --on option to make sure at least the available nodes can be updated without too much of a fuss.
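A minimal sketch of that idea, reusing the colmena.<host> ssh aliases from above (the host list is hardcoded here just for illustration):
reachable=""
for h in host0 host1; do
  # cheap connectivity test via the colmena.<host> ssh alias
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "colmena.$h" true 2>/dev/null; then
    reachable="$reachable,$h"
  fi
done
# only deploy to the nodes that answered
colmena apply --on "${reachable#,}"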
At this point we have a wonderful little devshell we can jump into from basically anywhere in order to manage our systems:
[user@host:~/git/infrastructure]$ nix develop
_ _ _ _
| | | (_) (_)
___ | |__ | |___ ___ ___ _ _ ___
/ _ \| '_ \| | \ \ / / |/ _ \| | | / __|
| (_) | |_) | | |\ V /| | (_) | |_| \__ \
\___/|_.__/|_|_| \_/ |_|\___/ \__,_|___/
_ __ _ _
(_) / _| | | | |
_ _ __ | |_ _ __ __ _ ___| |_ _ __ _ _ ___| |_ _ _ _ __ ___
| | '_ \| _| '__/ _` / __| __| '__| | | |/ __| __| | | | '__/ _ \
| | | | | | | | | (_| \__ \ |_| | | |_| | (__| |_| |_| | | | __/
|_|_| |_|_| |_| \__,_|___/\__|_| \__,_|\___|\__|\__,_|_| \___|
[oblivious]$ sync
warning: updating lock file '/home/user/git/infrastructure/flake.lock':
• Updated input 'nixpkgs':
'github:NixOS/nixpkgs/fcc147b1e9358a8386b2c4368bd928e1f63a7df2' (2023-07-13)
→ 'github:NixOS/nixpkgs/fa793b06f56896b7d1909e4b69977c7bf842b2f0' (2023-07-20)
warning: Git tree '/home/user/git/infrastructure' is dirty
warning: Git tree '/home/user/git/infrastructure' is dirty
warning: Git tree '/home/user/git/infrastructure' is dirty
warning: Git tree '/home/user/git/infrastructure' is dirty
Commit message:
[master 204c4ba] .
1 file changed, 3 insertions(+), 3 deletions(-)
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 12 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 368 bytes | 368.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (2/2), completed with 2 local objects.
To github...
4733c2c..204c4ba master -> master
[INFO ] pulling repo on remote hosts
[INFO ] Using flake: git+file:///home/user/git/infrastructure
[INFO ] Enumerating nodes...
[INFO ] Selected all 2 nodes.
host0 |
host1 |
host1 | From github...
host1 | 4733c2c..204c4ba master -> origin/master
host0 | From github...
host0 | 4733c2c..204c4ba master -> origin/master
host1 | 4733c2c -> 204c4ba
host1 | Succeeded
host0 | 4733c2c -> 204c4ba
host0 | Succeeded
| All done!
[INFO ] updating nix channel on remote hosts
[INFO ] Using flake: git+file:///home/user/git/infrastructure
[INFO ] Enumerating nodes...
[INFO ] Selected all 2 nodes.
host1 |
host0 |
host0 | unpacking channels...
host1 | unpacking channels...
host0 | Succeeded
host1 | Succeeded
| All done!
DIY Secret Management
Next, let’s take a quick look into how to manage secrets. With colmena, a keyCommand can be defined, so we can basically build our secret management with whatever tooling we want. For this little example, let’s just use pass (I won’t go into detail on how to set it up here; there are enough posts on that on the internet).
As a quick recap, this is what our file structure currently looks like:
[user@host:~/git/infrastructure]$ tree -CL 2
.
├── flake.nix # our flake file containing information about all hosts and their NixOS configs
├── flake.lock # the flake lock file (autogenerated)
├── shell.nix # our development shell
├── hosts # contains hosts configuration.nix files
│ ├── host0
│ ├── host1
...
│ └── hostN
├── modules # modular configuration bits and pieces
│ ├── baseUtils
...
│ └── zfs
├── pkgs # packages that aren't part of nixpkgs
├── services # services that are hosted on the hosts
│ ├── host2
│ │ └── nextcloud
│ └── host3
...
│ └── navidrome
├── users # user configurations
└── vpn # vpn configurations
First we’ll use pass to create a secret; in this case we’ll use the following, incredibly mysterious string:
CuKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKgguKjgOKjiOKjkuKjpOKjpOKgpOKgpOKipOKjgOKjgOKggOKggOKggOKggOKggOKggOKggOKhgOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggArioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioITioKDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioqDioaDioIDioIDioZTio7/io7/io7/io7/io7/io7/io7/io6bio4TioIjioJHioJLioIDioILioIDioIDioIDioIDioIjioJLioJLioIrioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIAK4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qGA4qCA4qGF4qC54qGE4qKA4qKk4qG84qO54qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qGm4qC04qO24qCy4qCA4qCA4qCA4qKA4qGk4qGE4qCA4qCS4qCA4qCC4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCACuKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKggOKgsOKgguKgkeKgkuKgkuKgk+Kgg+KgmOKivuKjv+KjvuKhh+KggOKggOKgieKggeKgiOKgieKiu+Kjv+Khh+KgoOKghOKgpOKjgOKggOKggOKggOKgu+KiheKjgOKjiOKjkuKggOKggOKggOKggOKggOKggOKggOKgsuKhhOKggOKggOKggArioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDiobDioIPioqDioLTioKbioKbioqbio5jiorvio7/io7fio6bio6TioKDio4Tio4DioYDiorjio7/io6fioYDioIDioLjio7vioYfioIDioKDio6bioYDioInioJnioqTio4Dio4DioIDioIDioIDioIDioIDioIDioIPioIDioIDioIAK4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qGA4qKA4qCA4qCA4qCA4qCA4qCA4qGg4qCS4qGH4qCC4qKy4qCS4qCS4qO84qOu4qO/4qCJ4qCJ4qCZ4qCA4qC74qCb4qCb4qK64qGP4qCs4qGt4qO14qOA4qCI4qCB4qCA4qCA4qG/4qOf4qOG4qCA4qCA4qCA4qCA4qCj4qOE4qGA4qCA4qCi4qGA4qCA4qCA4qCA4qCACuKggOKggOKggOKggOKhgOKggOKggOKggOKggOKggOKggOKgnuKjgOKgouKggOKggOKggOKggOKikeKhpOKghOKgoOKgrOKij+KjieKjueKjv+Khl+KggOKggOKhv+KhkuKggOKggOKggOKiv+Khl+KgkuKgkuKiuuKgiuKhteKggOKggOKggOKhv+KjreKgj+KhhOKggOKggeKigOKggOKggOKgk+KipOKhgOKgiOKhhuKggOKhgOKggArioIDioIDioITioIDioIDiornioIDioIDioIDioIDiooDiobDioJvioLfioYDioIDioIDioIDiopfioJLioJLioJLioJrioILioKTioLzioqTio6fioYjioJnioK/ioL3ioLLioIDioZbioLvioIDiooDio4nio7nio4nio6fioIDioIDioIDioInio7XioobioJHioobioIDiooDioajioIbio4Dio4DioIDioInioIDioIDioIDioIAK4qCA4qCA4qCA4qCA4qCA4qCJ4qCI4qCB4qCi4qKE4qCP4qCA4qCS4qCS4qGH4qCA4qCA4qC04qGI4qCR4qOO4qOA4qOB4qOA4qG04qCK4qKB4qGf4qC34qKE4qGA4qCA4qCk4qKj4qOk4qGQ4qCS4qCS4qCm4qK84qCW4qCL4qGA4qCA4qCA4qO24qOf4qOK4qOi4qGI4qCS4qOB4qCA4qK44qO/4qG/4qCB4qCA4qCA4qCA4qCA4qCACuKggOKggOKggOKggOKggOKggOKggOKggOKjgOKjuOKggOKggOKggOKggOKgh+KggOKggOKigOKhqOKghuKgiOKgieKgkuKgmuKioOKjpOKiv+Kjh+KggOKggOKggOKggOKigOKgnuKjv+Kjv+Kjn+KjpOKjluKjiuKgoOKgnuKgg+KggOKggOKiu+KgpOKhp+KgpOKio+KjiuKggeKggeKhgOKgieKjgOKghOKggOKggOKggOKggOKggArioIDioIDioIDioIDioaDioIDioIjio4/io7nioKTio4bioaTioIDioIDioIDioIDioIDioIjiooDioZTio6vioK3iornio7/ioq/iob/io77io7/io6TioYTio4DioLTioIvioqDio7/io7/io7/io7fio6bio63io63io5bio7rio7bio6TioYTioJLioJPiorrioIjioKDio4DioqDioJ/ioIDioIvioIHioIDioJLioILioIDioIAK4qCA4qCA4qCA4qKw4qCA4qCA4qCg4qGI4qOs4qK14qCf4qOY4qCy4qC24qCA4qCA4qCA4qO04qCf4qOJ4qO04qO+4qO/4qO/4qO44qK34qK44qCB4qC54qGf4qCD4qCA4qOg4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO34qGA4qKA4qG/4qOI4qCy4qOk4qCP4qG04qCm4qGA4qGW4qCS4qCy4qCE4qCA4qCACuKggOKggOKggOKgoOKgpOKgguKgiuKggeKgiOKgieKjjuKjiOKgteKihOKggOKggOKggOKjuOKjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+KjgOKjh+Kjn+Kjk+KjtuKjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjh+KggOKioOKgkeKgh+Kgj+KitOKhseKgkuKjh+KgmOKihOKggOKggOKggOKggArioIDioILioIDio6DioILioIDioIDioIDioIDioLjioYDiorDioYDiorjioIDioIDioIDio7/io7/io7/io7/io7/io7/io7/io7/io7/io7/ioJ/io7/io5Liobrior/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/ioIbioIDioKfioIDioIDioIDioInioJHioJvioIDioqfioIDioIDioIDioIAK4qKA4qCk4qC+4qC34qOA4qOA4qCH4qG04qOG4qCA4qCz4qGA4qCA4qK54qCA4qC
A4qCA4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qOb4qOA4qG/4qC24qCt4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qOm4qCA4qCA4qCA4qKA4qCA4qOf4qCv4qKm4qGA4qCI4qCT4qKk4qGA4qCACuKhjuKgk+KgkuKgguKgpOKhnuKioOKgk+KguuKhgOKggOKip+KggOKhmOKggOKggOKgsOKjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+KhvuKjjeKhr+KgreKjveKjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+KjpuKjhOKggOKgiOKjhuKgs+KirOKgkuKgmuKgkuKhhOKggOKggOKggArioJjioKLioYDioJLiopLioIHioY7iooDiobDioIPioKTioYjioaTior/ioIDioIDioIDio7/io7/io7/io7/io7/io7/io7/io7/io7/ioq3io7/io5/io5Pio7/io7/io7/io7/io7/io4/ioKnioInioInioLvio7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7fio6bio5jioIjioIDioofio6nio4nio7nioILioIDioIAK4qKA4qCQ4qKN4qCB4qCY4qKw4qKJ4qG94qKA4qGA4qKN4qKz4qC44qG34qCA4qCA4qCA4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qOb4qO/4qOS4qO+4qO/4qO/4qO/4qO/4qO/4qC34qGA4qCC4qCA4qCA4qC54qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qOm4qCI4qKJ4qKA4qOW4qCB4qCA4qGG4qCACuKgiOKgseKggOKggOKggOKgm+KgieKhtOKgm+KgmOKgouKhieKggOKggOKggOKggOKggOKjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Khv+Kjv+KgreKiveKjv+Kjv+Kjv+Kjv+Kjv+Kjv+KjreKjpOKhhOKggOKggOKiu+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+KggOKjgOKglOKii+KhpOKhhOKgkOKgggrioIDioIDioIDioIDioIDioaDioJrioIDioIDioKnioY3ioJPio4TioIDioIDioIDiorjio7/io7/io7/io7/ioL/ioL/ioJvio5vio7viob/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7bio7bio77io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/ioIDioIDiooDioafioIDioIjioqLioIAK4qCA4qCA4qKw4qCA4qCA4qKP4qCJ4qCJ4qCJ4qCA4qCJ4qCx4qG84qCA4qCA4qCA4qCw4qO/4qO/4qOv4qOk4qGO4qCt4qCk4qCc4qO/4qO/4qO/4qOT4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qG/4qC/4qKl4qCA4qCw4qO/4qOE4qOA4qCA4qCA4qCACuKggOKggOKguOKggOKggOKhgOKgieKhgOKggOKggOKhsOKgieKggOKggOKggOKggOKggOKguOKjv+Kjv+Kjv+Khg+KgkuKjkuKjveKjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Kju+Kjv+Kjv+Kjv+Kjv+Kjv+Kjv+Khh+KggOKgieKgieKgoeKghOKggOKggOKgoOKgnOKggOKggOKgh+KggOKggOKggOKggOKggArioIDioIDioIDioIDiorBORVZFUiBHT05OQSBHSVZFIFlPVSBVUOKjv+Kjv+Kjv+Kjv+Kjv05FVkVSIEdPTk5BIExFVCBZT1UgRE9XTuKggOKggOKggOKggOKggArioIDioIDioIDioIDioLjioIDioIjiorPioIDioJTioInioIDio7jioIDioIDioIDioIDioIDioJnioqLioIDioIzio73io7/io7/iob/ioL/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/io7/ioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIDioIAK4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCK4qCA4qCA4qCE4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qOA4qCM4qCA4qKg4qK/4qO/4qO/4qGH4qCA4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qO/4qGG4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCA4qCACgo=
Add it to your password store using something along the lines of:
$ pass edit colmena-test-secret
Next create a module which deploys this secret, e.g. ./modules/colmena-test-secret/default.nix:
{
  deployment.keys."colmena-test-secret" = {
    keyCommand = [ "pass" "colmena-test-secret" ];
  };
}
Then add it to the configuration.nix of one of our hosts:
{ config, pkgs, ... }:
{
  imports =
    [
      ./base.nix
      ../../modules/colmena-test-secret
    ];
}
And finally run deploy from the development shell⁵. After it finishes, check if the secret has been properly put on the host, or if our little deployment tool has let us down:
$ sudo cat /run/keys/colmena-test-secret | base64 -d
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠂⣀⣈⣒⣤⣤⠤⠤⢤⣀⣀⠀⠀⠀⠀⠀⠀⠀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠄⠠⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢠⡠⠀⠀⡔⣿⣿⣿⣿⣿⣿⣿⣦⣄⠈⠑⠒⠀⠂⠀⠀⠀⠀⠈⠒⠒⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡀⠀⡅⠹⡄⢀⢤⡼⣹⣿⣿⣿⣿⣿⣿⣿⣿⣿⡦⠴⣶⠲⠀⠀⠀⢀⡤⡄⠀⠒⠀⠂⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠰⠂⠑⠒⠒⠓⠃⠘⢾⣿⣾⡇⠀⠀⠉⠁⠈⠉⢻⣿⡇⠠⠄⠤⣀⠀⠀⠀⠻⢅⣀⣈⣒⠀⠀⠀⠀⠀⠀⠀⠲⡄⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡰⠃⢠⠴⠦⠦⢦⣘⢻⣿⣷⣦⣤⠠⣄⣀⡀⢸⣿⣧⡀⠀⠸⣻⡇⠀⠠⣦⡀⠉⠙⢤⣀⣀⠀⠀⠀⠀⠀⠀⠃⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡀⢀⠀⠀⠀⠀⠀⡠⠒⡇⠂⢲⠒⠒⣼⣮⣿⠉⠉⠙⠀⠻⠛⠛⢺⡏⠬⡭⣵⣀⠈⠁⠀⠀⡿⣟⣆⠀⠀⠀⠀⠣⣄⡀⠀⠢⡀⠀⠀⠀⠀
⠀⠀⠀⠀⡀⠀⠀⠀⠀⠀⠀⠞⣀⠢⠀⠀⠀⠀⢑⡤⠄⠠⠬⢏⣉⣹⣿⡗⠀⠀⡿⡒⠀⠀⠀⢿⡗⠒⠒⢺⠊⡵⠀⠀⠀⡿⣭⠏⡄⠀⠁⢀⠀⠀⠓⢤⡀⠈⡆⠀⡀⠀
⠀⠀⠄⠀⠀⢹⠀⠀⠀⠀⢀⡰⠛⠷⡀⠀⠀⠀⢗⠒⠒⠒⠚⠂⠤⠼⢤⣧⡈⠙⠯⠽⠲⠀⡖⠻⠀⢀⣉⣹⣉⣧⠀⠀⠀⠉⣵⢆⠑⢆⠀⢀⡨⠆⣀⣀⠀⠉⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠉⠈⠁⠢⢄⠏⠀⠒⠒⡇⠀⠀⠴⡈⠑⣎⣀⣁⣀⡴⠊⢁⡟⠷⢄⡀⠀⠤⢣⣤⡐⠒⠒⠦⢼⠖⠋⡀⠀⠀⣶⣟⣊⣢⡈⠒⣁⠀⢸⣿⡿⠁⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⣀⣸⠀⠀⠀⠀⠇⠀⠀⢀⡨⠆⠈⠉⠒⠚⢠⣤⢿⣇⠀⠀⠀⠀⢀⠞⣿⣿⣟⣤⣖⣊⠠⠞⠃⠀⠀⢻⠤⡧⠤⢣⣊⠁⠁⡀⠉⣀⠄⠀⠀⠀⠀⠀
⠀⠀⠀⠀⡠⠀⠈⣏⣹⠤⣆⡤⠀⠀⠀⠀⠀⠈⢀⡔⣫⠭⢹⣿⢯⡿⣾⣿⣤⡄⣀⠴⠋⢠⣿⣿⣿⣷⣦⣭⣭⣖⣺⣶⣤⡄⠒⠓⢺⠈⠠⣀⢠⠟⠀⠋⠁⠀⠒⠂⠀⠀
⠀⠀⠀⢰⠀⠀⠠⡈⣬⢵⠟⣘⠲⠶⠀⠀⠀⣴⠟⣉⣴⣾⣿⣿⣸⢷⢸⠁⠹⡟⠃⠀⣠⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣷⡀⢀⡿⣈⠲⣤⠏⡴⠦⡀⡖⠒⠲⠄⠀⠀
⠀⠀⠀⠠⠤⠂⠊⠁⠈⠉⣎⣈⠵⢄⠀⠀⠀⣸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣀⣇⣟⣓⣶⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣇⠀⢠⠑⠇⠏⢴⡱⠒⣇⠘⢄⠀⠀⠀⠀
⠀⠂⠀⣠⠂⠀⠀⠀⠀⠸⡀⢰⡀⢸⠀⠀⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠟⣿⣒⡺⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠆⠀⠧⠀⠀⠀⠉⠑⠛⠀⢧⠀⠀⠀⠀
⢀⠤⠾⠷⣀⣀⠇⡴⣆⠀⠳⡀⠀⢹⠀⠀⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⣛⣀⡿⠶⠭⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣦⠀⠀⠀⢀⠀⣟⠯⢦⡀⠈⠓⢤⡀⠀
⡎⠓⠒⠂⠤⡞⢠⠓⠺⡀⠀⢧⠀⡘⠀⠀⠰⣿⣿⣿⣿⣿⣿⣿⣿⣿⡾⣍⡯⠭⣽⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣦⣄⠀⠈⣆⠳⢬⠒⠚⠒⡄⠀⠀⠀
⠘⠢⡀⠒⢒⠁⡎⢀⡰⠃⠤⡈⡤⢿⠀⠀⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⢭⣿⣟⣓⣿⣿⣿⣿⣿⣏⠩⠉⠉⠻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣷⣦⣘⠈⠀⢇⣩⣉⣹⠂⠀⠀
⢀⠐⢍⠁⠘⢰⢉⡽⢀⡀⢍⢳⠸⡷⠀⠀⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⣛⣿⣒⣾⣿⣿⣿⣿⣿⠷⡀⠂⠀⠀⠹⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣦⠈⢉⢀⣖⠁⠀⡆⠀
⠈⠱⠀⠀⠀⠛⠉⡴⠛⠘⠢⡉⠀⠀⠀⠀⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⣿⠭⢽⣿⣿⣿⣿⣿⣿⣭⣤⡄⠀⠀⢻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀⣀⠔⢋⡤⡄⠐⠂
⠀⠀⠀⠀⠀⡠⠚⠀⠀⠩⡍⠓⣄⠀⠀⠀⢸⣿⣿⣿⣿⠿⠿⠛⣛⣻⡿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣶⣶⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀⠀⢀⡧⠀⠈⢢⠀
⠀⠀⢰⠀⠀⢏⠉⠉⠉⠀⠉⠱⡼⠀⠀⠀⠰⣿⣿⣯⣤⡎⠭⠤⠜⣿⣿⣿⣓⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⠿⢥⠀⠰⣿⣄⣀⠀⠀⠀
⠀⠀⠸⠀⠀⡀⠉⡀⠀⠀⡰⠉⠀⠀⠀⠀⠀⠸⣿⣿⣿⡃⠒⣒⣽⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣻⣿⣿⣿⣿⣿⣿⡇⠀⠉⠉⠡⠄⠀⠀⠠⠜⠀⠀⠇⠀⠀⠀⠀⠀
⠀⠀⠀⠀⢰NEVER GONNA GIVE YOU UP⣿⣿⣿⣿⣿NEVER GONNA LET YOU DOWN⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠸⠀⠈⢳⠀⠔⠉⠀⣸⠀⠀⠀⠀⠀⠙⢢⠀⠌⣽⣿⣿⡿⠿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠊⠀⠀⠄⠀⠀⠀⠀⠀⠀⠀⣀⠌⠀⢠⢿⣿⣿⡇⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
I’d say that looks about right.
Colmena also supports options other than keyCommand, such as string or keyFile, but I kind of like the idea of not having all my secrets lying around unencrypted. There are also a bunch of other options on how to configure secrets, documented here.
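For reference, here is roughly what a more locked-down key definition could look like; the extra attributes (destDir, user, group, permissions) are the ones I understand the colmena documentation to offer, and the values are just examples:
{
  deployment.keys."colmena-test-secret" = {
    keyCommand = [ "pass" "colmena-test-secret" ];
    destDir = "/run/keys";   # where the key is placed on the node
    user = "root";           # owner of the deployed key file
    group = "root";
    permissions = "0600";    # file mode of the deployed key
  };
}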
Setting some Boundaries
While my relationship with colmena can so far be described as incredibly time-saving and quite a bit of fun, there were a couple of things I ran into that should be considered. When going down the path of automated deployment you can actually run into a bit of trouble in paradise: the more time you spend with colmena, the more it will over time - and quite sneakily - steal away your storage space.
There are two main issues I ran into kind of frequently with colmena that I wasn’t used to when (not regularly) updating my hosts manually:
- firstly, the size of the nix store simply explodes
- secondly, the amount of boot configurations gets so big that the /boot partition runs out of space
The size of the nix store can be easily constrained by creating a module for the nix garbage collector (./modules/nix-settings/default.nix):
{ config, lib, pkgs, ... }:
{
  nix = {
    gc = {
      automatic = true;
      dates = "weekly";
      options = "--delete-older-than 14d";
    };
    extraOptions = ''
      min-free = ${toString (100 * 1024 * 1024)}
      max-free = ${toString (1024 * 1024 * 1024)}
    '';
    settings.auto-optimise-store = true; # <- this option will hardlink identical files
  };
}
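For a one-off cleanup across all nodes, the same can also be triggered manually via colmena exec (this is plain nix-collect-garbage, nothing colmena-specific):
$ colmena exec -- 'nix-collect-garbage --delete-older-than 14d'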
We can also limit the size of the systemd journal by creating ./modules/limit-journal-size/default.nix:
{ config, lib, pkgs, ... }:
{
  services.journald.extraConfig = ''
    SystemMaxUse=100M
    MaxFileSec=7day
  '';
}
And then reference them inside our ./modules/colmena/default.nix:
{
  imports = [
    ../nix-settings
    ../limit-journal-size
  ];
  # https://github.com/NixOS/nix/issues/2330#issuecomment-451650296
  nix.settings.trusted-users = [ "root" "@wheel" ];
  users.users.root.openssh.authorizedKeys.keys = [
    "ssh-ed25519 ..."
    "ssh-rsa ..."
  ];
}
The second problem can be solved by creating a module which constrains the maximum number of available boot configurations. Create ./modules/bootConfigurationLimit/grub/default.nix:
{ config, pkgs, ... }:
{
  boot.loader.grub.configurationLimit = 16;
}
Respectively, ./modules/bootConfigurationLimit/systemdboot/default.nix:
{ config, pkgs, ... }:
{
  boot.loader.systemd-boot.configurationLimit = 16;
}
And then import whichever applies to your hosts in their ./hosts/<hostname>/default.nix files.
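For a host booting via systemd-boot, that import could look like this (paths as used throughout this post):
{ config, pkgs, ... }:
{
  imports = [
    ../../modules/bootConfigurationLimit/systemdboot
  ];
}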
After rebuilding and deploying, you should be somewhat safe from colmena’s nature of taking up your personal space.
At this point in time you should be able to manage and roll out your infrastructure fairly easily from a single flake. From here on, setting up new services over multiple hosts gets to be way more fun.
There is maybe one last thing I should mention: sometimes colmena will fail to deploy properly on some of your hosts. This actually happens rather frequently; however, most of the time a simple rerun of the deployment step will successfully activate the new configuration, and sometimes, depending on what you do, it helps to turn things off and on again.
1. There is an option called deployment.keys, but that is for secret management ↩︎
2. Note: using the -v flag returns the output of executed commands, whereas not using it simply returns success or failure with colmena. ↩︎
3. This is of course not strictly necessary, however I feel kind of comfortable knowing there is always a description of what is running on a node, which you can use to look up what is going on. ↩︎
4. When playing around with containers I noticed they apparently followed the channels, so it was possible to deploy a 22.05 container, whereas a container created on the remote host would still be on 21.11. Not sure how much of the sync function still applies, but it works for me (TM). ↩︎
5. When using colmena, I’ve found that inputting the password during an apply operation does not really seem to work nicely, so as a workaround I’ve created an empty entry in my password store called dummy, which I can use to unlock it before deploying by simply executing pass dummy before the deploy command. ↩︎