This is a post about tinc - a nifty little mesh VPN service. It is also the first part of a little series of posts related to tinc and NixOS. In this first part I’ll just write a bit about how to set up tinc, then in the second part I’ll take a closer look into how writing a NixOS module can make managing tinc networks easier. And finally, in the third part, I’ll present a rewrite of the module with a bunch more features. For now, let’s take a look at tinc.
TINC - a low maintenance VPN
tinc is one of my all-time favourite little tools: a nice little VPN daemon that has a bunch of really interesting properties, such as NAT traversal, automatic use of TCP and UDP, a per-node option of publishing subnets, and of course the fact that it is a mesh VPN, so nodes can go down without breaking everything. Also, while most people apparently haven’t heard about tinc, it has been around for ages.
In order to create a minimal tinc network you need to set up a network interface and create a tinc.conf file which contains information on the name of the node, the network device to use as well as the device type (tun or tap).
There are a bunch of more or less optional files involved in configuring tinc: for example, using two little scripts called tinc-up and tinc-down, the aforementioned network device that tinc is using can be created or removed on demand using whatever tools you prefer (e.g. iproute2 or ifconfig). Next, a public-private keypair (RSA and/or ECDSA) has to be generated. tinc provides an easy way to do so and generates a little node configuration file, which initially only contains the public key(s) of the node. This config file can then be filled with a bunch of configuration options, such as the node’s IP address inside the mesh, its (optional) public IP address, or subnets connected to the node that you want to make available to the rest of the mesh. Sounds good? Great, let’s look at the whole process in a bit more detail.
Setting up a TINC Mesh on Linux
Let’s set up a little example mesh on a bunch of imaginary machines:

- node0: with a public IP address, e.g. 10.10.10.1
- node1: with a private IP address
- node2: with a private IP address and connected to the network 192.168.0.0/24, which should be made accessible to node0 and node1

First, let’s install tinc on all of our hosts using something like this:
sudo apt install tinc
Then the VPN network has to be set up on each of the nodes. tinc allows for more than one concurrent VPN network and keeps these networks configured inside /etc/tinc/<vpn-name>, so create an example VPN network on all of the nodes (note that I’m using node{0,1,2} to express that a command should be executed on all three nodes):
[user@node{0,1,2}:~]$ sudo mkdir -p /etc/tinc/example
Next, create the configuration file for this little example VPN network inside /etc/tinc/example/tinc.conf:
[user@node0:/etc/tinc/example]$ cat tinc.conf
Name = node0
DeviceType = tun
[user@node1:/etc/tinc/example]$ cat tinc.conf
Name = node1
DeviceType = tun
ConnectTo = node0
[user@node2:/etc/tinc/example]$ cat tinc.conf
Name = node2
DeviceType = tun
ConnectTo = node0
Note that inside the configuration file we only set the network device type, not the device name; the latter can be set using the Interface option. Also, we set the nodes up in such a way that node1 as well as node2 will initially try to connect to node0 using the ConnectTo option. Interestingly, some of the options seem to be optional in practice: for example, the NixOS tinc module sets the explicit network interface name using the Interface option, but does not seem to use the ConnectTo option.
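To make that concrete, here is a hedged sketch of what node0’s tinc.conf could look like with an explicit device name (the interface name tinc0 is just an illustration and not used elsewhere in this setup):

```
Name = node0
DeviceType = tun
Interface = tinc0
```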
Next, the keypairs need to be configured on all of the hosts. In order to set up the VPN using an RSA key of length 4096, execute the following command on the nodes:
[user@node{0,1,2}:/etc/tinc/example]$ sudo tincd -n example -K4096
tinc will then create a private-public keypair, storing the private key in /etc/tinc/example/rsa_key.priv and appending the public key to the node configuration file in /etc/tinc/example/hosts/<node-name> on each of the nodes. Apart from the public key, these host files will initially be empty:
[user@node0:/etc/tinc/example]$ cat hosts/node0
-----BEGIN RSA PUBLIC KEY-----
...
-----END RSA PUBLIC KEY-----
[user@node1:/etc/tinc/example]$ cat hosts/node1
-----BEGIN RSA PUBLIC KEY-----
...
-----END RSA PUBLIC KEY-----
[user@node2:/etc/tinc/example]$ cat hosts/node2
-----BEGIN RSA PUBLIC KEY-----
...
-----END RSA PUBLIC KEY-----
At this point it makes sense to think about things such as the port used by the VPN and the subnet to use for the nodes. Let’s use 10.0.0.0/24 to address our nodes and use the default tinc port 655. Additionally, on node0 let’s add the node’s public IP address, and on node2 let’s add the 192.168.0.0/24 subnet:
[user@node0:/etc/tinc/example]$ cat hosts/node0
Address = 10.10.10.1
Subnet = 10.0.0.0
Port = 655
-----BEGIN RSA PUBLIC KEY-----
...
-----END RSA PUBLIC KEY-----
[user@node1:/etc/tinc/example]$ cat hosts/node1
Subnet = 10.0.0.1
Port = 655
-----BEGIN RSA PUBLIC KEY-----
...
-----END RSA PUBLIC KEY-----
[user@node2:/etc/tinc/example]$ cat hosts/node2
Subnet = 10.0.0.2
Subnet = 192.168.0.0/24
Port = 655
-----BEGIN RSA PUBLIC KEY-----
...
-----END RSA PUBLIC KEY-----
After setting up the node configuration files, they need to be shared. Since in our little example only node0 has a publicly reachable IP address, the easiest thing would be to upload the configs of node1 and node2 to node0 and at the same time grab the config file of node0. Ideally, of course, the configuration files should be synchronized between all nodes, but this is not a requirement for tinc: the example VPN will be accessible without having the config of node1 on node2 and vice versa.
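Assuming SSH access to node0’s public address and a user that may write to /etc/tinc (user is just a placeholder here), the exchange could look something like this on node1 (and analogously on node2):

```
[user@node1:/etc/tinc/example]$ scp hosts/node1 user@10.10.10.1:/etc/tinc/example/hosts/
[user@node1:/etc/tinc/example]$ scp user@10.10.10.1:/etc/tinc/example/hosts/node0 hosts/
```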
At this point the setup of the network interface as well as the routes is still missing. Remember those optional setup files I mentioned earlier? Let’s create /etc/tinc/example/tinc-{up,down} on all of the nodes:
[user@node0:/etc/tinc/example]$ cat tinc-up
#!/bin/sh
ip link set $INTERFACE up
ip addr add 10.0.0.0/32 dev $INTERFACE
ip route add 192.168.0.0/24 dev $INTERFACE
[user@node0:/etc/tinc/example]$ cat tinc-down
#!/bin/sh
ip route del 192.168.0.0/24 dev $INTERFACE
ip addr del 10.0.0.0/32 dev $INTERFACE
ip link set $INTERFACE down
[user@node2:/etc/tinc/example]$ cat tinc-up
#!/bin/sh
ip link set $INTERFACE up
ip addr add 10.0.0.2/32 dev $INTERFACE
[user@node2:/etc/tinc/example]$ cat tinc-down
#!/bin/sh
ip addr del 10.0.0.2/32 dev $INTERFACE
ip link set $INTERFACE down
Note that since the configuration on node1 and node0 is pretty much identical (apart from the IP address, of course), I have omitted the tinc-{up,down} scripts for node1.
The content of these scripts is pretty much straightforward:

- tinc-up brings up the network interface (tinc uses the environment variable INTERFACE to store the interface’s name), then assigns the IP address of the node and optionally (on node0 and node1) adds a route to a subnet that’s shared by one of the other nodes (in this case node2).
- tinc-down removes the IP address and route again and takes down the interface.
Both tinc-up and tinc-down are really only shell scripts, which are run when the tinc daemon starts respectively stops, so writing anything custom should be relatively straightforward.
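One custom addition worth mentioning: since node2 forwards packets between the mesh and 192.168.0.0/24, IP forwarding has to be enabled there for the shared subnet to actually be reachable. A sketch of an extended tinc-up for node2 (the sysctl line is my addition, not part of the setup above):

```shell
#!/bin/sh
ip link set $INTERFACE up
ip addr add 10.0.0.2/32 dev $INTERFACE
# node2 routes packets between the mesh and its LAN,
# so packet forwarding has to be switched on
sysctl -w net.ipv4.ip_forward=1
```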
One important part is to make sure the tinc-{up,down} scripts are executable:
[user@node{0,1,2}:/etc/tinc/example]$ sudo chmod +x tinc-{up,down}
At this point everything is set up and enabling and starting the example VPN is as simple as:
[user@node{0,1,2}:/etc/tinc/example]$ sudo systemctl enable tinc@example
[user@node{0,1,2}:/etc/tinc/example]$ sudo systemctl start tinc@example
After starting the example network, all nodes should be accessible via their VPN IP addresses and the 192.168.0.0/24 subnet should be accessible from all nodes as well.
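To verify that everything works, pinging the other nodes on their mesh addresses from node0 is a quick check (192.168.0.1 is a hypothetical host on node2’s shared subnet):

```
[user@node0:~]$ ping -c 3 10.0.0.1
[user@node0:~]$ ping -c 3 10.0.0.2
[user@node0:~]$ ping -c 3 192.168.0.1
```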
Setting up a TINC Mesh on NixOS
As with a lot of other things, when using tinc on NixOS everything works a bit differently. Instead of setting everything up by hand, there is a neat little tinc module.
Sidenote: Using a little alias, it is possible to search for NixOS options without using the options search on the NixOS website:
$ alias nix-search-options='man configuration.nix | less -p '
$ nix-search-options services.tinc
This will drop you inside a less buffer looking like this, where you can navigate using n and N to jump to the next respectively previous match:
...
services.tinc.networks
Defines the tinc networks which will be started. Each network invokes a
different daemon.
Type: attribute set of submodules
Default: { }
Declared by:
<nixpkgs/nixos/modules/services/networking/tinc.nix>
services.tinc.networks.<name>.package
The package to use for the tinc daemon's binary.
Type: package
Default: "pkgs.tinc_pre"
Declared by:
<nixpkgs/nixos/modules/services/networking/tinc.nix>
...
All in all there are a bunch of available options, but in order to get started you really only need the following (that is, from the tinc module specifically):
services.tinc.networks
Defines the tinc networks which will be started. Each network invokes
a different daemon.
services.tinc.networks.<name>.hosts
The name of the host in the network as well as the configuration for
that host. This name should only contain alphanumerics and underscores.
services.tinc.networks.<name>.name
The name of the node which is used as an identifier when communicating
with the remote nodes in the mesh. If null then the hostname of the
system is used to derive a name (note that tinc may replace non-
alphanumeric characters in hostnames by underscores).
Apart from these configuration options, we need to set up a bunch of other things, such as the network interface, routes, or ports on the firewall. This is similar to what we did using the tinc-{up,down} scripts and iproute2 in the previous section:
The interface tinc expects in NixOS is named tinc.<network-name>, so in this case tinc.example has to be created and set up. Here is how this would look for node0 from our little example:
...
networking.interfaces."tinc.example".ipv4 = {
addresses = [ { address = "10.0.0.0"; prefixLength = 24; } ];
routes = [ { address = "192.168.0.0"; prefixLength = 24; via = "10.0.0.2"; } ];
};
...
If the firewall is enabled on the nodes, the port used (e.g. 655) has to be opened as well:
...
networking.firewall.allowedTCPPorts = [ 655 ];
networking.firewall.allowedUDPPorts = [ 655 ];
...
Now let’s walk through how to set up tinc using NixOS. In order to make everything less confusing, let’s keep everything inside a nix file, e.g. tinc-example.nix, and import this file inside configuration.nix by adding it to the imports statement:
...
imports = [
...
./tinc-example.nix
...
];
...
Getting a tinc node up and running in NixOS is a two-step process. First, write down the configuration of the node and rebuild the system:
{ config, lib, pkgs, ... }:
{
networking.firewall.allowedTCPPorts = [ 655 ];
networking.firewall.allowedUDPPorts = [ 655 ];
networking.interfaces."tinc.example".ipv4 = {
addresses = [ { address = "10.0.0.0"; prefixLength = 24; } ];
routes = [ { address = "192.168.0.0"; prefixLength = 24; via = "10.0.0.2"; } ];
};
services.tinc.networks = {
example = {
name = "node0";
hosts = {};
};
};
}
At this point nix will check whether the tinc network has already been configured. If not, the node will be bootstrapped and /etc/tinc/example/hosts/<node-name> will be created containing the node’s public keys, but still missing the rest of the configuration.
Now, as the second step, the node can be added to the hosts attrset by grabbing the content of the newly generated file (e.g. /etc/tinc/example/hosts/node0) and adding the missing configuration options. From then on, the node’s config file will be overwritten by the configuration (along with all of the other configured hosts). The final tinc configuration will look like this:
{ config, lib, pkgs, ... }:
{
networking.firewall.allowedTCPPorts = [ 655 ];
networking.firewall.allowedUDPPorts = [ 655 ];
networking.interfaces."tinc.example".ipv4 = {
addresses = [ { address = "10.0.0.0"; prefixLength = 24; } ];
routes = [ { address = "192.168.0.0"; prefixLength = 24; via = "10.0.0.2"; } ];
};
services.tinc.networks = {
example = {
name = "node0";
hosts = {
node0 = ''
Address = 10.10.10.1
Subnet = 10.0.0.0
Port = 655
Ed25519PublicKey = ...
-----BEGIN RSA PUBLIC KEY-----
...
-----END RSA PUBLIC KEY-----
'';
node1 = ''
Subnet = 10.0.0.1
Port = 655
Ed25519PublicKey = ...
-----BEGIN RSA PUBLIC KEY-----
...
-----END RSA PUBLIC KEY-----
'';
node2 = ''
Subnet = 10.0.0.2
Subnet = 192.168.0.0/24
Port = 655
Ed25519PublicKey = ...
-----BEGIN RSA PUBLIC KEY-----
...
-----END RSA PUBLIC KEY-----
'';
};
};
};
}
At this point the example network can be enabled by simply doing a nixos-rebuild switch, and the node should be connected.
In the next part we’ll take a closer look at how writing a NixOS module can help keep the NixOS configuration simple and readable.