Free Software, Free Society!
Thoughts of the FSFE Community (English)

Thursday, 28 May 2020

Send your talks for Akademy 2020 *now*

The Call for Participation is still open for two more weeks, but please do us a favour and send yours *now*.

This way we don't have to panic about whether we will need to go chasing people, or whether we will end up with too few or too many proposals.

Also, if you ask the talks committee for a review, we can look at your talk early, give you feedback and help you improve it, so it's a win-win.

So head over to the Call for Participation page, find the submit link in the middle of that wall of text and click it ;)

Sunday, 24 May 2020

chmk, a simple CHM viewer

Okular can view CHM files; to do so it uses KHTML, which makes sense since CHM is basically HTML plus images, all compressed into a single file.

This is somewhat problematic, since KHTML is largely unmaintained and I doubt it will get a Qt6 port.

The problem is that the only other Qt-based HTML rendering engine is QtWebEngine, and while great, it doesn't support things we would need to use it in Okular: Okular needs access to the rendered image of the page and also to the text, since it uses the same API for all formats, be it CHM, PDF, epub, whatever.

The easiest plan to move forward is probably to drop CHM from Okular, but that means no more CHM viewing in KDE software, which would be a bit sad.

So I thought, OK, maybe I can do a quick CHM viewer based on QtWebEngine, without trying to fit it into the Okular backend that has to support different formats.

And ChmK was born

It's still very simple, but the basics work: if you give it a file on the command line, it will open it and you will be able to browse it.

As you can see it doesn't have *any* UI yet, so Merge Requests more than welcome.

Saturday, 16 May 2020

Network Namespaces - Part Three

Previously on … Network Namespaces - Part Two we provided internet access to the namespace, enabled a different DNS than our system's and ran a graphical application (xterm/firefox) from within it.

The scope of this article is to run a VPN service from within this namespace. We will run a VPN client and try to enable firewall rules inside.



My VPN of preference is dsvpn; you can read how to set it up in the blog post below.

dsvpn is a TCP, point-to-point VPN, using a symmetric key.

The instructions in this article will give you an understanding of how to run a different VPN service as well.
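As a sketch of that general pattern (the helper script name and path are my own; the client command is whatever your VPN ships, and <server-ip> stands for an address omitted here):

```shell
# Generic helper: run any VPN client inside a network namespace.
# Written to a file so it can be reused; the tun device the client
# creates will be visible only inside that namespace.
cat > /tmp/netns-vpn.sh <<'EOF'
#!/bin/bash
# Usage: netns-vpn.sh <namespace> <vpn-client-command...>
set -euo pipefail
ns="$1"; shift
# exec so the client (and everything it spawns) inherits the namespace
exec ip netns exec "$ns" "$@"
EOF
chmod +x /tmp/netns-vpn.sh

# e.g.: /tmp/netns-vpn.sh ebal dsvpn client /root/dsvpn.key <server-ip> 443
```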

Find your external IP

Before running the VPN client, let's see what our current external IP address is:

ip netns exec ebal curl

The above IP is an example.

IP address and route of the namespace

ip netns exec ebal ip address show v-ebal

375: v-ebal@if376: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether c2:f3:a4:8a:41:47 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet scope global v-ebal
       valid_lft forever preferred_lft forever
    inet6 fe80::c0f3:a4ff:fe8a:4147/64 scope link
       valid_lft forever preferred_lft forever

ip netns exec ebal ip route show

default via dev v-ebal dev v-ebal proto kernel scope link src


Open firefox (see part two) and visit the IP-geolocation site; we will notice that the location of our IP is Athens, Greece.

ip netns exec ebal bash -c "XAUTHORITY=/root/.Xauthority firefox"


Run VPN client

We have the symmetric key dsvpn.key and we know the VPN server’s IP.

ip netns exec ebal dsvpn client dsvpn.key 443

Interface: [tun0]
Trying to reconnect
Connecting to
net.ipv4.tcp_congestion_control = bbr


We cannot see this tunnel VPN interface from our host machine:

# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 94:de:80:6a:de:0e brd ff:ff:ff:ff:ff:ff

376: v-eth0@if375: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1a:f7:c2:fb:60:ea brd ff:ff:ff:ff:ff:ff link-netns ebal


but it exists inside the namespace; we can see the tun0 interface here:

ip netns exec ebal ip link

1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

3: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 500

375: v-ebal@if376: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether c2:f3:a4:8a:41:47 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Find your external IP again

Check your external IP from within the namespace:

ip netns exec ebal curl

Firefox netns

Running firefox again, we will notice that the location of our IP is now Helsinki (the VPN server's location).

ip netns exec ebal bash -c "XAUTHORITY=/root/.Xauthority firefox"



We can wrap the dsvpn client in a systemd service:

[Unit]
Description=Dead Simple VPN - Client

[Service]
ExecStart=ip netns exec ebal /usr/local/bin/dsvpn client /root/dsvpn.key 443
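A fuller unit sketch — the Type, Restart and [Install] settings below are my additions (reasonable defaults, not from the original post), and <server-ip> is a placeholder for the VPN server's address:

```ini
[Unit]
Description=Dead Simple VPN - Client
# Assumption: start only once the network is up
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
# dsvpn client arguments: key file, server address, port
ExecStart=ip netns exec ebal /usr/local/bin/dsvpn client /root/dsvpn.key <server-ip> 443
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

With the [Install] section present, `systemctl enable dsvpn.service` would also start it at boot.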


Start the systemd service

systemctl start dsvpn.service


ip -n ebal a

1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UNKNOWN group default qlen 500
    inet peer scope global tun0
       valid_lft forever preferred_lft forever
    inet6 64:ff9b::c0a8:c001 peer 64:ff9b::c0a8:c0fe/96 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::ee69:bdd8:3554:d81/64 scope link stable-privacy
       valid_lft forever preferred_lft forever

375: v-ebal@if376: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether c2:f3:a4:8a:41:47 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet scope global v-ebal
       valid_lft forever preferred_lft forever
    inet6 fe80::c0f3:a4ff:fe8a:4147/64 scope link
       valid_lft forever preferred_lft forever

ip -n ebal route

default via dev v-ebal dev v-ebal proto kernel scope link src dev tun0 proto kernel scope link src


We can also have different firewall policies for each namespace


# iptables -nvL | wc -l


ip netns exec ebal iptables -nvL

Chain INPUT (policy ACCEPT 9 packets, 2547 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain OUTPUT (policy ACCEPT 2 packets, 216 bytes)
 pkts bytes target     prot opt in     out     source        destination

So, for the VPN service running inside the namespace, we can REJECT all network traffic except the traffic towards our VPN server and, of course, the veth (point-to-point) interface to our host machine.

iptables rules

Enter the namespace


ip netns exec ebal bash


Verify that the iptables rules are empty:

iptables -nvL

Chain INPUT (policy ACCEPT 25 packets, 7373 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain OUTPUT (policy ACCEPT 4 packets, 376 bytes)
 pkts bytes target     prot opt in     out     source        destination

Enable firewall

The rules file has the following content:

## iptables rules

iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -m conntrack --ctstate INVALID -j DROP
iptables -A INPUT -p icmp --icmp-type 8 -m conntrack --ctstate NEW -j ACCEPT

## netns - incoming
iptables -A INPUT -i v-ebal -s -j ACCEPT

## Reject incoming traffic
iptables -A INPUT -j REJECT

iptables -A OUTPUT -p tcp -m tcp -o v-ebal -d --dport 443 -j ACCEPT

## net-ns outgoing
iptables -A OUTPUT -o v-ebal -d -j ACCEPT

## Allow tun
iptables -A OUTPUT -o tun+ -j ACCEPT

## Reject outgoing traffic
iptables -A OUTPUT -p tcp -j REJECT --reject-with tcp-reset
iptables -A OUTPUT -p udp -j REJECT --reject-with icmp-port-unreachable


iptables -nvL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination
    0     0 ACCEPT     all  --  lo     *
    0     0 ACCEPT     all  --  *      *     ctstate RELATED,ESTABLISHED
    0     0 DROP       all  --  *      *     ctstate INVALID
    0     0 ACCEPT     icmp --  *      *     icmptype 8 ctstate NEW
    1   349 ACCEPT     all  --  v-ebal *
    0     0 REJECT     all  --  *      *     reject-with icmp-port-unreachable
    0     0 ACCEPT     all  --  lo     *
    0     0 ACCEPT     all  --  *      *     ctstate RELATED,ESTABLISHED
    0     0 DROP       all  --  *      *     ctstate INVALID
    0     0 ACCEPT     icmp --  *      *     icmptype 8 ctstate NEW
    0     0 ACCEPT     all  --  v-ebal *
    0     0 REJECT     all  --  *      *     reject-with icmp-port-unreachable

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination
    0     0 ACCEPT     tcp  --  *      v-ebal tcp dpt:8443
    0     0 ACCEPT     all  --  *      v-ebal
    0     0 ACCEPT     all  --  *      tun+
    0     0 REJECT     tcp  --  *      *     reject-with tcp-reset
    0     0 REJECT     udp  --  *      *     reject-with icmp-port-unreachable
    0     0 ACCEPT     tcp  --  *      v-ebal tcp dpt:8443
    0     0 ACCEPT     all  --  *      v-ebal
    0     0 ACCEPT     all  --  *      tun+
    0     0 REJECT     tcp  --  *      *     reject-with tcp-reset
    0     0 REJECT     udp  --  *      *     reject-with icmp-port-unreachable

PS: We reject TCP/UDP traffic (last two lines), but allow ICMP (ping).
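Note, by the way, that the listing above shows every rule twice: the rules file was applied twice, and `-A` appends a fresh copy on each run. A small wrapper script avoids this by flushing the filter chains first (the script and rules-file paths are placeholders of mine):

```shell
# Idempotent wrapper: flushing first means re-running never duplicates rules.
cat > /tmp/apply-fw.sh <<'EOF'
#!/bin/bash
set -euo pipefail
# Remove existing rules from the chains we manage
iptables -F INPUT
iptables -F OUTPUT
# Then apply the rules file shown above (placeholder path)
bash /root/iptables.rules
EOF
chmod +x /tmp/apply-fw.sh
```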


End of part three.

Tuesday, 12 May 2020

How fun my Ring Fit Adventure² looks after some time

So after a full “Ring Fit” month – i.e. 30 days played1 – let us see how this workout game held up for me.

Historic context: COVID-19 and Ring Fit Adventure

This might be less interesting for those reading now, but might be relevant information for anyone reading this in a few years.

The first part of the year 2020 has been marked by COVID-19. The disease has spread into a pandemic impacting the whole globe, with many countries going into a more or less full lockdown. The situation is slowly changing, as in May many countries in Europe are lifting their lockdowns.

With everyone who could having to stay at home, the demand for Ring Fit Adventure rose drastically. Not only is it almost impossible to get one, due to it being out of stock pretty much everywhere; there are even rumours that in China you can only get one on the black/grey market for 250 $ (i.e. more than 3× the MSRP).

30 workout days in 76 days

The main question is obviously how much Ring Fit Adventure got me to work out.

First off, I have to admit that I did not train every work day, as I had hoped, but still much more than usual.

One reason I skipped two full weeks was that I had to send the Joy-Cons in for repair, as their notorious drift started to show quite badly. This prevented me from working out, as without Joy-Cons I could not play it.

I took another fortnight-long break from it before that, due to not feeling well.

Now let us look at some data.

According to Ring Fit Adventure itself:

  • I reached world 9 of the story3
  • my character is now at level 75
  • and my difficulty setting is at level 20
  • the total time I exercised2 is 8h 18'
  • in which I burned 2670 kcal of energy
  • and ran a total of 28 km

Looking at my own log, this is how much I trained in the past few months:

yoga gymnastics rowing Ring Fit
2019-12 3
2020-01 1 1
2020-02 2 10
2020-03 12
2020-04 7
2020-05 3 1

This table needs some comments:

  • I got my Ring Fit Adventure in early February. But as you can see, my training regime was a bit lackluster in the two months before it. It was somewhat better the previous year, but I lost that piece of paper.
  • Rowing is something I did go to last year, but I have not mustered the courage to go back in several months … and then COVID-19 kicked in, so I am not even allowed to go (ha! another good excuse).
  • We are only half-way through May and I only got my Joy-Cons back from repair yesterday, so I was not able to play Ring Fit Adventure for the whole first half of the month. And the rest of the month is still ahead of me.

In my opinion, this clearly shows that Ring Fit Adventure did trick me into training more often. Including all the hiccups, it took me 76 days to work out on 30 days (or 36, if you ask the game1), which is roughly every second day. For comparison, to reach my goal of exercising every work day, I would have had to play it for 55 days in the same time frame.

In sum, I am happy with the results, but will try to train more in the future, with the hope to soon be able to go work out with others (e.g. go rowing).

Adventure content so far

As mentioned, in those 30 days I reached World 93 – in fact, I could have taken on the boss today and reached World 10, but decided instead to revisit some old levels and side quests.

What follows is a short summary of what Worlds 5 to 9 newly introduce to the game. I have to say that so far every world has brought something new and refreshing, so the gameplay never got stale. In fact, I have the small problem that I now have almost too many fighting skills to choose from. Good thing that by now old fighting skills are getting levelled up, so they are again a reasonable choice to bring to battle.

What also proved as a great feature – especially after a longer break – is that when you start up Adventure mode, it summarises the story so far.

Update: Voices and languages

In the update, they also included more voice languages and the choice for Ring to have either a male or female voice.

I find that a neat addition, and I am currently using a male French voice with English subtitles to brush up on my French while I am working out.

World 5 & 6

  • introduce days that target only specific skills/muscle groups, and do so in a very nice, fun and varied way
  • introduce skill points and a skill tree
  • gyms now also have skill sets – a great and easy way to get some targeted workout in Adventure mode as well

World 7

  • the plot twist is becoming apparent and the story is becoming more varied – while nothing award-winning, it is actually enjoyable
  • unlocks All Sets as a choice for setting fighting skills – i.e. choosing to go to fights with workout skills that focus on a certain muscle group, posture, cardio workout, etc.
  • new enemy that can buff defense

World 8 & 9

  • expand the skill tree, including upgrades to previous fighting skills
  • adds a new movement skill that enables reaching alternative paths (also in previous levels), adding to the replayability of past levels; if timed right, it doubles as battle evasion

Other modes

Since my last blog entry about Ring Fit Adventure, the game has received an update that introduces two new modes:

  • Rhythm game – in this mode you press and pull the RingCon and/or twist your torso to the beat of a song, while also occasionally having to squat or stand up. It seems like a pretty fun party game, especially if they introduce more songs. The only issue I have with it is that I have trouble keeping the RingCon in place when doing the fast twists.
  • Running mode – in this mode, you simply run through a level, without any mob encounters or similar. I have not played this mode much, but for a more lazy day, it seems fitting and relaxing.

Final general thoughts

To start with the negative: in the past weeks my RingCon did not recognise the transition from presses to pulls very well. But that was easily fixed with a simple recalibration from the options menu, and I have not had those issues since.

So, after a full Ring Fit month, how do I feel about it?

While it did not make me lose much weight or gain any major muscles, my back has not ached since. Well, except maybe a little bit when I did not train for two weeks. But even then it took only one or two days of playing for it to stop aching again. I also do feel more fit in general.

In that regard I would say it is a great success.

Did it trick me into training more? Perhaps not as often as my ambitious expectations demanded, but it has undeniably improved my workout regime immensely.

I do think the story mode is a great way to pull me in, and apart from the fun story and overall atmosphere, I think this is in great part due to the RPG elements. I am not a big fan of the gamification of everything, and “adding RPG elements” is usually just a euphemism for introducing some levels, badges and collectibles to trick the player into investing more time. But in my opinion this game does it right.

That being said, I am continuing to work out using Ring Fit Adventure and will keep bumping up the difficulty to keep my workout at “Moderate Workout”.

hook out → very happy to be able to move and play again

  1. The game claims I have been playing for 36 days, but I suspect this includes non-adventure modes. 

  2. Ring Fit Adventure records only time actually spent exercising. For comparison, my Switch records that I played the game for 20 h. 

  3. Apparently there are 20 worlds in total (encompassing 100 levels all together) with a NG+ and NG++ as well. So with 30 full days, I am probably not even half way through. If the trend continues, the worlds are only going to get longer, not shorter. 

Network Namespaces - Part Two

Previously on … Network Namespaces - Part One we discussed how to create an isolated network namespace and use a veth pair to talk between the host system and the namespace.

In this article we continue our story and try to connect that namespace to the internet.

recap previous commands

ip netns add ebal
ip link add v-eth0 type veth peer name v-ebal
ip link set v-ebal netns ebal
ip addr add dev v-eth0
ip netns exec ebal ip addr add dev v-ebal
ip link set v-eth0 up
ip netns exec ebal ip link set v-ebal up

Access namespace

ip netns exec ebal bash

# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

3: v-ebal@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e2:07:60:da:d5:cf brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet scope global v-ebal
       valid_lft forever preferred_lft forever
    inet6 fe80::e007:60ff:feda:d5cf/64 scope link
       valid_lft forever preferred_lft forever

# ip r
 dev v-ebal proto kernel scope link src

Ping Veth

It’s not a gateway, this is a point-to-point connection.

# ping -c3

PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.415 ms
64 bytes from icmp_seq=2 ttl=64 time=0.107 ms
64 bytes from icmp_seq=3 ttl=64 time=0.126 ms

--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2008ms
rtt min/avg/max/mdev = 0.107/0.216/0.415/0.140 ms


Ping internet

trying to access anything else …

ip netns exec ebal ping -c2
ip netns exec ebal ping -c2
ip netns exec ebal ping -c2
ip netns exec ebal ping -c2
root@ubuntu2004:~# ping
ping: connect: Network is unreachable

root@ubuntu2004:~# ping
ping: connect: Network is unreachable

root@ubuntu2004:~# ping
ping: Temporary failure in name resolution

root@ubuntu2004:~# exit

exit from namespace.


We need to define a gateway route from within the namespace

ip netns exec ebal ip route add default via

root@ubuntu2004:~# ip netns exec ebal ip route list
default via dev v-ebal dev v-ebal proto kernel scope link src

test connectivity - system

We can reach the host system, but we cannot reach anything else:

# ip netns exec ebal ping -c1
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.075 ms

--- ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms

# ip netns exec ebal ping -c3
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.026 ms
64 bytes from icmp_seq=2 ttl=64 time=0.128 ms
64 bytes from icmp_seq=3 ttl=64 time=0.126 ms

--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2033ms
rtt min/avg/max/mdev = 0.026/0.093/0.128/0.047 ms

# ip netns exec ebal ping -c3
PING ( 56(84) bytes of data.

--- ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2044ms

root@ubuntu2004:~# ip netns exec ebal ping -c3
ping: Temporary failure in name resolution



What is the issue here?

We added a default route to the network namespace. Traffic will start from v-ebal (the veth interface inside the namespace), go to v-eth0 (the veth interface on our host) and then … then nothing.

The eth0 interface receives the network packets but does not know what to do with them. We need to enable forwarding on our host, so that the eth0 network interface knows to forward traffic from the namespace to the next hop.

echo 1 > /proc/sys/net/ipv4/ip_forward

or, equivalently:

sysctl -w net.ipv4.ip_forward=1

permanent forward

If we want to permanently tell eth0 to always forward traffic, then we need to edit /etc/sysctl.conf and add the line below:

net.ipv4.ip_forward = 1

To enable this option without rebooting our system:

sysctl -p /etc/sysctl.conf


root@ubuntu2004:~# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1


But if we test again, we will notice that nothing happened. Actually, something did happen, just not what we expected. At this moment eth0 knows how to forward network packets to the next hop (perhaps the router or internet gateway), but the next hop will receive packets from an unknown network.

Remember that our internal network is a point-to-point connection to the host, so there is no way for the outside network to know how to route traffic back to it.

We have to masquerade all packets that come from the namespace, and the easiest way to do this is via iptables, using the POSTROUTING chain of the nat table. That means the outgoing traffic with the namespace's source address will be masked: it will pretend to be from the host's eth0 address before going to the next hop (gateway).

# iptables --table nat --flush
iptables --table nat --append POSTROUTING --source --jump MASQUERADE
iptables --table nat --list

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --           anywhere

Test connectivity

Test the namespace connectivity again:

# ip netns exec ebal ping -c2
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from icmp_seq=2 ttl=64 time=0.139 ms

--- ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1017ms
rtt min/avg/max/mdev = 0.054/0.096/0.139/0.042 ms

# ip netns exec ebal ping -c2
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=63 time=0.242 ms
64 bytes from icmp_seq=2 ttl=63 time=0.636 ms

--- ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1015ms
rtt min/avg/max/mdev = 0.242/0.439/0.636/0.197 ms

# ip netns exec ebal ping -c2
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=51 time=57.8 ms
64 bytes from icmp_seq=2 ttl=51 time=58.0 ms

--- ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 57.805/57.896/57.988/0.091 ms

# ip netns exec ebal ping -c2
ping: Temporary failure in name resolution





If you looked carefully above, pinging an IP address works, but name resolution does not.

netns - resolv

Reading ip-netns manual

# man ip-netns | tbl | grep resolv

  resolv.conf for a network namespace used to isolate your vpn you would name it /etc/netns/myvpn/resolv.conf.

We can create a resolver configuration file at this location:

mkdir -pv /etc/netns/ebal/
echo nameserver > /etc/netns/ebal/resolv.conf

I am using radicalDNS for this namespace.

Verify DNS

# ip netns exec ebal cat /etc/resolv.conf

Connect to the namespace

ip netns exec ebal bash

root@ubuntu2004:~# cat /etc/resolv.conf

root@ubuntu2004:~# ping -c 5
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=50 time=64.3 ms
64 bytes from ( icmp_seq=2 ttl=50 time=64.2 ms
64 bytes from ( icmp_seq=3 ttl=50 time=66.9 ms
64 bytes from ( icmp_seq=4 ttl=50 time=63.8 ms
64 bytes from ( icmp_seq=5 ttl=50 time=63.3 ms

--- ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 63.344/64.502/66.908/1.246 ms

root@ubuntu2004:~# ping -c3
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=51 time=57.4 ms
64 bytes from ( icmp_seq=2 ttl=51 time=55.4 ms
64 bytes from ( icmp_seq=3 ttl=51 time=55.2 ms

--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 55.209/55.984/57.380/0.988 ms

bonus - run firefox from within namespace


start with something simple first, like xterm

ip netns exec ebal xterm


ip netns exec ebal xterm -fa liberation -fs 11


test firefox

Trying to run firefox within this namespace will produce an error:

# ip netns exec ebal firefox
Running Firefox as root in a regular user's session is not supported.  ($XAUTHORITY is /home/ebal/.Xauthority which is owned by ebal.)

and xauth info will inform us that the current Xauthority file is owned by our local user:

# xauth info
Authority file:       /home/ebal/.Xauthority
File new:             no
File locked:          no
Number of entries:    4
Changes honored:      yes
Changes made:         no
Current input:        (argv):1

okay, get inside this namespace

ip netns exec ebal bash

and provide a new authority file for firefox

XAUTHORITY=/root/.Xauthority firefox

# XAUTHORITY=/root/.Xauthority firefox

No protocol specified
Unable to init server: Could not connect: Connection refused
Error: cannot open display: :0.0


xhost provides access control to the Xorg graphical server.
By default it should look like this:

$ xhost
access control enabled, only authorized clients can connect

We can also disable access control entirely:

xhost +

but what we actually need is to disable access control only for local connections:

xhost +local:


and if we do all that

ip netns exec ebal bash -c "XAUTHORITY=/root/.Xauthority firefox"
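All of part two can be condensed into one script. The concrete IP addresses are omitted above, so here they are shell variables you must fill in yourself; the script file name is my own choice:

```shell
cat > /tmp/netns-setup.sh <<'EOF'
#!/bin/bash
# End-to-end recap of part two. HOST_IP, NS_IP and DNS_IP are
# placeholders - substitute your own point-to-point addresses.
set -euo pipefail
NS=ebal
HOST_IP=${HOST_IP:?host-side address/prefix for v-eth0}
NS_IP=${NS_IP:?namespace-side address/prefix for v-ebal}
DNS_IP=${DNS_IP:?nameserver IP for the namespace}

ip netns add "$NS"
ip link add v-eth0 type veth peer name "v-$NS"
ip link set "v-$NS" netns "$NS"
ip addr add "$HOST_IP" dev v-eth0
ip netns exec "$NS" ip addr add "$NS_IP" dev "v-$NS"
ip link set v-eth0 up
ip netns exec "$NS" ip link set lo up
ip netns exec "$NS" ip link set "v-$NS" up
# default route via the host end of the veth pair
ip netns exec "$NS" ip route add default via "${HOST_IP%/*}"
# forwarding + masquerade on the host
sysctl -w net.ipv4.ip_forward=1
iptables --table nat --append POSTROUTING --source "${NS_IP%/*}" --jump MASQUERADE
# namespace-specific resolv.conf, bind-mounted over /etc/resolv.conf
# by 'ip netns exec'
mkdir -p "/etc/netns/$NS"
echo "nameserver $DNS_IP" > "/etc/netns/$NS/resolv.conf"
EOF
chmod +x /tmp/netns-setup.sh
```

Running it (as root) reproduces the setup of this article in one go.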


End of part two

Saturday, 09 May 2020

Network Namespaces - Part One

Have you ever wondered how containers work on the network level? How they isolate resources and network access? Linux namespaces are the magic behind all this, and in this blog post I will try to explain how to set up your own private, isolated network stack on your Linux box.

notes based on ubuntu:20.04, root access.

current setup

Our current setup is similar to this


List ethernet cards

ip address list

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ea:50:87 brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feea:5087/64 scope link
       valid_lft forever preferred_lft forever

List routing table

ip route list

default via dev eth0 proto static dev eth0 proto kernel scope link src


Checking internet access and dns

ping -c 5

PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=54 time=121 ms
64 bytes from ( icmp_seq=2 ttl=54 time=124 ms
64 bytes from ( icmp_seq=3 ttl=54 time=182 ms
64 bytes from ( icmp_seq=4 ttl=54 time=162 ms
64 bytes from ( icmp_seq=5 ttl=54 time=168 ms

--- ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4004ms
rtt min/avg/max/mdev = 121.065/151.405/181.760/24.299 ms

linux network namespace management

In this article we will use the below programs:

so, let us start working with network namespaces.


To view the network namespaces, we can type:

ip netns
ip netns list

This will return nothing: an empty list.


So let's quickly view the help of ip-netns:

# ip netns help

Usage:  ip netns list
  ip netns add NAME
  ip netns attach NAME PID
  ip netns set NAME NETNSID
  ip [-all] netns delete [NAME]
  ip netns identify [PID]
  ip netns pids NAME
  ip [-all] netns exec [NAME] cmd ...
  ip netns monitor
  ip netns list-id [target-nsid POSITIVE-INT] [nsid POSITIVE-INT]


To monitor in real time any changes, we can open a new terminal and type:

ip netns monitor

Add a new namespace

ip netns add ebal

List namespaces

ip netns list

root@ubuntu2004:~# ip netns add ebal
root@ubuntu2004:~# ip netns list

We have one namespace

Delete Namespace

ip netns del ebal

Full example

root@ubuntu2004:~# ip netns
root@ubuntu2004:~# ip netns list
root@ubuntu2004:~# ip netns add ebal
root@ubuntu2004:~# ip netns list
root@ubuntu2004:~# ip netns
root@ubuntu2004:~# ip netns del ebal
root@ubuntu2004:~# ip netns
root@ubuntu2004:~# ip netns list


root@ubuntu2004:~# ip netns monitor
add ebal
delete ebal


When we create a new network namespace, it creates an object under /var/run/netns/.

root@ubuntu2004:~# ls -l /var/run/netns/
total 0
-r--r--r-- 1 root root 0 May  9 16:44 ebal


We can run commands inside a namespace.


ip netns exec ebal ip a

root@ubuntu2004:~# ip netns exec ebal ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00


We can also open a shell inside the namespace and run commands through the shell.

root@ubuntu2004:~# ip netns exec ebal bash

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

root@ubuntu2004:~# exit


As you can see, the namespace is isolated from our system: it has only one local (loopback) interface and nothing else.

We can bring the loopback interface up:

root@ubuntu2004:~# ip link set lo up

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

root@ubuntu2004:~# ip r


The veth devices are virtual Ethernet devices. They can act as tunnels between network namespaces to create a bridge to a physical network device in another namespace, but can also be used as standalone network devices.

Think of a veth pair as a physical cable that connects two different computers; each veth device is one end of this cable.

So we need two virtual interfaces to connect our system and the new namespace.

ip link add v-eth0 type veth peer name v-ebal



root@ubuntu2004:~# ip link add v-eth0 type veth peer name v-ebal

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ea:50:87 brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feea:5087/64 scope link
       valid_lft forever preferred_lft forever

3: v-ebal@v-eth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d6:86:88:3f:eb:42 brd ff:ff:ff:ff:ff:ff

4: v-eth0@v-ebal: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 3e:85:9b:dd:c7:96 brd ff:ff:ff:ff:ff:ff

Attach v-ebal to the namespace

Now we are going to move one virtual interface (one end of the cable) to the new network namespace

ip link set v-ebal netns ebal


We will see that the interface is no longer showing on our system:

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ea:50:87 brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feea:5087/64 scope link
       valid_lft forever preferred_lft forever

4: v-eth0@if3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 3e:85:9b:dd:c7:96 brd ff:ff:ff:ff:ff:ff link-netns ebal

Inside the namespace:

root@ubuntu2004:~# ip netns exec ebal ip a 

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

3: v-ebal@if4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d6:86:88:3f:eb:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Connect the two virtual interfaces


ip addr add dev v-eth0

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ea:50:87 brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feea:5087/64 scope link
       valid_lft forever preferred_lft forever

4: v-eth0@if3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 3e:85:9b:dd:c7:96 brd ff:ff:ff:ff:ff:ff link-netns ebal
    inet scope global v-eth0
       valid_lft forever preferred_lft forever


ip netns exec ebal ip addr add dev v-ebal

root@ubuntu2004:~# ip netns exec ebal ip a 

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

3: v-ebal@if4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d6:86:88:3f:eb:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet scope global v-ebal
       valid_lft forever preferred_lft forever

Both Interfaces are down!

So we need to bring up both interfaces:


ip link set v-eth0 up

root@ubuntu2004:~# ip link set v-eth0 up 

root@ubuntu2004:~# ip link show v-eth0
4: v-eth0@if3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN mode DEFAULT group default qlen 1000
    link/ether 3e:85:9b:dd:c7:96 brd ff:ff:ff:ff:ff:ff link-netns ebal


ip netns exec ebal ip link set v-ebal up

root@ubuntu2004:~# ip netns exec ebal ip link set v-ebal up

root@ubuntu2004:~# ip netns exec ebal ip link show v-ebal
3: v-ebal@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d6:86:88:3f:eb:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Did it work?

Let’s first see our routing table:


root@ubuntu2004:~# ip r
default via dev eth0 proto static dev v-eth0 proto kernel scope link src dev eth0 proto kernel scope link src


root@ubuntu2004:~# ip netns exec ebal ip r dev v-ebal proto kernel scope link src

Ping !


root@ubuntu2004:~# ping -c 5
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.028 ms
64 bytes from icmp_seq=2 ttl=64 time=0.042 ms
64 bytes from icmp_seq=3 ttl=64 time=0.052 ms
64 bytes from icmp_seq=4 ttl=64 time=0.042 ms
64 bytes from icmp_seq=5 ttl=64 time=0.071 ms

--- ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4098ms
rtt min/avg/max/mdev = 0.028/0.047/0.071/0.014 ms


root@ubuntu2004:~# ip netns exec ebal bash
root@ubuntu2004:~# ping -c 5
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.046 ms
64 bytes from icmp_seq=2 ttl=64 time=0.042 ms
64 bytes from icmp_seq=3 ttl=64 time=0.041 ms
64 bytes from icmp_seq=4 ttl=64 time=0.044 ms
64 bytes from icmp_seq=5 ttl=64 time=0.053 ms

--- ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4088ms
rtt min/avg/max/mdev = 0.041/0.045/0.053/0.004 ms
root@ubuntu2004:~# exit

It worked !!
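The whole sequence above can be condensed into a single sketch. Since the exact IP addresses are not shown in this copy of the post, the 10.10.10.0/24 addresses below are hypothetical examples; any free private subnet works:

```shell
# Sketch of the whole sequence as one script. The 10.10.10.0/24 addresses
# are example values, not the post's originals. Requires root to run.
setup_veth_ns() {
    ns="ebal"
    ip netns add "$ns" 2>/dev/null || true          # namespace may already exist
    ip link add v-eth0 type veth peer name v-ebal   # one cable, two ends
    ip link set v-ebal netns "$ns"                  # move one end inside
    ip addr add 10.10.10.1/24 dev v-eth0            # host side (example address)
    ip netns exec "$ns" ip addr add 10.10.10.10/24 dev v-ebal
    ip link set v-eth0 up
    ip netns exec "$ns" ip link set lo up
    ip netns exec "$ns" ip link set v-ebal up
    ip netns exec "$ns" ping -c 1 10.10.10.1        # the host side should answer
}
# setup_veth_ns   # uncomment to run, as root
```

The function is only defined here, not called, since all of these commands need root.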


End of part one.

Thursday, 07 May 2020

HamBSD Development Log 2020-05-07

I worked on HamBSD today, still looking at improvements to aprsisd(8). My focus today was on writing unit tests for aprsisd.

I’ve added a few unit tests to test the generation of the TNC2 format packets from AX.25 packets to upload to APRS-IS. There are still some todo entries there, as I’ve not made up packets for all the cases I want to check yet.

These are the first unit tests I’ve written for HamBSD and it’s definitely a different experience compared to writing Python unit tests, for example. The tests are C programs that include functions from aprsisd.

In order to do this I’ve had to split the function that converts AX.25 packets to TNC2 packets out into a separate file. This is the sort of thing that I’d be more comfortable doing if I had more unit test coverage. It seemed to go OK and hopefully the coverage will improve as I get more used to writing tests in this way.

I also corrected a bug from yesterday where AX.25 3rd-party packets would have their length over-reported, leaking stack memory to RF.

I’ve been reading up on the station capabilities packet and it seems a lot of fields have been added by various software over time. Successful APRS IGate Operation (WB2OSZ, 2017) has a list of some of the fields and where they came from under “IGate Status Beacon” which seems to be what this packet is used for, not necessarily advertising the capabilities of the whole station.

If this packet were to become quite long, there is the possibility of an amplification attack: someone with a low power transmitter could send an IGate query and get the IGate to respond with a much longer packet at higher power. It’s not even clear from the reply why the packet is being sent, as the requestor is not mentioned.

I think this will be the first place where I implement some rate limiting and see how that works. Collecting some simple statistics like the number of packets uploaded/downloaded would also be useful for monitoring.

Next steps:

  • Keep track of number of packets uploaded and downloaded
  • Add those statistics to station capabilities packet

Wednesday, 06 May 2020

cloudflared as a doh client with libredns

Cloudflare has released an Argo Tunnel client named: cloudflared. It’s also a DNS over HTTPS (DoH) client and in this blog post, I will describe how to use cloudflared with LibreDNS, a public encrypted DNS service that people can use to maintain the secrecy of their DNS traffic, but also circumvent censorship.

Notes based on ubuntu 20.04, as root


Download and install the latest stable version

curl -sLO

tar xf cloudflared-stable-linux-amd64.tgz

ls -l
total 61160
-rwxr-xr-x 1 root root 43782944 May  6 03:45 cloudflared
-rw-r--r-- 1 root root 18839814 May  6 19:42 cloudflared-stable-linux-amd64.tgz

mv cloudflared /usr/local/bin/

check version

# cloudflared --version
cloudflared version 2020.5.0 (built 2020-05-06-0335 UTC)

doh support

# cloudflared proxy-dns --help
   cloudflared proxy-dns - Run a DNS over HTTPS proxy server.

   cloudflared proxy-dns [command options]

LibreDNS Endpoints

LibreDNS has two endpoints:

  • dns-query
  • ads

The latter blocks trackers, ads, etc.


We can use cloudflared standalone for testing; here it is on a non-standard TCP port:

cloudflared proxy-dns --upstream --port 5454
INFO[0000] Adding DNS upstream                   url=""
INFO[0000] Starting DNS over HTTPS proxy server  addr="dns://localhost:5454"
INFO[0000] Starting metrics server               addr=""

Testing ads endpoint

$ dig @ -p 5454 +short
$ dig @ -p 5454 +short
$ dig @ -p 5454 +short


We have verified that cloudflared works with LibreDNS, so let us create a configuration file.

By default, cloudflared is trying to find one of the below files (replace root with your user):

  • /root/.cloudflared/config.yaml
  • /root/.cloudflared/config.yml
  • /root/.cloudflare-warp/config.yaml
  • /root/cloudflare-warp/config.yaml
  • /root/.cloudflare-warp/config.yml
  • /root/cloudflare-warp/config.yml
  • /usr/local/etc/cloudflared/config.yml

The most promising file is:

  • /usr/local/etc/cloudflared/config.yml

Create the configuration file

mkdir -pv /usr/local/etc/cloudflared
cat > /usr/local/etc/cloudflared/config.yml << EOF
proxy-dns: true
EOF

or for ads endpoint

mkdir -pv /usr/local/etc/cloudflared
cat > /usr/local/etc/cloudflared/config.yml << EOF
proxy-dns: true
EOF


# cloudflared

INFO[0000] Version 2020.5.0
INFO[0000] GOOS: linux, GOVersion: go1.12.7, GoArch: amd64
INFO[0000] Flags                                         proxy-dns=true proxy-dns-upstream=""
INFO[0000] Adding DNS upstream                           url=""
INFO[0000] Starting DNS over HTTPS proxy server          addr="dns://localhost:53"
INFO[0000] Starting metrics server                       addr=""
INFO[0000] cloudflared will not automatically update when run from the shell. To enable auto-updates, run cloudflared as a service:

$ dig +short


If you are a user of Argo Tunnel and have a Cloudflare account, you can log in and get your cert.pem key. Then (and only then) you can install cloudflared as a service with:

 cloudflared service install

You can use either /etc/cloudflared or /usr/local/etc/cloudflared/; the directory must contain two files:

  • cert.pem and
  • config.yml (the above file)

That’s it !

HamBSD Development Log 2020-05-06

I worked on HamBSD today, still looking at improvements to aprsisd(8). My focus today was on gating from the Internet to RF.

In the morning I cleaned up the mess I made yesterday with escaping the non-printable characters in packets before uploading them. I ran some test packets through and both Xastir and could decode them so that must be the correct way to do it.

I also added filtering of generic station queries (?APRS?) and IGate queries (?IGATE?). When an IGate query is seen, aprsisd will now respond with a station capabilities packet. The packet is not very exciting as it only contains the “IGATE” capability right now, but at least it does that.

Third-party packets are also identified, and have their RF headers stripped, and then are unconditionally thrown away. I need to add a check there to see if TCPIP is in the path and gate it if it’s not but I didn’t get to that today.

I added a new flag to aprsisd, -b, to allow the operator to indicate whether or not this station is a bi-directional IGate station. This currently only affects the q construct added to the path for uploaded packets (there wasn’t one before today) to indicate to APRS-IS whether or not this station can be used for two-way messaging with RF stations in range. Later I’ll make this also drop incoming messages if it’s set to receive-only instead of attempting to gate them anyway.

I noticed that when I connect to aprsc servers with TLS, they have actually been appending a qAS (message generated by server) instead of qAR (message is from RF) which I think is a bug, so I filed a GitHub issue.

The big thing today was generating third-party headers. Until now, aprsisd has tried to stuff the TNC2 header back into an AX.25 packet with a lot of truncated path entries. It’s now building the headers correctly(ish) and it’s possible to have bi-directional messaging through a HamBSD IGate. The path in the encapsulated header is currently entirely missing but it still works.

As this is a completely different way of handling these packets, it meant a rewrite of a good chunk of code. The skeleton is there now, just need to fill it in.

APRS message from MM0ROR: Hello from APRS-IS!

Next steps:

  • Generate proper paths for 3rd-party packets
  • Include a path for RF headers for station capabilities and 3rd-party packets
  • Add the new -b flag to the man page

Tuesday, 05 May 2020

HamBSD Development Log 2020-05-05

I worked on HamBSD today, still looking at improvements to aprsisd(8). My focus today was on converting AX.25 packets to the TNC2 format used by APRS-IS.

I fixed the path formatting to include the asterisks for used path entries. Before packets would always appear to APRS-IS to have been heard directly, which gave some impressive range statistics for packets that had in fact been through one or two digipeaters.

A little more filtering is now implemented for packets. The control field and PID are verified to ensure the packets are APRS packets.

The entire path for AX.25 packet read from axtap(4) interface to TNC2 formatted string going out the TCP/TLS connection has bounds checks, with almost all string functions replaced with the mem* equivalents.

It wasn’t clear if it’s necessary to escape the non-printable characters in packets before sending to APRS-IS, and it turns out that you’re actually not meant to do that. I’d implemented this with the following (based roughly on how the KISS output escaping works in kiss(4)):

icp = AX25_INFO_PTR(pkt_ax25, pi);
iep = pkt_ax25 + ax25_len;
while (icp < iep) {
        ibp = icp;
        while (icp < iep && isprint(*icp))
                icp++;
        if (icp > ibp) {
                /* copy the printable run */
                if (tp + (icp - ibp) > TNC2_MAXLINE)
                        /* Too big for a TNC2 format line */
                        return 0;
                memcpy(&pkt_tnc2[tp], ibp, icp - ibp);
                tp += icp - ibp;
        }
        if (icp < iep) {
                /* escape one non-printable byte as <0xHH> */
                if (tp + 6 > TNC2_MAXLINE)
                        /* Too big for a TNC2 format line */
                        return 0;
                pkt_tnc2[tp++] = '<';
                pkt_tnc2[tp++] = '0';
                pkt_tnc2[tp++] = 'x';
                pkt_tnc2[tp++] = hex_chars[(*icp >> 4) & 0xf];
                pkt_tnc2[tp++] = hex_chars[*icp & 0xf];
                pkt_tnc2[tp++] = '>';
                icp++;
        }
}
I can now probably replace this with just a single bounds check and memcpy, but then I need to worry about logging. There is a debug log for every packet, for which I’ll probably just call strvis(3).

This did throw up something interesting though, so maybe this wasn’t a complete waste of time. I noticed that a “<0x0d>” was getting appended to packets coming out of my Yaesu VX-8DE. It turns out that this wasn’t a bug in my code or in aprsc (the APRS-IS server software I was connected to); it’s actually a real byte that is tagged on the end of every APRS packet generated by the radio’s firmware. I never saw it before because aprsc would interpret this byte (an ASCII carriage return) as the end of a packet, so it would just be lost.

Next steps:

  • Removing the non-printable character escaping again
  • Filtering generic APRS queries (to avoid packet floods)
  • Filtering 3rd-party packets

Sunday, 03 May 2020

kwallet-pam >= 5.18.4 and ecryptfs homes

If you are using kwallet-pam to unlock your kwallet wallets *and* have an ecryptfs home, the automatic unlocking probably stopped working for you with Plasma 5.18.4, and you are getting a "Wallet failed to get opened by PAM, error code is -9" in the system log.


kwallet-pam uses something called a salt file.

Before Plasma 5.18.4 the salt file was read (or created if it did not exist) in the "authenticate" step of PAM. Now that happens in the "open_session" step of PAM.

This is very important because in the open_session step the encrypted home is already mounted, while in the authenticate step it is not.

What that means is that before Plasma 5.18.4 there was a /home/youruser/.local/share/kwalletd/kdewallet.salt *outside* your encrypted home (that was created/read on the "authenticate" step).

Now with Plasma >= 5.18.4 the /home/youruser/.local/share/kwalletd/kdewallet.salt is created/read correctly inside your encrypted home like the rest of your files.

This is all nice for new users, but if you have an existing user, the kwallet auto unlocking will stop working.


Because your wallet was salted with the file that was outside your encrypted home folder, and kwallet-pam can no longer read that file, it fails.

How to fix it?

* Reboot your system
* Login as root (or as a different user)
* See that there is a /home/youruser/.local/share/kwalletd/kdewallet.salt (FILE_A)
* Copy that file somewhere safe
* Now login as the youruser user
* If you have a /home/youruser/.local/share/kwalletd/kdewallet.salt copy it somewhere else safe too (you shouldn't need it but just in case)
* Copy the FILE_A you stashed somewhere safe to /home/youruser/.local/share/kwalletd/kdewallet.salt
* Reboot your system and now everything should work
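The copy steps above can be sketched as shell commands. "youruser" is the placeholder already used above; substitute your real username, and the stash location under /root is an arbitrary choice:

```shell
# Sketch of the fix; "youruser" is a placeholder for your real username.
SALT="/home/youruser/.local/share/kwalletd/kdewallet.salt"

# Step 1: as root (or another user), with youruser logged out, so the
# ecryptfs home is NOT mounted and the *old* (outside) salt is visible:
save_old_salt() {
    cp -v "$SALT" /root/kdewallet.salt.outside    # this is FILE_A
}

# Step 2: logged in as youruser (ecryptfs home now mounted):
restore_old_salt() {
    [ -f "$SALT" ] && cp -v "$SALT" "$SALT.bak"   # stash the new salt, just in case
    cp -v /root/kdewallet.salt.outside "$SALT"    # put FILE_A back in place
}
```

Then reboot and the auto unlocking should work again.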

Hetzner Dedicated Server Reverse DNS + Ansible

Continuing on the path towards all my stuff being managed by Ansible, I’ve figured out a method of managing the reverse DNS entries for subnets on the Hetzner Dedicated Server.

There’s a bunch of Ansible modules for handling Hetzner Cloud, but these servers are managed in Robot which the Cloud API doesn’t cover. Instead, you need to use the Robot Webservice.

Ansible does have a module for doing pretty arbitrary things with web APIs though, so using that I’ve got the following playbook figured out to keep the reverse DNS entries in sync:

- hosts:
  - vmhost_vm1
  gather_facts: False
  tasks:
  - name: import hetzner webservice credentials
    include_vars:
      file: "hetzner_ws.yml"
  - name: set rdns for hetzner hosts
    block:
    - name: get current rdns entry
      uri:
        user: "{{ hetzner_ws_user }}"
        password: "{{ hetzner_ws_password }}"
        url: "{{ vmip4 }}"
        status_code: [200, 404]
      register: rdns_result
    - name: update incorrect rdns entry
      uri:
        user: "{{ hetzner_ws_user }}"
        password: "{{ hetzner_ws_password }}"
        url: "{{ vmip4 }}"
        method: "POST"
        body_format: form-urlencoded
        body:
          ptr: "{{ inventory_hostname }}"
        status_code: [200, 201]
      when: '"rdns" not in rdns_result.json or inventory_hostname != rdns_result.json.rdns.ptr'
      changed_when: True
    delegate_to: localhost

The host groups this runs on are currently hardcoded as the VMs that live in the Hetzner Dedicated Server. A future iteration of this might use some inventory plugin to look up the subnets that are managed on Hetzner and create a group for all of those. Right now it won’t be setting the reverse DNS for the “router” interface on that subnet, and won’t automatically include new server’s subnets.

Gathering facts is disabled because all of these run locally. There is at least one VM running on this server that I can’t log in to because I host it for someone else, so running locally is a necessity.

The webservice credentials are stored in an Ansible Vault encrypted YAML file and loaded explicitly. An important note: the webservice username and password is not the same as your regular Robot username and password. You need to create a webservice user in Robot under “Settings; Webservice and app settings”.

If you attempt to authenticate with an incorrect username and password 3 times in a row, your IP address will be blocked for 10 minutes. There are 6 hosts in this group, so I did this a few times before I realised there was a different user account I needed to create. I’d suggest limiting to a single host while you’re testing to get the authentication figured out.

The actual calls to the webservice take place in a block just to avoid having to specify the delegate_to: localhost twice. The first step looks up the current PTR record, and accepts success if it gives either a 200 or 404 status code (it will be 404 if there is no existing pointer record). The result of this is used to conditionally create/update the PTR record in the next task.

If nothing needs to be done, nothing will be changed and the second task will be skipped. If a change is needed, the second step is successful if either the PTR record is updated (status 200) or created (status 201).
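The same two calls can be exercised by hand with curl before wiring them into Ansible. The base URL below (robot-ws.your-server.de) is Hetzner's documented Robot webservice endpoint, but treat the exact path, the variable names, and the example addresses as assumptions to verify against the current API docs:

```shell
# Sketch: query and set a PTR record via the Hetzner Robot webservice.
# HETZNER_WS_USER / HETZNER_WS_PASSWORD are the webservice credentials
# (not your regular Robot login); the URL path is an assumption from the docs.
rdns_get() {
    curl -s -u "$HETZNER_WS_USER:$HETZNER_WS_PASSWORD" \
        "https://robot-ws.your-server.de/rdns/$1"
}
rdns_set() {
    curl -s -u "$HETZNER_WS_USER:$HETZNER_WS_PASSWORD" \
        -d "ptr=$2" "https://robot-ws.your-server.de/rdns/$1"
}
# rdns_get 192.0.2.10
# rdns_set 192.0.2.10 vm1.example.org
```

Remember the lockout behaviour described below: three failed authentications in a row block your IP for 10 minutes.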

This was actually a lot easier than I thought it would be, and the uri module looks to be really flexible, so I bet there are other things that I could easily integrate into my Ansible playbooks.

Monday, 27 April 2020

Run your CI test with GitLab-Runner on your system

GitLab is a truly wonderful devops platform. It has a complete CI/CD toolchain, it’s opensource (GitLab Community Edition) and it can also be self-hosted. One of its greatest feature are the GitLab Runner that are used in the CI/CD pipelines.

The GitLab Runner is also an opensource project written in Go and handles the CI jobs of a pipeline. GitLab Runner implements Executors to run the continuous integration builds for different scenarios; the most used of them is the docker executor, although nowadays most sysadmins are migrating to kubernetes executors.

I have a few personal projects in GitLab under but I would like to run GitLab Runner local on my system for testing purposes. GitLab Runner has to register to a GitLab instance, but I do not want to install the entire GitLab project. I want to use the docker executor and run my CI tests local.

Here are my notes on how to run GitLab Runner with the docker executor. No root access needed as long as your user is in the docker group. To give a sense of what this blog post is, the below image will act as reference.


GitLab Runner

The docker executor comes in two flavors:

  • alpine
  • ubuntu

In this blog post, I will use the ubuntu flavor.

Get the latest ubuntu docker image

docker pull gitlab/gitlab-runner:ubuntu


$ docker run --rm -ti gitlab/gitlab-runner:ubuntu --version
Version:      12.10.1
Git revision: ce065b93
Git branch:   12-10-stable
GO version:   go1.13.8
Built:        2020-04-22T21:29:52+0000
OS/Arch:      linux/amd64

exec help

We are going to use the exec command to spin up the docker executor. With exec we do not need to register with a token.

$ docker run --rm -ti gitlab/gitlab-runner:ubuntu exec --help

Runtime platform arch=amd64 os=linux pid=6 revision=ce065b93 version=12.10.1
   gitlab-runner exec - execute a build locally

   gitlab-runner exec command [command options] [arguments...]

     shell       use shell executor
     ssh         use ssh executor
     virtualbox  use virtualbox executor
     custom      use custom executor
     docker      use docker executor
     parallels   use parallels executor

   --help, -h  show help

Git Repo - tmux

Now we need to download the git repo we would like to test. The repo must contain a .gitlab-ci.yml file, which describes the CI pipeline with all its stages and jobs. In this blog post, I will use a simple repo that builds the latest version of tmux for centos6 & centos7.

git clone
cd tmux

Docker In Docker

We will run GitLab Runner itself in a docker container. GitLab Runner then needs to communicate with our local docker service to spawn the CentOS docker image and run the CI job.

So we need to pass the docker socket from our local docker service to the GitLab Runner docker container.

To test dind (docker-in-docker) we can try one of the below commands:

docker run --rm -ti \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:latest sh


docker run --rm -ti \
  -v /var/run/docker.sock:/var/run/docker.sock \
  ubuntu:20.04 bash


There are some limitations of gitlab-runner exec:

We cannot run stages and we cannot download artifacts.

  • stages no
  • artifacts no


So we have to adapt. As we cannot run stages, we will tell gitlab-runner exec to run one specific job.
In the tmux repo, build-centos-6 is the build job for centos6 and build-centos-7 for centos7.
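A minimal invocation for a single job might look like the sketch below (the job name build-centos-6 comes from the tmux repo above; this leaves out the artifacts volume, which the full script further down adds):

```shell
# Minimal sketch: run one named CI job from the repo in the current
# directory, using the local docker socket (requires docker group access).
run_ci_job() {
    job="${1:-build-centos-6}"
    docker run --rm \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v "$PWD":"$PWD" \
      --workdir "$PWD" \
      gitlab/gitlab-runner:ubuntu \
      exec docker "$job"
}
# run_ci_job build-centos-6
```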


GitLab Runner will use /builds as the build directory. We need to mount this directory read-write to a local directory to get the artifacts.

mkdir -pv artifacts/

The docker executor has many docker options, there are options to setup a different cache directory. To see all the docker options type:

$ docker run --rm -ti gitlab/gitlab-runner:ubuntu exec docker --help | grep docker

Bash Script

We can put everything from above into a bash script. The bash script will mount our current git project directory into the gitlab-runner container; then, with the help of dind, it will spin up the centos docker container, pass in our code and gitlab-ci file, run the CI job, and save the artifacts under /builds.


#!/bin/bash

# This will be the directory to save our artifacts
mkdir -p artifacts

JOB="build-centos-6"
# JOB="build-centos-7"

docker run --rm                                  \
  -v /var/run/docker.sock:/var/run/docker.sock   \
  -v "$PWD":"$PWD"                               \
  --workdir "$PWD"                               \
  gitlab/gitlab-runner:ubuntu                    \
  exec docker                                    \
    --docker-volumes="$PWD/artifacts":/builds:rw \
    "$JOB"

That’s it.

You can try it with your own gitlab repos, but don't forget to edit the gitlab-ci file accordingly, if needed.

Full example output

Last, but not least, here is the entire walkthrough

ebal@myhomepc:tmux(master)$ git remote -v
oring (fetch)
oring (push)
$ ./

Runtime platform           arch=amd64 os=linux pid=6 revision=ce065b93 version=12.10.1
Running with gitlab-runner 12.10.1 (ce065b93)
Preparing the "docker" executor
Using Docker executor with image centos:6 ...
Pulling docker image centos:6 ...
Using docker image sha256:d0957ffdf8a2ea8c8925903862b65a1b6850dbb019f88d45e927d3d5a3fa0c31 for centos:6 ...
Preparing environment
Running on runner--project-0-concurrent-0 via 42b751e35d01...
Getting source from Git repository
Fetching changes...
Initialized empty Git repository in /builds/0/project-0/.git/
Created fresh repository.
From /home/ebal/gitlab-runner/tmux
 * [new branch]      master     -> origin/master
Checking out 6bb70469 as master...

Skipping Git submodules setup
Restoring cache
Downloading artifacts
Running before_script and script
$ export -p NAME=tmux
$ export -p VERSION=$(awk '/^Version/ {print $NF}' tmux.spec)
$ mkdir -p rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
$ yum -y update &> /dev/null
$ yum -y install rpm-build curl gcc make automake autoconf pkg-config &> /dev/null
$ yum -y install libevent2-devel ncurses-devel &> /dev/null
$ cp $NAME.spec                  rpmbuild/SPECS/$NAME.spec
$ curl -sLo rpmbuild/SOURCES/$NAME-$VERSION.tar.gz$NAME/releases/download/$VERSION/$NAME-$VERSION.tar.gz
$ curl -sLo rpmbuild/SOURCES/bash-it.completion.bash
$ rpmbuild --define "_topdir ${PWD}/rpmbuild/" --clean -ba rpmbuild/SPECS/$NAME.spec &> /dev/null
$ cp rpmbuild/RPMS/x86_64/$NAME*.x86_64.rpm $CI_PROJECT_DIR/
Running after_script
Saving cache
Uploading artifacts for successful job
Job succeeded


and here is the tmux-3.1-1.el6.x86_64.rpm

$ ls -l artifacts/0/project-0
total 368
-rw-rw-rw- 1 root root    374 Apr 27 09:13
drwxr-xr-x 1 root root     70 Apr 27 09:17 rpmbuild
-rw-r--r-- 1 root root 365836 Apr 27 09:17 tmux-3.1-1.el6.x86_64.rpm
-rw-rw-rw- 1 root root   1115 Apr 27 09:13 tmux.spec

docker processes

If we run docker ps -a from another terminal, we see something like this:

$ docker ps -a
CONTAINER ID  IMAGE                        COMMAND                  CREATED        STATUS                   PORTS  NAMES
b5333a7281ac  d0957ffdf8a2                 "sh -c 'if [ -x /usr‌"   3 minutes ago  Up 3 minutes                    runner--project-0-concurrent-0-e6ee009d5aa2c136-build-4
70491d10348f  b6b00e0f09b9                 "gitlab-runner-build"    3 minutes ago  Exited (0) 3 minutes ago        runner--project-0-concurrent-0-e6ee009d5aa2c136-predefined-3
7be453e5cd22  b6b00e0f09b9                 "gitlab-runner-build"    4 minutes ago  Exited (0) 4 minutes ago        runner--project-0-concurrent-0-e6ee009d5aa2c136-predefined-2
1046287fba5d  b6b00e0f09b9                 "gitlab-runner-build"    4 minutes ago  Exited (0) 4 minutes ago        runner--project-0-concurrent-0-e6ee009d5aa2c136-predefined-1
f1ebc99ce773  b6b00e0f09b9                 "gitlab-runner-build"    4 minutes ago  Exited (0) 4 minutes ago        runner--project-0-concurrent-0-e6ee009d5aa2c136-predefined-0
42b751e35d01  gitlab/gitlab-runner:ubuntu  "/usr/bin/dumb-init ‌"   4 minutes ago  Up 4 minutes                    vigorous_goldstine

Sunday, 26 April 2020

Upgrading from Ubuntu 18.04 LTS to Ubuntu 20.04 LTS

Server Edition

disclaimer: at this moment there is no "official" server version of 20.04 LTS available, so we will use the development 20.04 release.


If this is a production server, do not forget to inform customers/users/clients that this machine is under maintenance before you start.


When was the last time you took a backup?
Now is a good time.
Try to verify your backup; otherwise, do not proceed.

Update your current system

Before continuing with the dist upgrade to 20.04 LTS, we need to update & upgrade our current LTS version.

Login to your system:

~> ssh ubuntu1804

apt update
apt -y upgrade

A reboot is necessary.


root@ubuntu:~# apt update
Hit:1 bionic InRelease
Hit:2 bionic-updates InRelease
Hit:3 bionic-backports InRelease
Hit:4 bionic-security InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
51 packages can be upgraded. Run 'apt list --upgradable' to see them.


# apt -y upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  bsdutils distro-info-data dmidecode fdisk grub-common grub-pc grub-pc-bin grub2-common landscape-common libblkid1 libfdisk1 libmount1 libnss-systemd
  libpam-systemd libsmartcols1 libsystemd0 libudev1 libuuid1 linux-firmware mount open-vm-tools python3-update-manager sosreport systemd systemd-sysv udev
  unattended-upgrades update-manager-core util-linux uuid-runtime
51 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 85.6 MB of archives.
After this operation, 751 kB of additional disk space will be used.
Get:1 bionic-updates/main amd64 bsdutils amd64 1:2.31.1-0.4ubuntu3.6 [60.3 kB]


# reboot

Do release upgrade

root@ubuntu:~# which do-release-upgrade


do-release-upgrade --help
root@ubuntu:~# do-release-upgrade --help
Usage: do-release-upgrade [options]

  -h, --help            show this help message and exit
  -V, --version         Show version and exit
  -d, --devel-release   If using the latest supported release, upgrade to the
                        development release
  --data-dir=DATA_DIR   Directory that contains the data files
  -p, --proposed        Try upgrading to the latest release using the upgrader
                        from $distro-proposed
  -m MODE, --mode=MODE  Run in a special upgrade mode. Currently 'desktop' for
                        regular upgrades of a desktop system and 'server' for
                        server systems are supported.
  -f FRONTEND, --frontend=FRONTEND
                        Run the specified frontend
  -c, --check-dist-upgrade-only
                        Check only if a new distribution release is available
                        and report the result via the exit code
  --allow-third-party   Try the upgrade with third party mirrors and
                        repositories enabled instead of commenting them out.
  -q, --quiet


# do-release-upgrade -m server
root@ubuntu:~# do-release-upgrade -m server
Checking for a new Ubuntu release
There is no development version of an LTS available.
To upgrade to the latest non-LTS development release
set Prompt=normal in /etc/update-manager/release-upgrades.
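Following that hint means a one-line change in the update-manager configuration; a sketch (back up the file first):

```shell
# Keep a backup, then allow upgrades to normal (non-LTS) releases
cp /etc/update-manager/release-upgrades /etc/update-manager/release-upgrades.bak
sed -i 's/^Prompt=.*/Prompt=normal/' /etc/update-manager/release-upgrades
grep '^Prompt' /etc/update-manager/release-upgrades
```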


do-release-upgrade -m server -d
root@ubuntu:~# do-release-upgrade -m server -d
Checking for a new Ubuntu release
Get:1 Upgrade tool signature [1,554 B]

Get:2 Upgrade tool [1,344 kB]

Fetched 1,346 kB in 0s (0 B/s)

authenticate 'focal.tar.gz' against 'focal.tar.gz.gpg'
extracting 'focal.tar.gz'

At this point, the upgrader switches into a GNU screen session, so the upgrade can survive a dropped connection.

Reading cache

Checking package manager

Continue running under SSH?

This session appears to be running under ssh. It is not recommended
to perform a upgrade over ssh currently because in case of failure it
is harder to recover.

If you continue, an additional ssh daemon will be started at port
Do you want to continue?

Continue [yN]

Press: y

Starting additional sshd

To make recovery in case of failure easier, an additional sshd will
be started on port '1022'. If anything goes wrong with the running
ssh you can still connect to the additional one.
If you run a firewall, you may need to temporarily open this port. As
this is potentially dangerous it's not done automatically. You can
open the port with e.g.:
'iptables -I INPUT -p tcp --dport 1022 -j ACCEPT'

To continue please press [ENTER]

Press Enter
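If the main session does die later, the fallback daemon is reached with a plain port override (user and host name as in this walkthrough):

```shell
# First open port 1022 in the firewall if needed (see the iptables line above),
# then connect to the spare ssh daemon started by the upgrader
ssh -p 1022 root@ubuntu
```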

update repos

Reading package lists... Done
Building dependency tree
Reading state information... Done
Hit bionic InRelease
Get:1 bionic-updates InRelease [88.7 kB]

Get:2 bionic-backports InRelease [74.6 kB]

Get:3 bionic-security InRelease [88.7 kB]
Get:4 bionic-updates/main amd64 Packages [916 kB]
Fetched 1,168 kB in 0s (0 B/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done

Updating repository information
Get:1 focal InRelease [265 kB]

Get:32 focal-security/multiverse amd64 c-n-f Metadata [116 B]
Fetched 57.3 MB in 6s (1,247 kB/s)

Checking package manager
Reading package lists... Done
Building dependency tree
Reading state information... Done

Calculating the changes

Calculating the changes

Do you want to start the upgrade?

3 packages are going to be removed. 105 new packages are going to be
installed. 428 packages are going to be upgraded.

You have to download a total of 306 M. This download will take about
3 minutes with your connection.

Installing the upgrade can take several hours. Once the download has
finished, the process cannot be canceled.

 Continue [yN]  Details [d]

Press y

(or review by pressing d )

Fetching packages


Get:3 focal/main amd64 libcrypt1 amd64 1:4.4.10-10ubuntu4 [78.2 kB]
Get:4 focal/main amd64 libc6 amd64 2.31-0ubuntu9 [2,713 kB]


at some point a question will pop:

  • Restart services during package upgrade without asking ?

I answered Yes but you should answer this the way you prefer.


Patience is a virtue

Get a coffee or tea. Read a magazine. Patience is a virtue, until you see a jumping animal (the Focal Fossa).


Configuration file '/etc/systemd/resolved.conf'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** resolved.conf (Y/I/N/O/D/Z) [default=N] ?

I answered Y here; I will adjust the file again later.
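Answering Y is safe in the sense that dpkg keeps the previous file around with a .dpkg-old suffix, so local changes can be merged back afterwards:

```shell
# Compare the saved local version against the maintainer's new one
diff -u /etc/systemd/resolved.conf.dpkg-old /etc/systemd/resolved.conf
```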


The same question pops up for vim's configuration:

Configuration file '/etc/vim/vimrc'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** vimrc (Y/I/N/O/D/Z) [default=N] ? Y

The same goes for the ssh configuration.


Remove obsolete packages

and finally

Progress: [ 99%]
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
Processing triggers for initramfs-tools (0.136ubuntu6) ...
update-initramfs: Generating /boot/initrd.img-5.4.0-26-generic
Processing triggers for dbus (1.12.16-2ubuntu2) ...
Reading package lists... Done
Building dependency tree
Reading state information... Done

Searching for obsolete software
Reading state information... Done

Remove obsolete packages?

59 packages are going to be removed.

 Continue [yN]  Details [d]

Press y to continue


are you ready to restart your machine ?

System upgrade is complete.

Restart required

To finish the upgrade, a restart is required.
If you select 'y' the system will be restarted.

Continue [yN]

Press y to restart

LTS 20.04

Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-26-generic x86_64)

  System information as of Sun 26 Apr 2020 10:34:43 AM UTC

  System load:  0.52               Processes:               135
  Usage of /:   24.9% of 19.56GB   Users logged in:         0
  Memory usage: 3%                 IPv4 address for enp1s0:
  Swap usage:   0%

 * Ubuntu 20.04 LTS is out, raising the bar on performance, security,
   and optimisation for Intel, AMD, Nvidia, ARM64 and Z15 as well as
   AWS, Azure and Google Cloud.

0 updates can be installed immediately.
0 of these updates are security updates.

Last login: Sun Apr 26 07:50:39 2020 from
$ cat /etc/os-release

VERSION="20.04 LTS (Focal Fossa)"
PRETTY_NAME="Ubuntu 20.04 LTS"
Tag(s): ubuntu, 18.04, 20.04, LTS

Saturday, 25 April 2020

Ubuntu Server 20.04 LTS walkthrough

basic server installation



Tag(s): ubuntu, 20.04

Monday, 20 April 2020

👾 Game-streaming without the "cloud" 🌩️

With increasing bandwidths, live-streaming of video games is becoming more and more popular – and might further accelerate the demise of the desktop computer. Most options are “cloud-gaming” services based on subscriptions where you don’t own the games and are likely to be tracked and monetised for your data. In this blog post I present the solution I built at home to replace my “living room computer”.


I describe how to stream a native Windows game (Divinity Original Sin 2 – installed DRM-free from GOG) via Steam Remote Play from my Desktop (running Devuan GNU/Linux) in 4K resolution and maximum quality directly to my TV (running Android). On the TV I then play the game with my flatmate using two controllers: one Xbox Controller connected via USB, one Steam-Controller connected via Bluetooth. The process of getting there is not trivial or flawless, but gaming itself works perfectly without artefacts or lag.

(The computer in the above photo is not used any longer.)


The big “cloud” streaming services for audio and video are incredibly convenient. In my opinion they address the following problems very well:

  1. They make content available on hardware that wouldn’t have the capability to store the content.
  2. This includes all devices of the user, not just a single (desktop/laptop) computer.
  3. They provide access to a lot of content at a fixed monthly price (especially useful for people who haven’t had the possibility to build up their “library”).

On the other hand, they usually force you to use software that is non-free and/or not trustworthy (if they support your OS at all). You risk losing access to all content when the subscription is terminated. And – often overlooked – they monitor you very closely: what you watch/listen to, when and where you do it, when you press pause and for how long etc. I suspect that at some point this data will be more valuable than the income generated from subscription fees, but this is not important here.

It was only a matter of time before video-gaming would also become part of the streaming business and there are now multiple contenders: Google Stadia and Playstation Now are typical “streaming services” like Netflix for video; the games are included in the monthly fee. They have all the benefits and problems discussed above. Other services like GeForce Now and Blade Shadow provide computation and streaming infrastructure but leave it to you to provide the games (manually or via gaming platforms). This has slightly different implications, but I don’t want to discuss these further, because I haven’t used any of them and likely won’t in the future.

In any case, I am of course not a big fan of being tracked, and anything I can set up with my own infrastructure at home, I try to do! [I actually don’t really spend that much time gaming nowadays, but setting this up was still fun]. In the past I had an extra (older) computer beside the TV that I used for casual gaming. This has become too old (and noisy) to play current games, so instead I want to stream the game from my own desktop computer in a different room.

Disclaimer: It should be noted that this “only” provides feature 2. above, i.e. convergence of different devices. Also this setup includes using various non-free software components, i.e. it also does not solve all of the problems mentioned above.

Setting up the host

The “host” is the computer that renders the game, in my case the desktop computer. Before attempting to stream anything, we need to make sure that everything works on the host, i.e. we need to be able to play the game, use the controllers etc.


AMD Ryzen 6c/12t @ 3.6Ghz 32GB GeForce RTX 2060S

Since the host needs to render the game, it needs decent hardware. Note that it needs to be able to encode the video-stream at the same time as rendering the game which means the requirements are even higher than usual. But of course all of this also depends on the exact games that you are playing and which resolution you want to stream. My specs are shown above.

The game

I chose Divinity Original Sin 2 for this article because that’s what I am playing right now and because I wanted to demonstrate that this even works with games that are not native to Linux (although I usually don’t buy games that don’t have native versions). If you buy it, I recommend getting it on GOG, because games are DRM-free there (they work without internet connection and stay working even if GOG goes bankrupt). The important thing here is that the game does not need to be a Steam game even though we will use Steam Remote Play for streaming. Buying it on Steam will make the process a little easier though.

Install the game using wine (wine in Devuan/Debian repos). I assume that you have installed it to /games/DOS2 and that you have set up a “windows disk G:” for /games. If not, adjust paths accordingly.
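Under the hood, Wine models DOS drives as symlinks in the prefix's dosdevices directory, so the G: mapping can also be created by hand; a sketch using this post's example paths:

```shell
# Map /games as drive G: inside the default wine prefix
export WINEPREFIX="$HOME/.wine"
mkdir -p "$WINEPREFIX/dosdevices"
ln -sfn /games "$WINEPREFIX/dosdevices/g:"

# The drive is just a symlink; this should point at /games
ls -l "$WINEPREFIX/dosdevices/g:"
```

The “Drives” tab of winecfg does the same thing graphically.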


If you haven’t done so already, install Steam. It is available in Debian/Devuan non-free repositories as steam but will install and self-update itself in a hidden subfolder of your home-directory upon first start. You need a steam-account (free) and you need to be logged in for everything to work. I really dislike this and it means that Steam quite likely does gather data about you. I suspect that using non-Steam games makes it more difficult to track you, but I have not done any research on this. See the end of this post for possible alternatives.

The first important thing to know about Steam on Linux is that it ships many system libraries, but it doesn’t ship everything that it needs and it gives you no diagnostic about missing stuff nor are UI elements correctly disabled when the respective feature is not available. This includes support for hardware video encoding and for playing windows games.

Hardware-video encoding is a feature you really want because it reduces latency and load on your CPU. For reasons I don’t understand, Steam uses neither libva (the standard interface for video acceleration on free operating systems) nor vdpau (NVIDIA-specific but also free/open). Instead it uses the proprietary NVENC interface. On Debian / Devuan this has been patched out of all the libraries and applications, so you need to make sure that you get your libraries and apps like ffmpeg from the Debian multimedia project, which has working versions. I am not entirely sure which set of libraries/apps is required; for me it was sufficient to install libnvidia-encode1 and do an apt upgrade after adding the Debian multimedia repo. Note that only installing libnvidia-encode1 from the official repo was not sufficient. See the troubleshooting section on how to diagnose problems with hardware video encoding.

To play Windows games with Steam, you can use Steam’s builtin windows emulator called Proton (here’s a current article on it). It’s a fork of Wine with additional patches (most of which are upstreamed to official wine later). Unfortunately it is not installed after a fresh install of Steam on Linux and I found no explicit way of installing it (the interface still suggests it’s there, though!). To get it, select “Enable Steam Play for all titles” under “Advanced” in the “Steam Play Settings” in the settings. This activates usage of Proton for Windows games not officially supported. Then install the free Windows game 1982 from inside Steam. This will automatically install Proton, which is then listed as an installed application and updated automatically in the future. You can try the game to make sure Proton works as expected. Alternatively, buy the actual game on Steam and skip the next paragraph.

Now go to “Games” → “Add a Non-Steam Game to my Library…”, then go to “BROWSE”, show all extensions and find the executable file of the game. In our case this is /games/DOS2/DefEd/bin/EoCApp.exe or G:\DOS2\DefEd\bin\EoCApp.exe (yes, it’s not in the top-level directory of the game). If your path contains spaces, it will break (without diagnostics). To fix this, simply edit the shortcut created for the game again and make sure the path is right and “set launch options” is empty (part of your path may have ended up there). In this dialog you can also explicitly state that you wish to use Proton to run the game (confusingly it will show multiple versions even if none of those are installed). You can also give the game a nicer name or icon if desired.

Test the game

You are now ready to test the game. Simply click on the respective button. Now is also a good time for testing the controller(s), because if they don’t work now, they likely won’t later on. There are many tutorials on using the (Steam) controller on Linux and there are also some notes in the troubleshooting section.

This is also a good point in time to update the firmware of the steam controller to the newest version; we will need that later on.

Setting up the client

The client is my TV, because that runs Android TV and there is a SteamLink application for Android. If your TV has a different OS, you could probably use a small set-top-box built around a cheap embedded board and use Android (or Linux) as the client OS from that. I suppose all of this would be possible on an Android Tablet, as well. I recommend connecting the TV via wired network to the host, but WiFi works at lower resolutions, too.

Controllers and Android

The Xbox controller is just plugged into the USB-port of the TV and required no further configuration. The Steam controller was a little more tricky, and I had no luck getting it to work via USB. However, it comes with Bluetooth support (only after upgrading the firmware!).

Establishing the initial Bluetooth connection between the TV and the controller was surprisingly difficult. Start the Steam-controller with Steam+Y pressed or alternatively with Steam+B pressed and only do so after initiating device search on the TV. If it does not work immediately, keep trying! After a few attempts, the TV should state that it found the device; it calls it SteamController but recognises it as a Keyboard+Mouse. That’s ok.

After the connection has been established once, the controller will auto-connect when turned off and on again (do not press Y or B during start!). The controller’s mouse feature is surprisingly useful in the Android TV when you use Android apps that are not optimised for the TV.

You need to install the SteamLink application from GooglePlay or through another channel like the Aurora Store (my Android TV is not associated with a Google account so I cannot use GooglePlay). After opening the app, Android will ask whether it should allow the app to access the Xbox controller, which you have to agree to every time (the “do this in the future” checkbox has no effect). Confusingly, the TV then notifies you that the controller has been disconnected. This just means that the app controls it now, I think. The app takes control of the Steam controller without asking and switches it from Keyboard+Mouse mode into Controller mode, so you can use the joystick to navigate the buttons in the app.

It should automatically detect Steam running on the host and offer to make a connection. Press A on the controller or select “Start Playing”. The connection has to be verified once on the host for obvious reasons, but if everything works well you should now see the “Steam Big Picture Mode” interface on your TV. Your desktop has switched to this at the same time (in general, the TV will now mirror your desktop). Maybe first try a native Steam game like the aforementioned “1982” to see if everything works.

Next try to start the game we set up above via its regular entry in your Steam Library. Note that upon starting, the screen will initially flash black and return to Steam; the game is starting in the background, do not start it again, it just needs a second!

Everything should work now! If you hear audio but your screen stays black and shows a “red antenna symbol”, see the troubleshooting section below.

You can press the Steam-Button to return to Steam (although this sometimes breaks for non-native Steam games). You can also long-press the “back/select”-button on the controller to get a SteamLink specific menu provided by the Client. It can be used to launch an on-screen keyboard and force-quit the connection to the host and return to the SteamLink interface.

Video resolutions and codecs

If everything worked so far you are likely playing in 1080p. To change this to 4k resolution, go to the SteamLink app’s settings (wheel symbol) and then to “streaming settings” and then to “advanced”. You can increase the resolution limit there and also enable “HEVC Video” which improves video quality / reduces bitrate. If your desktop does not support streaming HEVC, SteamLink will establish no connection at all. See the troubleshooting section below if you get a black screen and the “red antenna symbol”.


The only viable alternative to Steam’s Remote Play for streaming your own games from your own hardware is NVIDIA GameStream – not to be confused with NVIDIA GeForce Now, the “cloud gaming” service. The advantages of GameStream over Steam Remote Play seem to be the following:

  1. The protocol is documented and there are good Free and Open Source client implementations.
  2. It claims better performance by tying closer into the host’s drivers.
  3. No online-connection or sign-up required like with Steam.

Also NVIDIA is a hardware company and even if the software is proprietary, it might be less likely to spy on you. The disadvantages are:

  1. The host software is only available for Windows.
  2. Only works with NVIDIA GPUs on the host machine, not AMD.

I could not try this, because I don’t have a Windows host and I wanted to stream from GNU/Linux.

Post scriptum

As you have seen, the process really still has some rough edges, but I am honestly quite surprised that I did manage to set this up and that it works with really good quality in the end. Although I don’t like Steam because of its DRM, I have to admit that it’s impressive how much work they put into improving the driver situation on GNU/Linux and supporting setups such as the one discussed here. Consider that I didn’t even buy a game from Steam!

I would still love to see a host implementation of NVIDIA GameStream that runs on GNU/Linux. Even better would of course be a fully Free and Open Source solution.

On a sidenote: the setup I showed here can be used to stream any kind of application, even just the regular desktop if you want to (check out the advanced client options!).

Hopefully Steam Remote Play (and NVIDIA GameStream) can delay the full transition to “cloud gaming” a little.


No hardware video encoding

To see if Steam is actually using your hardware encoding, look in the following log-file:

You should have something like:

"CaptureDescriptionID"  "Desktop OpenGL NV12 + NVENC HEVC"

The NVENC part is important. If you instead have the following:

"CaptureDescriptionID"  "Desktop OpenGL NV12 + libx264 main (4 threads)"

It means you have software encoding. Play around with ffmpeg to verify that NVENC works on your system. The following two commands should work:

% ffmpeg -h encoder=h264_nvenc
% ffmpeg -h encoder=hevc_nvenc

The second one is for the HEVC codec. If you are told that the selected encoder is not available, something is broken. Check to see if you correctly upgraded to the libraries from the Debian multimedia project.

Controller not working at all on host

Likely your user account does not have permissions to access the device. It worked for me after making sure that my user was in the following groups:

audio dip video plugdev games scanner netdev input
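Missing groups can be added with usermod; a sketch (group names from the list above; log out and back in for the change to take effect):

```shell
# Add the current user to the groups relevant for controller access;
# 'input' is usually the decisive one for raw device access
sudo usermod -aG input,plugdev "$USER"

# Verify membership (shows the new groups only after re-login)
id -nG "$USER"
```
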
Controller working in Steam but not in game

When you start Steam but leave Big Picture mode (Alt+Tab or minimise), Steam usually switches the controller config back to “Desktop mode”, which means Keyboard+Mouse emulation. If a game is started then, it will not detect any controller. The same thing seems to happen for certain non-Steam games started through Steam. A workaround is going into the Steam controller settings and selecting the “Gamepad” configuration also as default for “Desktop mode”.

Black screen and "red antenna symbol"

This happens when there is a resolution mismatch somewhere. Note that there are multiple places and layers where the resolution can be set, and Steam doesn’t always manage to sync them: the host (operating system, Steam, in-game) and the client (operating system, SteamLink). Additionally, Steam apparently can stream in a lower resolution than is set on either host or target.

I initially had this problem when streaming from 4k-host onto a SteamLink app that was configured to only accept 1080p. Changing the host resolution manually to 1080p before starting Steam solved this problem.

Now that everything (Host, In-Game, SteamLink on client) are configured to 4k, I still get this problem, because apparently Steam still attempts to reduce transmission resolution when starting the game. For obscure reasons the following workaround is possible: After starting the game, long-press “back/select” on the Controller to get to the SteamLink menu and select “Stop Game” there. This fixes the Black Screen and the regular game screen appears at 4K resolution. I suspect that “Stop game” terminates Steam’s wrapper around the game’s process that screws up the resolution. The devil knows why this does not terminate the game.

SteamLink produces black screen and nothing else

When switching in and out of SteamLink via Android (e.g. with the TV remote), SteamLink sometimes doesn’t recover. Probably something goes wrong with putting the app into standby, but it’s also not something you would typically do. Just quit the app properly instead.

In any case, only a full reboot of the TV seems to fix this and make SteamLink usable again.

Sunday, 19 April 2020

Terminator 1.92 released

  • Mäh?
  • 11:58, Sunday, 19 April 2020

Do you still remember the project Terminator? People around the world are still using this tool as their terminal emulator of choice for Linux- and Unix-based systems (including Mac OS).

Unfortunately, development has stagnated a bit since 2017, and over the last three years a lot of work has piled up. Terminator is written in Python and had to be migrated to Python 3, for example, as distributions started to think about dropping support for Python 2. Packagers of several distributions have been maintaining their own patches to support Terminator with Python 3, until today.

Two weeks ago, things changed. A project at GitHub has been created, the source code has been migrated from Bazaar to Git, and even some package maintainers from Arch Linux and Fedora contributed and worked hard towards what happened this weekend: there is a Terminator 1.92 release available, and you can find Terminator at its new home here:

The Terminator 1.92 update includes a lot of interesting changes you surely have been waiting for, including support for Python 3 and a bunch of bug fixes; for example, you can now open links with a simple Ctrl+Click. You can find a detailed change log right here:

Of course, Fedora is one of the first distributions to update Terminator to 1.92 and if you're using Fedora or a RHEL8 based system, you can shortly update to it via:

dnf -y --enablerepo=updates-testing upgrade terminator

As usual you can leave your feedback and some Karma for the update:

The Terminator development team at GitHub will be happy if you want to support the project and join in. You don't need to be a programmer to help out; in an open source project there is always something to do.

Friday, 17 April 2020

Should KDE fork CHMLib?

CHMLib is a library to handle CHM files.

It is used by Okular and other applications to show those files.

It hasn't had a release in 11 years.

It is packaged by all major distributions.

A few weeks ago I got annoyed because we need to carry a patch in the Okular flathub build, since the code is not great and defines its own int types.

I tried contacting the upstream author, but unsurprisingly, after 11 years he doesn't seem to care much, and I got no answer.

Then I saw that various distributions carry different random sets of patches. Not great.

So I said, OK, maybe we should just fork it, and emailed the 14 people listed at Repology as package maintainers for CHMLib, asking how they would react to KDE forking CHMLib in an effort to do some basic maintenance, i.e. get it to compile without needing patches, make sure all the patches the different distributions carry are in the same place, etc.

  • 1 packager answered saying "great"
  • 1 packager answered "nah, we want to follow upstream" (which part of "upstream is dead" did they not understand?)
  • 1 person answered saying "I haven't been packaging for $DISTRO for ages" (makes sense given how old the package is)
  • 1 person answered saying "not a maintainer anymore, but I think you should not break the API/ABI of CHMLib" (which makes sense; I never said that was planned)

And that's it: only 4 answers out of 14, and only one of them encouraging.

So I'm asking *YOU*: should we push for a fork, or should I stop this craziness and do something more productive?

Wednesday, 08 April 2020

Geany and Geany-Plugins for EPEL8

  • Mäh?
  • 20:49, Wednesday, 08 April 2020

If you're a lucky user of a Red Hat Enterprise Linux based system, you're probably already aware of the Extra Packages for Enterprise Linux (EPEL) from the Fedora Project. In case you've missed the flyweight IDE Geany and its plugins there, this is probably good news for you: Geany is coming to EPEL8 soon!

The update will hit the testing repositories in the next days and needs your karma here:

Tuesday, 31 March 2020

System Hackers meeting - Lyon edition

For the 4th time, and less than 5 months after the last meeting, the FSFE System Hackers met in person to coordinate their activities, work on complex issues, and exchange know-how. This time, we chose yet another town familiar to one of our team members as venue – Lyon in France. What follows is a report of this gathering that happened shortly before #stayhome became the order of the day.

For those who do not know this less visible but important team: The System Hackers are responsible for the maintenance and development of a large number of services. From the website’s deployment to the mail servers and blogs, from Git to internal services like DNS and monitoring, all these services, virtual machines and physical servers are handled by this friendly group that is always looking forward to welcoming new members.

Interestingly, we gathered in the same constellation as in the hackathon before, so Albert, Florian, Francesco, Thomas, Vincent and I tackled large and small challenges in the FSFE’s systems. But we also used the time to exchange knowledge about complex tasks and some interconnected systems. The official part was conducted in the fascinating Astech Fablab, but word has it that Ninkasi, an excellent pub in Lyon, was the actual epicentre of this year’s meeting.

Sharing is caring

On Saturday morning, after reviewing open tasks and setting our priorities, we started to share more knowledge about our services to reduce bottlenecks. For this, I drew a few diagrams to explain how we deploy our Docker containers, how our community database interacts with the mail and lists server, and how DNS works at the FSFE.

To also help the non-present system hackers and “future generations”, I’ve added this information to a public wiki page. This could also be the starting point to transfer more internal knowledge to public pages to make maintenance and onboarding easier.

Todo? Done!

Afterwards, we focused on closing tasks that have been open for a longer time:

  • The DNS has been a big issue for a long time. Over the past months we’ve migrated the source for our nameserver entries from SVN to Git, rewrote our deployment scripts, and eventually upgraded the two very sensitive systems to Debian 10. During the meeting, we came closer to perfection: all Bind configuration cleaned from old entries, uniformly formatted, and now featuring SPF, DMARC and CAA records.
  • For a better security monitoring of the 100+ mailing lists the FSFE hosts, we’ve finalised the weekly automatic checks for sane and safe settings, and a tool that helps to easily update the internal documentation.
  • Speaking of monitoring: we did lack proper monitoring of our 20+ hosts for availability, disk usage, TLS certificates, service status and more. While we tried for a longer time to get Prometheus and Grafana doing what we need, we performed a 180° turn: now, there is an Icinga2 installation running that already monitors a few hosts and their services – deployed with Ansible. In the following weeks we will add more hosts and services to the watched targets.
  • We plan to migrate our user-unfriendly way to share files between groups to Nextcloud, including using some more of the software’s capabilities. During the weekend, we’ve tested the instance thoroughly, and created some more LDAP groups that are automatically transposed to groups in Nextcloud. In the same run, Albert shared some more knowledge about LDAP with Vincent and me, so we get rid of more bottlenecks.
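For reference, the three record types mentioned in the DNS point above look roughly like this in a zone file (illustrative values only, not the FSFE's actual records):

```
; SPF policy lives in a TXT record at the zone apex
example.org.         IN TXT "v=spf1 mx -all"
; DMARC policy sits under the _dmarc label
_dmarc.example.org.  IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"
; CAA restricts which CAs may issue certificates for the domain
example.org.         IN CAA 0 issue "letsencrypt.org"
```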

Then, it was time to deal with other urgent issues:

  • Some of us worked on making our systems more resilient against DDoS attacks. Over the Christmas season, we became a target of an attack. The idea is to come up with solutions that are easy to deploy on all our web services while keeping complexity low. We’ve tested some approaches and will further work on coming up with solutions.
  • Regarding webservers, we’ve updated the TLS configurations on various services to the recommended settings, and also improved some other settings while touching the configuration files.
  • We intend to make it easier for people to encrypt their emails with GnuPG. That is why we experimented with WKD/WKS and will work on setting up this service. As it requires some interconnection with other services, this will unfortunately take us some more time.
  • On the maintenance side of things, we have upgraded all servers except one to the latest Debian version, and also updated many of our Docker images and containers to make use of the latest security and stability improvements.
  • The FSFE hosts a few third-party services, which unfortunately have been running on unmaintained systems. That is why we set up a brand new host for our sister organisation in Latin America so they can eventually migrate, and moved the website to our automatic CI/CD setup via Drone/Docker.

The next steps and developments

As you can see, we completed and started to tackle a lot of issues again, so it won’t become boring in our team any time soon. However, although we should know better, we intend to “change a running system”!

While the in-person meetings have been highly important and also fun, we are in a state where knowledge and mutual trust are further distributed between the members, the tasks separated more clearly and the systems mostly well documented. So part of our feedback session was the question whether these meetings in the 6-12 month rhythm are still necessary.

Yes, they are, but not more often than once a year. Instead, we would like to try virtual meetings and sprints. Before a sprint session, we would discuss all tasks (basically going through our internal Kanban board), plan the challenges, ask for input if necessary, and resolve blockers as early as possible. Then we would be prepared for a sprint day or afternoon during which everyone can work on their tasks while being able to directly contact other members. All of that should happen over a video conference to create a more personal atmosphere.

For the in-person meetings, it was requested that we also plan tasks and priorities together beforehand, and focus on tasks that require more people from the group. We also want to have more training sessions and system introductions like the ones we’ve just had, to reduce dependencies on single persons.

All in all, this gathering has been another successful meeting and lays a cornerstone for exciting new improvements to both the systems and the team. Thanks to everyone who participated, and a big round of applause to Vincent, who organised the venue and the social activities!

Sunday, 29 March 2020

RSIBreak 0.12.12 released!

All of you who use a computer for long stretches of time should use it!

Changes from 0.12.11:
* Don't reset pause counter on very short inputs that can just be accidental.
* Improve high dpi support
* Translation improvements
* Compile with Qt 5.15 beta
* Minor code cleanup

Friday, 27 March 2020

Cruising sailboat electronics setup with Signal K

I haven’t mentioned this on the blog earlier, but at the end of 2018 we bought a small cruising sailboat. After some looking around, we went with a Van de Stadt designed Oceaan 25, a Dutch pocket cruiser from the early 1980s. S/Y Curiosity is an affordable and comfortable boat for cruising with 2-4 people, but she also needed major maintenance work.

Curiosity sailing on Havel with Royal Louise

The refit has so far included osmosis repair, some fixes to the standing rigging, engine maintenance, and many structural improvements. But this post will focus on the electronics and navigation aspects of the project.

12V power

When we got it, the boat’s electrical setup was quite barebones. There was a small lead-acid battery, charged only when running the outboard. Light control was pretty much all-or-nothing: either we were running both the interior and the navigation lights, or none at all. Everything was wired with 80s-spec components, using energy-inefficient lightbulbs.

Looking at the state of the setup, it was also unclear when the electrics had last been used for anything other than starting the engine.

Before going further with the electronics setup, all of this would have to be rebuilt. We made a plan, and scheduled two weekends in summer 2019 for rewiring and upgrading the electricity setup of the boat.

The first step was to test all existing wiring with a multimeter, and to label and document all of it. Surprisingly, there were only a couple of bad connections from the main distribution panel to consumers, so for the most part we decided to reuse that wiring, just with a modern terminal block setup.

All wires labeled and being reconnected

For the most part we used a Dymo label printer, with the labels covered by transparent heat shrink.

We replaced the old main control panel with a modern one with the capability to power different parts of the boat separately, and added some 12V and USB sockets next to it.

New battery charger and voltmeter

All internal lighting was replaced with energy-efficient LEDs, and we added the option of using red lights all through the cabin for preserving night vision. A car charger was added to the system for easier battery charging while in harbour.

Next steps for power

With this, we had a workable lighting and power setup for overnight sailing. But the next obvious step will be to increase the range of our boat.

For that, we’re adding a solar panel. We already have most parts for the setup, but are still waiting for the customized NOA mounting hardware to arrive. And of course the current COVID-19 curfews need to lift before we can install it.

Until we have actual data from our Victron MPPT charge controller, I’ve run some simulations using NASA’s insolation data for Berlin on how much the panel ought to increase our cruising range.

Range estimates for Curiosity solar setup
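Until real charge-controller data arrives, such an estimate boils down to simple energy bookkeeping: daily harvest versus daily load against the battery capacity. The sketch below shows the idea; every number in it (battery size, panel wattage, load, derating factor, sun hours) is an illustrative assumption, not Curiosity’s actual data.

```python
# Rough solar range estimate: how long can the house bank sustain the
# electronics load, given a daily solar harvest? Every number here is an
# illustrative assumption, not Curiosity's actual data.

def days_of_autonomy(battery_wh, daily_load_wh, panel_w, sun_hours, derating=0.7):
    """Days until the battery is flat; derating covers angle, clouds, MPPT losses."""
    harvest_wh = panel_w * sun_hours * derating
    net_drain = daily_load_wh - harvest_wh
    if net_drain <= 0:
        return float("inf")  # the panel covers the load indefinitely
    return battery_wh / net_drain

battery_wh = 60 * 12  # hypothetical 60 Ah house bank at 12 V
print(days_of_autonomy(battery_wh, daily_load_wh=300, panel_w=50, sun_hours=5))    # summer-ish day
print(days_of_autonomy(battery_wh, daily_load_wh=300, panel_w=50, sun_hours=1.5))  # winter-ish day
```

Swapping in the monthly insolation figures from the NASA data then yields a per-month range curve like the one in the plot.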

Navigation

The basis for boat navigation is still the combination of a clock, a compass, and a paper chart (as well as a sextant on the open ocean). However, most modern cruising boats utilize some electrical tools to aid the process of running the boat. These typically come in the form of a chartplotter and a set of sensors to get things like GPS position, speed, and water depth.

Commercial marine navigation equipment is a bit like computer networking in the 90s - everything is expensive, and you pretty much have to buy the whole kit from a single vendor to make it work. Standards like NMEA 0183 exist, but “embrace and extend” is typical vendor behaviour.

Signal K

Being open source hackerspace people, that was obviously not the way we wanted to do things. Instead of getting locked into an expensive proprietary single-vendor marine instrumentation setup, we decided to roll our own using off-the-shelf IoT components. To serve as the heart of the system, we picked Signal K.

Signal K is first of all a specification on how marine instruments can exchange data. It also has an open source implementation in Node.js. This allows piping in data from all of the relevant marine data buses, as well as setting up custom data providers. Signal K then harmonizes the data, and makes it available both via modern web APIs, and in traditional NMEA formats. This enables instruments like chartplotters also to utilize the Signal K enriched data.

We’re running Signal K on a Raspberry Pi 3B+ powered by the boat battery. With a GPS dongle, this was already enough to give some basic navigation capabilities like charts and anchor watch. We also added a WiFi hotspot with a LTE uplink to the boat.

Tracking some basic sailing exercises via Signal K

To make the system robust, installation is automated via Ansible, and easy to reproduce. Our boat GitHub repo also has the needed functionality to run a clone of our boat’s setup on our laptops via Docker, which is great when developing new features.

Signal K has a very active developer community, which has been great for figuring out how to extend the capabilities of our system.


Chartplotters

We’re using regular tablets for navigation. The main chartplotter is a cheap old waterproof Samsung Galaxy Tab Active 8.0 tablet that can show both the Freeboard web-based chartplotter with OpenSeaMap charts, and run the Navionics Boating app to display commercial charts. Navionics is also able to receive some Signal K data over the boat WiFi to show things like AIS targets, and to utilize the boat GPS.

Samsung T360 with Curiosity logo

As a backup we have our personal smartphones and tablets.

Anchor watch with Freeboard and a tablet

Inside the cabin we also have an e-ink screen showing the primary statistics relevant to the current boat state.

e-ink dashboard showing boat statistics

Environmental sensing

Monitoring air pressure changes is important for dealing with the weather. For this, we added a cheap barometer-temperature-humidity sensor module wired to the Raspberry Pi, driven with the Signal K BME280 plugin. With this we were able to get all of this information from our cabin into Signal K.
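To give an idea of what the harmonized data looks like: Signal K transports readings as “delta” messages with standardized paths and SI units. The sketch below packages one barometer reading in that shape; the paths follow my reading of the Signal K specification, and the real BME280 plugin’s internals may well differ.

```python
# Package one BME280 reading as a Signal K delta message. Paths and SI
# units (Pascal, Kelvin, humidity as a 0-1 ratio) follow the Signal K
# specification; this is only the wire format, not the plugin's code.
import json
from datetime import datetime, timezone

def bme280_delta(pressure_pa, temp_c, humidity_ratio):
    return {
        "updates": [{
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "values": [
                {"path": "environment.inside.pressure", "value": pressure_pa},
                {"path": "environment.inside.temperature", "value": temp_c + 273.15},
                {"path": "environment.inside.relativeHumidity", "value": humidity_ratio},
            ],
        }]
    }

print(json.dumps(bme280_delta(101325, 21.5, 0.48), indent=2))
```

Any consumer on the boat network, from the dashboards to the e-ink display, can then subscribe to these paths without caring which sensor produced them.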

However, there was more environmental information we wanted to get. For instance, the outdoor temperature, the humidity in our foul weather gear locker, and the temperature of our icebox. For these we found the Ruuvi tags, produced by a Finnish startup. These are small, weatherproofed Bluetooth environmental sensors that can run for years on a coin cell battery.

Ruuvi tags for Curiosity with handy pouches

With Ruuvi tags and the Signal K Ruuvi tag plugin we were able to bring a rich set of environmental data from all around the boat into our dashboards.

Anchor watch

Like every cruising boat, we spend quite a lot of nights at anchor. One important safety measure with a shorthanded crew is to run an automated anchor watch. This monitors the boat’s distance to the anchor, and raises an alarm if we start dragging.

For this one, we’re using the Signal K anchor alarm plugin. We added a Bluetooth speaker to get these alarms in an audible way.

To make starting and stopping the anchor watch easier, I utilized a simple Bluetooth remote camera shutter button together with some scripts. This way the person dropping the anchor can also start the anchor watch immediately from the bow.

Camera shutter button for starting anchor watch
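Most of these shutter buttons pair as a tiny Bluetooth keyboard that emits a volume-up key press, so the script boils down to watching a Linux input device. Below is a sketch of that glue, not the actual script from the boat: the device path is an assumption, and reacting by calling the anchor alarm plugin’s API is only indicated in a comment.

```python
# Watch a Bluetooth shutter button (which shows up as a keyboard sending
# volume-up) via the Linux input subsystem, and fire a callback on press.
# The device path is an assumption for illustration.
import struct

EVENT_FORMAT = "llHHi"  # struct input_event on 64-bit Linux
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)
EV_KEY, KEY_VOLUMEUP, KEY_PRESSED = 0x01, 115, 1

def is_shutter_press(raw_event):
    """True if a raw input_event record is a volume-up key press."""
    _sec, _usec, ev_type, code, value = struct.unpack(EVENT_FORMAT, raw_event)
    return ev_type == EV_KEY and code == KEY_VOLUMEUP and value == KEY_PRESSED

def watch(device="/dev/input/event3", on_press=lambda: print("toggle anchor watch")):
    """Block on the input device and fire the callback on every press."""
    with open(device, "rb") as dev:
        while event := dev.read(EVENT_SIZE):
            if is_shutter_press(event):
                on_press()  # e.g. call the Signal K anchor alarm plugin's API

# synthetic event, so the decoding can be demonstrated without hardware
print(is_shutter_press(struct.pack(EVENT_FORMAT, 0, 0, EV_KEY, KEY_VOLUMEUP, KEY_PRESSED)))
```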


AIS

The Automatic Identification System (AIS) is a radio protocol used by most bigger vessels to broadcast their course and position to others. It can be used for collision avoidance. Having an active transponder on a small boat like Curiosity is a bit expensive, but we decided we’d at least want to see commercial traffic in our chartplotter in order to navigate safely.

For this we bought an RTL-SDR USB stick that can tune into the AIS frequency, and with the rtl_ais software, receive and forward all AIS data into Signal K.

Tracking AIS targets in Freeboard

This setup is still quite new, so we haven’t been able to test it live yet. But it should allow us to see all nearby bigger ships in our chartplotter in real time, assuming we have a good enough antenna.
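rtl_ais emits AIS reports as NMEA-0183-style `!AIVDM` sentences, each ending in a checksum: the XOR of every character between the leading `!` (or `$`) and the `*`. A small sketch for sanity-checking received sentences before trusting them:

```python
# Verify the NMEA 0183 checksum of an AIS sentence: XOR of every
# character between the leading '!' (or '$') and the '*', compared
# against the two hex digits after the '*'.
def nmea_checksum_ok(sentence: str) -> bool:
    try:
        body, received = sentence.strip().lstrip("!$").rsplit("*", 1)
    except ValueError:
        return False  # no '*' present
    checksum = 0
    for char in body:
        checksum ^= ord(char)
    return f"{checksum:02X}" == received.upper()

# A commonly cited example AIVDM sentence
print(nmea_checksum_ok("!AIVDM,1,1,,A,13u?etPv2;0n:dDPwUM1U1Cb069D,0*24"))
```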

Putting it all together

Altogether this is quite a lot of hardware. To house it all, we built a custom backing plate with 3D-printed brackets to hold the various components. The whole setup is called the Voronoi-1 onboard computer. This is a setup that should be easy to duplicate on any small sailing vessel.

The Voronoi-1 onboard computer

The total cost so far for the full boat navigation setup has been around 600€, which is less than a commercial chartplotter alone would cost. The system we have is both easy to extend and easy to fix, even on the go, and it gives us a set of capabilities that would normally require a whole suite of proprietary parts to put together.

Next steps for navigation setup

We of course have plenty of ideas on what to do next to improve the navigation setup. Here are some projects we’ll likely tackle over the coming year:

  • Adding a timeseries database and some data visualization
  • 9 degrees of freedom sensor to track the compass course, as well as boat heel
  • Instrumenting our outboard motor to get RPMs into Signal K and track the engine running time
  • Wind sensor, either open source or commercial

If you have ideas for suitable components or projects, please get in touch!

Source code

Huge thanks to both the Signal K and Hackerfleet communities and the Curiosity crew for making all this happen.

Now we just wait for the curfews to lift so that we can get back to sailing!

Curiosity Crew Badge

Thursday, 26 March 2020

Jitsi and the power of shortcuts

During the last weeks I have used more video calls than in the past; and often the software used for that was Jitsi meet. That also meant that it made sense for me to look into how I and others can use this software more efficiently -- which for me means looking into the available shortcuts.

If you are using Jitsi Meet, e.g. on one of its public instances, you can press ? to see the list of shortcuts:

F - Show or hide video thumbnails
M - Mute or unmute your microphone
V - Start or stop your camera
A - Manage call quality
C - Open or close the chat
D - Switch between camera and screen sharing
R - Raise or lower your hand
S - View or exit full screen
W - Toggle tile view
? - Show or hide keyboard shortcuts
SPACE - Push to talk
T - Show speaker stats
0 - Focus on your video
1-9 - Focus on another person's video

What I use most of the time is M to quickly switch between being muted and unmuted; sometimes in combination with push-to-talk: I first mute myself, then press SPACE while quickly saying something in a larger group, and as soon as I release it, I am muted again.

Another one I use often is V to turn my webcam off and on, in combination with A to quickly reduce the video quality (unfortunately I have not found a way to make lower video quality the default).

And finally, especially when I am moderating meetings, I encourage people to use R to indicate that they want to say something. This way I do not have to ask several times in a meeting whether someone wants to add a point, or whether there is another question. (This is also a feature for which I am missing quick access in the Jitsi Meet mobile application.)

In general I encourage you to check which shortcuts are available in software you use often, as in my experience you will benefit greatly from that knowledge over time.

Monday, 23 March 2020

How and why to properly write copyright statements in your code

This blog post was not easy to write as it started as a very simple thing intended for developers, but later, when I was digging around, it turned out that there is no good single resource online on copyright statements. So I decided to take a stab at writing one.

I tried to strike a good balance between 1) keeping it short and to the point for developers who just want to know what to do, and 2) FOSS compliance officers and legal geeks who want to understand not just best practices, but also the reasons behind them.

If you are extremely short on time, the TL;DR should give you the bare minimal instructions, but if you have just 2 minutes I would advise you to read the actual HowTo a bit further below.

Of course, if you have about 18 minutes of time, the best way is always to start reading at the beginning and finish at the end.

Where else to find this article

A copy of this blog is available also on Liferay Blog.
Haksung Jang (장학성) was awesome enough to publish a Korean translation.


TL;DR

Use the following format:

SPDX-FileCopyrightText: © {$year_of_file_creation} {$name_of_copyright_holder} <{$contact}>

SPDX-License-Identifier: {$SPDX_license_name}

… put that in every source code file and go check out (and follow) the REUSE best practices.

E.g. for a file that I created today and released under the BSD-3-Clause license, I would put the following as a comment at the top of the source code file:

SPDX-FileCopyrightText: © 2020 Matija Šuklje <>

SPDX-License-Identifier: BSD-3-Clause

Introduction and copyright basics

Copyright is automatic (since the Berne convention) and any work of authorship is automatically protected by it – essentially giving the copyright holder1 exclusive power over its work. In order for your downstream to have the rights to use any of your work – be that code, text, images or other media – you need to give them a license to it.

So in order for you to copy, implement, modify etc. the code from others, you need to be given the needed rights – i.e. a license2 –, or make use of a statutory limitation or exception3. And if that license has some obligations attached, you need to meet them as well.

In any case, you have to meet the basic requirements of copyright law as well. At the very least you need to have the following two in place:

  • attribution – list the copyright holders and/or authors (especially in jurisdictions which recognise moral rights);
  • license(s) – since a license is the only thing that gives anybody other than the copyright holder themself the right to use the code, you are well advised to have a notice of the license and its full text present – this goes both for your outbound licenses and the inbound licenses you received from others by using 3rd party works, such as copied code or libraries.

Inbound vs. outbound licenses

The license you give to your downstream is called an outbound license, because it handles the rights in the code that flow out of you. In turn that same license in the same work would then be perceived by your downstream as their inbound license, as it handles the rights in the code that flows into them.

In short, licenses describing rights flowing in are called inbound licenses, and the licenses describing rights flowing out are called outbound licenses.

The good news is that attribution is the author’s right, not obligation. And you are obliged to keep the attribution notices only insofar as the author(s) made use of that right. Which means that if the author has not listed themselves, you do not have to hunt them down yourself.

Why have the copyright statement?

Which brings us to the question of whether you need to write your own copyright statement4.

First, some very brief history …

The urge to absolutely have to write copyright statements stems from inertia in the USA, which only joined the Berne convention in 1989, well after computer programs were a thing. Until then, US copyright law still required an explicit copyright statement for a work to be protected.

Copyright statements are useful

The copyright statement is not required by law, but in practice it is very useful: at best as proof, and more likely as an indicator, of the copyright situation of that work. This can be very useful for compliance reasons, traceability of the code, etc.

Attribution is practically unavoidable, because a) most licenses explicitly call for it, and if that fails b) copyright laws of most jurisdictions require it anyway.

And if that is not enough, then there is also c) sometimes you will want to reach the original author(s) of the code for legal or technical reasons.

So storing both the name and contact information makes sense for when things go wrong. Finding the original upstream of a runaway file you found in your codebase – if there are no names or links in it – is a huge pain and often requires (currently still) expensive specialised software. I would suspect the onus on a FOSS project to be much lower than on a corporation in this case, but it is still better to put in a little effort upfront than to have to do some serious archæology later.

How to write a good copyright statement and license notice

Finally we come to the main part of this article!

A good copyright statement should consist of the following information:

  • start with the © sign;
  • the year of the first publication – a good date is the year in which you created the file; then do not touch it any more;
  • the name of the copyright holder – typically the author, but it can also be your employer or, if there is a CLA in place, another legal entity or person;
  • a valid contact for the copyright holder.

As an example, this is what I would put on something I wrote today:

© 2020 Matija Šuklje <>

While you are at it, it makes a lot of sense to also notify everyone which license you are releasing your code under. Using an SPDX ID is a great way to unambiguously state the license of your code. (See the note below for an example of how things can go wrong otherwise.)

And if you have already come this far, it is just a small step towards following the best practices described by REUSE: using SPDX tags to make your copyright statement (marked with SPDX-FileCopyrightText) and license notice (marked with SPDX-License-Identifier and followed by an SPDX ID).

Here is an example of a copyright statement and license notice that ticks all the above boxes and also complies with both the SPDX and REUSE specifications:

SPDX-FileCopyrightText: © 2020 Matija Šuklje <>

SPDX-License-Identifier: BSD-3-Clause

Now make sure you have these in comments of all your source code files.
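Checking that every file actually carries both tags is easy to automate. Below is a toy version of what the `reuse lint` tool does; the `src` directory and the `*.py` glob are placeholder assumptions, and a real check would cover all file types and comment styles.

```python
# A toy SPDX header check: walk a source tree and report files whose
# header lacks either SPDX tag. "src" and the *.py glob are placeholders.
import re
from pathlib import Path

COPYRIGHT_TAG = re.compile(r"SPDX-FileCopyrightText:\s*\S+")
LICENSE_TAG = re.compile(r"SPDX-License-Identifier:\s*\S+")

def missing_headers(root, pattern="**/*.py", head_chars=2048):
    """Yield files whose first couple of KiB lack either SPDX tag."""
    for path in Path(root).glob(pattern):
        head = path.read_text(encoding="utf-8", errors="replace")[:head_chars]
        if not (COPYRIGHT_TAG.search(head) and LICENSE_TAG.search(head)):
            yield path

for offender in missing_headers("src"):
    print(f"{offender}: missing SPDX copyright and/or license tag")
```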


FAQ

Over the years, I have heard many questions on this topic – both from developers and lawyers.

I will try to address them below in no particular order.

If you have a question that is not addressed here, do let me know and I will try to include it in an update.

Why keep the year?

Some might argue that for the sake of simplicity it would be much easier to maintain copyright statements if we just skip the years. In fact, that is a policy at Microsoft/GitHub at the time of this writing.

While I agree that not updating the year simplifies things enormously, I do think that keeping a date helps preserve at least a vague timeline in the codebase. As the question is when the work was first expressed in a fixed medium, the earliest provable date is the time when that file was first created.

In addition, having an easy way to find the earliest date of a piece of code might also prove useful in figuring out when an invention was first expressed to the general public – something that might become useful for patent defense.

This is also why e.g. in Liferay our new policy is to write the year of the file creation, and then not change the year any more.

Innocent infringement excursion for legal geeks

17 U.S. Code § 401(d) states that if a work carries a copyright notice in the form that the law prescribes, the defendant in a copyright infringement case cannot rely on the innocent infringement defence, except if they had reason to believe their use was covered by fair use. And even then, the innocent infringer would have to be e.g. a non-profit broadcaster or archive to still be eligible for such a defence.

So, if you are concerned with copyright violations (at least in USA), you may actually want to make sure your copyright statements include both the copyright sign and year of publication.

See also note in Why the © sign for how a copyright notice following the US copyright act looks like.

Why not bump the year on change?

I am sure you have seen something like this before:
Copyright (C) 1992, 1995, 2000, 2001, 2003 CompanyX Inc.

The presumption behind this is that whenever you add a new year to the copyright statement, the copyright term starts anew, thereby prolonging the time the file is protected by copyright.

Adding a new year on every change – or, even worse, simply every 1st of January – is a practice still too widespread even today. Unfortunately, doing this is useless at best, and misleading at worst. For the origin of this myth, see the short history above.

A big problem with this approach is that not every contribution is original or substantial enough to be copyrightable – even the popular 5 (or 10, or X) SLOC rule of thumb5 is, legally speaking, very debatable.

So, in order to keep your copyright statement true, you would need to make a judgement call every time: was the change substantial and original enough to be granted copyright protection by the law, and should the year therefore be bumped? That is a substantial test to apply every time you change a file.

On the other hand, copyright lasts at least 50 (and usually 70) years6 after the death of the author; or, if the copyright holder is a legal entity (e.g. CompanyX Inc.), after publication. So the risk of your own copyright expiring under your feet is very, very low.

Worst case thought experiment

Let us imagine the worst possible scenario now:

1) you never bump the year in a copyright statement in a file; and
2) 50+ years after its initial release, someone copies your code as if it were in the public domain. Now, if you were to take issue with that and go to court, and
3) the court would (very unlikely) take only the copyright statements in that file into account as the only proof, and based on that
4) rule that the code in that file had fallen into the public domain and therefore the FOSS license would not apply to it any more.

The end result would simply be that (in one jurisdiction) that file would fall into public domain and be up for grabs by anyone for anything, no copyright, no copyleft, 50+ years from the file’s creation (instead of e.g. 5, maybe 20 years later).

But, honestly, how likely is it that 50 years from now the same (unaltered) code would still be (commercially) interesting?

… and if it turns out you do need to bump the year eventually, you still have, at worst, 50 years to sort it out – so, ample opportunity to mitigate the risk.

In addition to that, as typically a single source code file is just one of the many cogs in a bigger piece of software, what you are more concerned with is the software product/project as a whole. As the software grows, you will keep adding new files, and those will obviously have newer years in them. So the codebase as a whole work will already include copyright statements with newer years in it anyway.

Keep the Git/VCS history clean

Also, bumping the year in all files every year messes with the usefulness of the Git/VCS history, makes the log unnecessarily long(er), and makes the repository consume more space.

It makes all the files seem equally old (in years), which makes it hard to identify stale code if you are looking for it.

Another issue might be that your year-bumping script is too trigger-happy and bumps the years also in files that do not even belong to you, furthering misinformation both in your VCS and in the files’ copyright notices.

Why not use a year range?

Similar to the previous question, the year span (e.g. 1990-2013) is basically just a lazy version of bumping the year. So all of the above-mentioned applies.

A special case is when people use a range like {$year}-present. This has almost all of the above-mentioned issues7, plus it adds another dimension of confusion, because what constitutes the “present” is an open – and potentially philosophical – question. Does it mean:

  • the time when the file was last modified?
  • the time it was released as a package?
  • the time you downloaded it (maybe for the first time)?
  • the time you ran it the last time?
  • or perhaps even the ever eluding “right now”?

As you can see, this does not help much at all. Quite the opposite!

But doesn’t Git/Mercurial keep a better track?

Not reliably.

Git (and other VCS) are good at storing metadata, but you should be careful about it.

Git does have an Author field, which is separate from the Committer field. But even if we were to assume – and that is a big assumption8 – Git’s Author was the actual author of the code committed, they may not be the copyright holder.

Furthermore, the way git blame and git diff currently work is line-by-line, using the last change as the final author, which makes Git suboptimal for finding out who actually wrote what.

Token-based blame information

For a more fine-grained tool to see who to blame for which piece of code, check out cregit.

And ultimately – and most importantly – as soon as the file(s) leave the repository, the metadata is lost. Whether it is released as a tarball, the repository is forked and/or rebased, or a single file is simply copied into a new codebase, the trace is lost.

All of these issues are addressed by simply including the copyright statement and license information in every file. The REUSE best practices handle this very well.

Why the © sign?

Some might argue that the English word “Copyright” is so common nowadays that everyone understands it, but if you actually read the copyright laws out there, you will find that using © (i.e. the copyright sign) is the only way to write a copyright statement that is common in copyright laws around the world9.

Using the © sign makes sense, as it is the common global denominator.

Comparison between US and Slovenian copyright statements

As an EU example, the Slovenian ZASP §175.(1) simply states that holders of exclusive author’s rights may mark their works with a (c)/© sign in front of their name or firm and year of first publication, which can be simply put as:

© {$year_of_first_publication} {$name_of_author_or_other_copyright_holder}

On the other side of the pond, in the USA, 17 U.S. Code § 401(b) uses more words to give a more varied approach; relevant to this question, § 401(b)(1) prescribes the use of

the symbol © (the letter C in a circle), or the word “Copyright”, or the abbreviation “Copr.”;

The rest you can go read yourself, but can be summarised as:

(©|Copyright|Copr.) {$year_of_first_publication} {$name_or_abbreviation_of_copyright_holder}
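The summarised formats lend themselves to a mechanical check. Here is a hedged sketch: the regex treats the holder/contact part as free text (a simplification), and the example notice uses a made-up name and address.

```python
# Match a notice against the formats summarised above: the © sign (or
# "(c)"), the word "Copyright", or "Copr.", then the year of first
# publication, then the holder. The name and address in the examples are
# made up, and treating the holder part as free text is a simplification.
import re

NOTICE = re.compile(
    r"^(©|\(c\)|Copyright|Copr\.)\s+"  # copyright sign or word
    r"(\d{4})\s+"                      # year of first publication
    r"(.+)$"                           # copyright holder (and contact)
)

for candidate in [
    "© 2020 Jane Doe <jane@example.org>",
    "Copr. 1995 CompanyX Inc.",
    "All rights reserved",
]:
    print(candidate, "->", bool(NOTICE.match(candidate)))
```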

See also the note in Why keep the year for why this can matter in front of USA courts.

While the © sign is a pet peeve of mine, from the practical point of view, this is the least important point here. As we established in the introduction, copyright is automatic, so the actual risk of not following the law by its letter is pretty low if you write e.g. “Copyright” instead.

Why leave a contact? Even when there is more than one author?

A contact is in no way required by copyright law, but from practical reasons can be extremely useful.

It can happen that you need to reach the author and/or copyright holder of the code with a legal or technical question. Perhaps you need to ask how the code works, or have a fix you want to send their way. Perhaps you found a licensing issue and want to help them fix it (or ask for a separate license). In all of these cases, having a contact helps a lot.

As pretty much all of the internet still hinges on e-mail10, the copyright holder’s e-mail address should be the first option. But anything goes, really, as long as that contact is easily accessible and remains in use long-term.

Avoiding orphan works

For the legal geeks out there, a contact to the copyright holder mitigates the issue of orphan works.

There will be cases where the authorship is very dispersed or lies with a legal entity instead. In those cases, it might make more sense to provide a URL to either the project’s or the legal entity’s homepage and provide useful information there. If a project lists copyright holders in a file such as AUTHORS or CONTRIBUTORS.markdown, a permalink to that file (in the master branch) of the publicly available repository could also be a good URL option.

How to handle multitudes of authors?

Here are two examples of what you can write in case the project (e.g. Project X) has many authors and does not have a CAA or exclusive CLA in place to aggregate the copyright in a single entity:

© 2010 The Project X Authors <{$url}>

© 1998 Contributors to the Project X <{$url}>

What about public domain?

Public domain is tricky.

In general, the public domain consists of works to which the copyright term has expired11.

While in some jurisdictions (e.g. USA, UK) you can actually waive your copyright and dedicate your work to the public domain, in most jurisdictions (e.g. most EU member countries) that is not possible.

Which means that, depending on the applicable jurisdiction, it may be that although an author wrote that they dedicate their work to the public domain, this does not meet the legal standard for it to actually happen – they retain the copyright in their own work.

Unsurprisingly, FOSS compliance officers and other people/projects who take copyright and licensing seriously are typically very wary of statements like “this is public domain”.

This can be mitigated in two ways:

  • instead of some generic wording, when you want to dedicate something to the public domain, use a tried and tested public copyright waiver / public domain dedication that falls back to a very permissive license, such as CC0-1.0; and
  • include your name and contact if you are the author in the SPDX-FileCopyrightText: field – 1) because in doubt that will associate you with your dedication to the public domain, and 2) in case anything is unclear, people have a contact to you.

This makes sense to do even for files that you deem are not copyrightable, such as config files – if you mark them as above, everyone will know that you will not exercise your author’s rights (if they existed) in those files.
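Put together, such a header could look like this (name and address are hypothetical):

```text
# SPDX-FileCopyrightText: 2020 Jane Hacker <jane@example.org>
# SPDX-License-Identifier: CC0-1.0
```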

It may seem a bit of a hassle for something you just released to the public to use however they see fit, without people having to ask you for permission. I get that, I truly do! But do consider that if you already put so much effort into making this wonderful stuff and donating it to humanity at large, it would be a huge pity if, over (silly) legal details, in the end people were not (able) to use it at all.

What about minified JS?

Modern code minifiers/uglifiers tend to have an optional flag to preserve copyright and licensing info, even when they rip out all the other comments.

The copyright does not simply go away if you minify/uglify the code, so do make sure that you use a minifier that preserves both the copyright statement as well as the license (at least its SPDX Identifier) – or better yet, the whole REUSE-compliant header.
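For example, a header of the following shape tends to survive minification, since common minifiers such as terser by default keep comments that open with /*! or contain @license or @preserve (the names here are hypothetical):

```js
/*!
 * SPDX-FileCopyrightText: 2020 Jane Hacker <jane@example.org>
 * SPDX-License-Identifier: MIT
 */
```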

Transformations of code

Translations between different languages, compilations and other transformations are all exclusive rights of the copyright owner. So you need a valid license even for compiling and minifying.

What is wrong with “All rights reserved”?

Often you will see “all rights reserved” in copyright statements even in a FOSS project.

The cause of this, I suspect, lies again in copycat behaviour, where people simply copy what they have so often seen on a (music) CD or in a book. Again, copyright law does not ask for this, even if you want to follow the fullest formal copyright statement rules.

But what it does bring, is confusion.

The statement “all rights reserved” obviously contradicts the FOSS license the same file is released under. The latter gives everyone the rights to use, study, share and improve the code, while the former states that all of these rights the author reserves to themself.

So, as those three words cause a contradiction, and do not bring anything useful to the table in the first place, you should not write them in vain.

Practical example

Imagine[12] a FOSS project that has a copy of the MIT license stored in its LICENSE file and (only) the following comment at the top of all its source code files:

# This file is Copyright (C) 1997 Master Hacker, all rights reserved.

Now imagine that someone simply copies one file from that repository/archive into their own work, which is under the AGPL-3.0-or-later license, and this is also what it says in the LICENSE file in the root of their own repository. And you, in turn, are using this second person’s codebase.

According to the information you have at hand:

  • the copyright in the copied file is held by Master Hacker;
  • apparently, Mr Hacker reserves all the rights they have under copyright law;
  • if you felt like taking a risk, you could assume that the copied file is under the AGPL-3.0-or-later license – which is false, and could lead to copyright violation[13];
  • if you wanted to play it safe, you could assume that you have no valid license to this file, so you decide to remove it and work around it – again false and much more work, but safe;
  • you could wait until 2067 and hope this actually falls under public domain by then – but who has time for that.

This example highlights how problematic the wording “all rights reserved” can be, even if there is a license text somewhere in the codebase.

This can be avoided by using a sane copyright statement (as described in this blog post) and including an unambiguous license ID. REUSE ties both of these together in an easy-to-follow specification.
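Such machine-readable headers are also easy to check automatically. Here is only a toy sketch (the reuse tool ships a real linter) that flags file text lacking either SPDX tag:

```python
# The two tags a REUSE-style header should carry.
TAGS = ("SPDX-FileCopyrightText:", "SPDX-License-Identifier:")

def missing_tags(text):
    """Return the SPDX tags that do not appear in the given file text."""
    return [tag for tag in TAGS if tag not in text]

header = (
    "# SPDX-FileCopyrightText: 2020 Jane Hacker <jane@example.org>\n"
    "# SPDX-License-Identifier: MIT\n"
)
print(missing_tags(header))               # → []
print(missing_tags("# no header here\n"))  # → both tags reported missing
```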

hook out → hat tip to the TODO Group for giving me the push to finally finish this article and Carmen Bianca Bakker for some very welcome feedback

  1. This is presumed to be the author at least initially. But depending on circumstances it can also be some other person, a legal entity, a group of people, etc. 

  2. A license is by definition “[t]he permission granted by competent authority to exercise a certain privilege that, without such authorization, would constitute an illegal act, a trespass or a tort.” 

  3. Limitations and exceptions (or fair use/dealings) in copyright are extremely limited when it comes to software compared to more traditional media. Do not rely on them. 

  4. In the USA, the copyright statement is often called a copyright notice. The two terms are used interchangeably. 

  5. E.g. the 5 SLOC rule of thumb means that any contribution that is 5 lines or shorter, is (likely) too short to be deemed copyrightable, and therefore can be treated as un-copyrightable or as in public domain; and on the flip-side anything longer than 5 lines of code needs to be treated as copyrightable. This rule can pop up when a project has a relatively strict contribution agreement (a CLA or even CAA), but wants some leeway to accept short fix patches from drive-by contributors. The obvious problem with this is that on one hand someone can be very original even in 5 lines (think haiku), while one can also have pages and pages of absolute fluff or just plain raw factual numbers. 

  6. This varies from jurisdiction to jurisdiction. The Berne convention stipulates at least 50 years after the death of the author as the baseline. There are very few non-signatory states that have shorter terms, but the majority of countries have life + 70 years. The current longest copyright term is life + 100 years, in Mexico. 

  7. The only improvement is that it avoids messing up the Git/VCS history. 

  8. In practice what the Author field in a Git repository actually includes varies quite a bit and depends on how the committer set up and used Git. 

  9. Of course, I did not go through all of the copyright laws out there, but I checked a handful of them in different languages I understand, and this is the pattern I identified. If anyone has a more thorough analysis at hand, please reach out and I will happily include it. 

  10. Just think about it, pretty much every time you create a new account somewhere online, you are asked for your e-mail address, and in general people rarely change their e-mail address. 

  11. As stated before, in most jurisdictions that is 70 years after the death of the author. 

  12. I suspect many of the readers not only can imagine one, but have seen many such projects before ;)

  13. Granted, MIT code embedded into AGPL-3.0-or-later code is less risky than vice versa. But simply imagine what it would be the other way around … or with an even odder combination of licenses. 

Saturday, 21 March 2020

Using LibreDNS with dnscrypt-proxy

Using DNS over HTTPS aka DoH is fairly easy with the latest version of Firefox. Using LibreDNS takes just a few settings in your browser, see here. On LibreDNS’ site, there are also instructions for DNS over TLS aka DoT.

In this blog post, I am going to present how to use dnscrypt-proxy as a local DNS proxy resolver, using DoH with the LibreDNS noAds (no ads/no trackers) endpoint. With this setup, your entire operating system can use this endpoint for everything.

Disclaimer: This blog post is about dnscrypt-proxy version 2.



dnscrypt-proxy 2 - A flexible DNS proxy, with support for modern encrypted DNS protocols such as DNSCrypt v2, DNS-over-HTTPS and Anonymized DNSCrypt.


sudo pacman -S dnscrypt-proxy

Verify Package

$ pacman -Qi dnscrypt-proxy

Name            : dnscrypt-proxy
Version         : 2.0.39-3
Description     : DNS proxy, supporting encrypted DNS protocols such as DNSCrypt v2 and DNS-over-HTTPS
Architecture    : x86_64
URL             :
Licenses        : custom:ISC
Groups          : None
Provides        : None
Depends On      : glibc
Optional Deps   : python-urllib3: for generate-domains-blacklist [installed]
Required By     : None
Optional For    : None
Conflicts With  : None
Replaces        : None
Installed Size  : 12.13 MiB
Packager        : David Runge <>
Build Date      : Sat 07 Mar 2020 08:10:14 PM EET
Install Date    : Fri 20 Mar 2020 10:46:56 PM EET
Install Reason  : Explicitly installed
Install Script  : Yes
Validated By    : Signature

Disable systemd-resolved

if necessary

$ ps -e fuwww | grep re[s]olv
systemd+     525  0.0  0.1  30944 21804 ?        Ss   10:00   0:01 /usr/lib/systemd/systemd-resolved

$ sudo systemctl stop systemd-resolved.service

$ sudo systemctl disable systemd-resolved.service
Removed /etc/systemd/system/
Removed /etc/systemd/system/dbus-org.freedesktop.resolve1.service.


It is time to configure dnscrypt-proxy to use libredns

sudo vim /etc/dnscrypt-proxy/dnscrypt-proxy.toml

At the top of the file, there is a server_names section

  server_names = ['libredns-noads']
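For reference, the relevant part of a minimal dnscrypt-proxy.toml for this setup could look like the following — the listen address shown is dnscrypt-proxy’s usual local default, and everything else can stay as shipped:

```toml
server_names = ['libredns-noads']
listen_addresses = ['127.0.0.1:53']
```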

Resolv Conf

We can now change our resolv.conf to use our local IP address.

echo -e "nameserver 127.0.0.1\noptions edns0 single-request-reopen" | sudo tee /etc/resolv.conf

$ cat /etc/resolv.conf
nameserver 127.0.0.1
options edns0 single-request-reopen


start & enable dnscrypt service

sudo systemctl start dnscrypt-proxy.service

sudo systemctl enable dnscrypt-proxy.service
$ sudo ss -lntup '( sport = :domain )'

Netid  State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port  Process
udp    UNCONN  0       0*          users:(("dnscrypt-proxy",pid=55795,fd=6))
tcp    LISTEN  0       4096*          users:(("dnscrypt-proxy",pid=55795,fd=7))


$ dnscrypt-proxy -config /etc/dnscrypt-proxy/dnscrypt-proxy.toml -list
$ dnscrypt-proxy -config /etc/dnscrypt-proxy/dnscrypt-proxy.toml -resolve
Resolving []

Domain exists:  yes, 2 name servers found
Canonical name:
IP addresses:, 2a03:f80:49:158:255:214:14:80
TXT records:    v=spf1 ip4: ip6:2a03:f80:49:158:255:214:14:0/112 -all
Resolver IP: (


asking our local dns (proxy)

dig @localhost
; <<>> DiG 9.16.1 <<>> @localhost
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2449
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

; EDNS: version: 0, flags:; udp: 512
;                   IN      A

;; ANSWER SECTION:            7167    IN      A

;; Query time: 0 msec
;; WHEN: Sat Mar 21 19:48:53 EET 2020
;; MSG SIZE  rcvd: 56

That’s it !

Your system is now using the LibreDNS DoH noAds endpoint.

Manual Steps

If your operating system does not yet support dnscrypt-proxy-2 then:

Latest version

You can always download the latest version from github:

To view the files

curl -sLo - $(curl -sL | jq -r '.assets[].browser_download_url | select( contains("linux_x86_64"))') | tar tzf -


To extract the files

$ curl -sLo - $(curl -sL | jq -r '.assets[].browser_download_url | select( contains("linux_x86_64"))') | tar xzf -

$ ls -l linux-x86_64/
total 9932
-rwxr-xr-x 1 ebal ebal 10117120 Μαρ  21 13:56 dnscrypt-proxy
-rw-r--r-- 1 ebal ebal      897 Μαρ  21 13:50 example-blacklist.txt
-rw-r--r-- 1 ebal ebal     1277 Μαρ  21 13:50 example-cloaking-rules.txt
-rw-r--r-- 1 ebal ebal    20965 Μαρ  21 13:50 example-dnscrypt-proxy.toml
-rw-r--r-- 1 ebal ebal      970 Μαρ  21 13:50 example-forwarding-rules.txt
-rw-r--r-- 1 ebal ebal      439 Μαρ  21 13:50 example-ip-blacklist.txt
-rw-r--r-- 1 ebal ebal      743 Μαρ  21 13:50 example-whitelist.txt
-rw-r--r-- 1 ebal ebal      823 Μαρ  21 13:50 LICENSE
-rw-r--r-- 1 ebal ebal     2807 Μαρ  21 13:50 localhost.pem

$ cd linux-x86_64/

Prepare the configuration

$ cp example-dnscrypt-proxy.toml dnscrypt-proxy.toml
$ vim dnscrypt-proxy.toml

At the top of the file, there is a server_names section

  server_names = ['libredns-noads']
$ ./dnscrypt-proxy -config dnscrypt-proxy.toml --list
[2020-03-21 19:27:20] [NOTICE] dnscrypt-proxy 2.0.40
[2020-03-21 19:27:20] [NOTICE] Network connectivity detected
[2020-03-21 19:27:22] [NOTICE] Source [public-resolvers] loaded
[2020-03-21 19:27:23] [NOTICE] Source [relays] loaded

Run as root

$ sudo ./dnscrypt-proxy -config ./dnscrypt-proxy.toml
[sudo] password for ebal: *******

[2020-03-21 20:11:04] [NOTICE] dnscrypt-proxy 2.0.40
[2020-03-21 20:11:04] [NOTICE] Network connectivity detected
[2020-03-21 20:11:04] [NOTICE] Source [public-resolvers] loaded
[2020-03-21 20:11:04] [NOTICE] Source [relays] loaded
[2020-03-21 20:11:04] [NOTICE] Firefox workaround initialized
[2020-03-21 20:11:04] [NOTICE] Now listening to [UDP]
[2020-03-21 20:11:04] [NOTICE] Now listening to [TCP]
[2020-03-21 20:11:04] [NOTICE] [libredns-noads] OK (DoH) - rtt: 65ms
[2020-03-21 20:11:04] [NOTICE] Server with the lowest initial latency: libredns-noads (rtt: 65ms)
[2020-03-21 20:11:04] [NOTICE] dnscrypt-proxy is ready - live servers: 1

Check DNS

Interestingly enough, the first query takes almost 300 ms, the second one is zero!

$ dig

; <<>> DiG 9.11.3-1ubuntu1.11-Ubuntu <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 53609
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

; EDNS: version: 0, flags:; udp: 512
;   IN  A

;; ANSWER SECTION:    2399  IN  A

;; Query time: 295 msec
;; WHEN: Sat Mar 21 20:12:52 EET 2020
;; MSG SIZE  rcvd: 72

$ dig

; <<>> DiG 9.11.3-1ubuntu1.11-Ubuntu <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31159
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

; EDNS: version: 0, flags:; udp: 512
; IN  A


;; Query time: 0 msec
;; WHEN: Sat Mar 21 20:12:56 EET 2020
;; MSG SIZE  rcvd: 72

That’s it

Thursday, 19 March 2020

Tools I use daily the Win10 edition

Almost three (3) years ago I wrote an article about the tools I use daily. But for the last 18 months (or so) I have been partially using Windows 10 due to my new job role, so I would like to write an updated version of that article.


I’ll try to use the same structure as the previous article for comparison; keep in mind this is a nine-to-five (work-related) setup. So here it goes.



NOTICE beer is just for decor ;)

Operating System

I use Win10 as the primary operating system on my work laptop. I have a couple of impediments that do not work on a Linux distribution, but I am not going to bother you with them (it’s webex and some internal Internet-Explorer-only sites).

We used to use webex as our primary communication tool, sharing our screens with video cameras on so that everybody can see each other. Working with remote teams, it’s kind of nice to see the faces of your coworkers. A lot of meetings are integrated with the company’s Outlook. I use OWA (webmail) as an alternative, but in fact it is still difficult to use both of them with a Linux desktop.

We successfully switched to Slack for text communications, video calls and screen sharing. This choice gave us a boost in productivity, as we now use Slack calls daily to align with each other. Webex is still in the mix, though. The company is now using a newer webex version with better browser support, so that is a plus. It’s not always easy to get everybody a webex license, but as long as we are using Slack it is okay. The only problem with Slack on Linux is that, when working with multiple monitors, you can not choose which monitor to share.

I have considered using a VM (virtual machine), but a Win10 VM needs more than 4G of RAM and a couple of CPUs just to boot up. That would mean reducing my work laptop’s resources for half the day, every day. So for the time being I am staying with Win10 as the primary operating system. I do have to use the winVM for some other internal work, but only for a limited time.



Default Win10 desktop

I daily use these OpenSource Tools:

  • AutoHotkey for keyboard shortcut (I like switching languages by pressing capslock)
  • Ditto as clipboard manager
  • Greenshot for screenshot tool

and from time to time, I also use:

except plumb, everything else is opensource!

So I am trying to have the same desktop experience as on my Linux desktop; e.g. my language switch is capslock (AutoHotkey), I don’t even think about it.


Disk / Filesystem

Default Win10 filesystem with BitLocker. Every HW change will lock the entire system. In the past this happened twice after a Windows firmware device upgrade. Twice!

Dropbox as a cloud sync software, with EncFSMP partition and syncthing for secure personal syncing files.

(same setup as linux, except bitlocker is luks)



OWA for calendar purposes and … still Thunderbird as my primary mail reader.

Thunderbird 68.6.0 AddOns:

(same setup as linux)



Windows Subsystem for Linux aka WSL … waiting for the official WSLv2! This is a huge, HUGE upgrade for Windows. I have set up an Arch Linux WSL environment to continue working in a Linux environment, I mean bash. I use my WSL Arch Linux as a jumphost to my VMs.


Terminal Emulator

  • Mintty: the best terminal emulator for WSL. Small, not too fancy, just works, beautiful, love it.



Using Visual Studio Code for scripting. vim within WSL and notepad for temporary text notes. I have switched to Boostnote for markdown and as my primary note editor.

(same setup as linux)



Multiple Instances of Firefox, Chromium, Tor Browser and brave

Primary Browser: Firefox
Primary Private Browsing: Brave

(same setup as linux)



I mostly use Slack and Signal Desktop. We are using webex but I prefer Zoom. Riot/Matrix for decentralized groups and the IRC bridge. To be honest, I also use Viber & Messenger (only through the web browser).

(same setup as linux - minus the Viber client)



VLC for Windows, what else? Also GIMP for image editing. I have switched to Spotify for music and draw.io for diagrams. Last, I use CPod for podcasts. Netflix (sometimes).

(same setup as linux)


In conclusion

I have switched to a majority of Electron applications. I use the same applications on my Linux boxes. Encrypted notes in Boostnote, synced over syncthing. Same browsers, same bash/shell; the only things I don’t have on my Linux boxes are webex and Outlook. Considering everything else, I think it is a decent setup across every distro.


Thanks for reading my post.

Tag(s): win10

Tuesday, 17 March 2020

How to write your Pelican-powered blog using ownCloud and WebDAV

Originally this HowTo was part of my last post – a lengthy piece about how I migrated my blog to Pelican. As this specific modification might be more interesting than reading the whole thing, I decided to fork and extend it.

What and why?

What I was trying to do is to be able to add, edit and delete content from Pelican from anywhere, so whenever inspiration strikes I can simply take out my phone or open up a web browser and create a rough draft. Basically a make-shift mobile and desktop blogging app.

I decided that the easiest way to do this was to access my content via WebDAV through the ownCloud instance that runs on the same server.

Works also on Nextcloud

As an update a few years after I wrote this blog post, I have since migrated from ownCloud to Nextcloud and it all still works the same way.

Why not Git and hooks?

The answer is quite simple: because I do not need it and it adds another layer of complication.

I know many use Git and its hooks to keep track of changes as well as for backups and for pushing from remote machines onto the server. And that is a very fine way of running it, especially if there are several users committing to it.

But for the following reasons, I do not need it:

  • I already include this page with its MarkDown sources, settings and the HTML output in my standard RSnapshot backup scheme of this server, so no need for that;
  • I want to sometimes draft my posts on my mobile and Git and Vim on a touch-screen are just annoying to use;
  • this is a personal blog, so the distributed VCS side of Git is just an overhead really;
  • there is no added benefit to sharing the MarkDown sources on-line, if all the HTML sources are public anyway.

Setting up the server

Pairing up Pelican and ownCloud

In ownCloud it is very easy to mount external storage, and a folder local to the server is still considered “external” as it is outside of ownCloud. Needless to say, there is a nice GUI for that.

Once you open up the Admin page in ownCloud, you will see the External Storage settings. For security reasons only admins can mount a local folder, so if you aren’t one, you will not see Local as an option and you will have to ask your friendly ownCloud sysAdmin to add the folder from his Admin page for you.

If that is not an option, on a GNU/Linux server there is an easy, yet hackish solution as well: just link Pelican’s content folder into your ownCloud user’s file system – e.g:

ln -s /var/www/ /var/www/owncloud/htdocs/data/hook/files/Blog

In order to have the files writeable over WebDAV, they need to be writeable by the user that PHP and the web server run under – e.g.:

chown -R nginx:nginx /var/www/owncloud/htdocs/data/hook/files/Blog/

Automating page generation and ownership

To have pages constantly automatically generated, there is an option to call pelican --autoreload, and I did consider turning it into an init script, but decided against it for two reasons:

  • it consumes too much CPU power just to check for changes;
  • as on my poor ARM server a full (re-)generation of this blog takes about 6 minutes[2], I did not want to hammer my system every time I save a minor change.

What I did instead was to create an fcronjob to (re-)generate the website every night at 3 in the morning (and send a mail to root’s default address), under the condition that blog posts have either been changed in content or added since yesterday:

%nightly,mail * 3 cd /var/www/ && posts=(content/**/*.markdown(Nm-1)); if (( $#posts )) LC_ALL="en_GB.utf8" make html

Update: the above command is changed to use Zsh; for the old sh version, use:

%nightly,mail * 3 cd /var/www/ && [[ `find content -iname "*.markdown" -mtime -1` != "" ]] && LC_ALL="en_GB.utf8" make html

In order to have the file permissions on the content directory always correct for ownCloud (see above), I changed the Makefile a bit. The relevant changes can be seen below:

    chown -R nginx:nginx $(INPUTDIR)

    [ ! -d $(OUTPUTDIR) ] || rm -rf $(OUTPUTDIR)

    chown -R nginx:nginx $(INPUTDIR)
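For context, Pelican’s stock Makefile defines html and clean targets; with the chown calls appended, they end up looking roughly like this (a sketch against the stock variable names, recipe lines tab-indented as Make requires — not the exact file):

```make
html:
	$(PELICAN) $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS)
	chown -R nginx:nginx $(INPUTDIR)

clean:
	[ ! -d $(OUTPUTDIR) ] || rm -rf $(OUTPUTDIR)
	chown -R nginx:nginx $(INPUTDIR)
```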

E-mail draft reminder

Not directly relevant, but still useful.

In order not to forget any drafts unattended, I have also set up an FCron job to send me an e-mail with a list of all unfinished drafts to my private address.

It is a very easy hack really, but I find it quite useful to keep track of things – find the said fcronjob below:

%midweekly,mailto( * * cd /var/www/ && ack "Status: draft"

Client software


As a mobile client I plan to use ownNotes, because it runs on my Nokia N9[1] and supports MarkDown highlighting out-of-the-box.

All I needed to do in ownNotes is to provide it with my ownCloud log-in credentials and state Blog as the "Remote Folder Name" in the preferences.

But before I can really make use of ownNotes, I have to wait for it to start properly managing file-name extensions.

ownCloud web interface

Since ownCloud includes a webGUI text editor with MarkDown highlighting out of the box, I sometimes use that as well.

An added bonus is that the Activity feed of ownCloud keeps a log of when which file changed or was added.

It does not seem possible yet to collaboratively edit files other than ODT in ownCloud’s webGUI, but I imagine that might become possible in the future.

Kate via WebDAV

In many other desktop environments it is child’s play to add a WebDAV remote folder — just adding a link to the file manager should be enough, e.g.: webdavs://

KDE’s Dolphin makes it easier for you, because all you have to do is select Remote → Add remote folder and if you already have a connection to your ownCloud with some other service (e.g. Zanshin and KOrganizer for WebCal), it will suggest all the details to you, if you choose Recent connection.

Once you have the remote folder added, you can use it transparently all over KDE. So when you open up Kate, you can simply navigate the remote WebDAV folders, open up the files, edit and save them as if they were local files. It really is as easy as that! ☺


I probably could have also used the more efficient KIO FISH, but I have not bothered with setting up a more complex permission set-up for such a small task. For security reasons it is not possible to log in via SSH using the same user the web server runs under.

SSH and Vim

Of course, it is also possible to ssh to the web server, su to the correct user, edit the files with Vim and let FCron and Make file make sure the ownership is done appropriately.

hook out → back to studying Arbitration law

  1. Yes, I am well aware you can run Vim and Git on MeeGo Harmattan and I do use it. But Vim on a touch-screen keyboard is not very fun to use for brainstorming. 

  2. At the time of writing this blog includes 343 articles and 2 pages, which took Pelican 440 seconds to generate on my poor little ARM server (on a normal load). 

Monday, 16 March 2020

Install Jitsi-Meet alongside ejabberd

Since the corona virus is forcing many of us into home office, there is a high demand for video conference solutions. A popular free and open source tool for creating video conferences, similar to Google’s Hangouts, is Jitsi Meet. It enables you to create a conference room from within your browser, for which you can then share a link with your coworkers. No client software is needed at all (except on mobile devices).

The installation of Jitsi Meet is super straight forward – if you have a dedicated server sitting around. Simply add the jitsi repository to your package manager and (in the case of Debian-based systems) type

sudo apt-get install jitsi-meet

The installer will guide you through most of the process (setting up nginx/apache, installing dependencies, even doing the Let’s Encrypt setup) and in the end you can start video calling! The quick start guide does a better job explaining this than I do.

Jitsi Meet is a suite of different components that all play together (see the Jitsi Meet manual). Part of the mix is a prosody XMPP server that is used for signalling. That means if you want to have the simple easy setup experience, your server must not already run another XMPP server. Otherwise you’ll have some manual configuration ahead of you.

I did that.

Since I already run a personal ejabberd XMPP server and don’t have any virtualization tools at hand, I wanted to make jitsi-meet use ejabberd instead of prosody. In the end both should be equally suited for the job.

Looking at the configuration file that comes with Jitsi’s bundled prosody, we can see that Jitsi Meet requires the XMPP server to serve two different virtual hosts.
The file is located under /etc/prosody/conf.d/

VirtualHost ""
        authentication = "anonymous"
        ssl = {
        modules_enabled = {
        c2s_require_encryption = false

Component "" "muc"
    storage = "memory"
admins = { "" }

Component ""
    component_secret = "SECRET1"

VirtualHost ""
    ssl = {
    authentication = "internal_plain"

Component ""
    component_secret = "SECRET2"

Remember to replace SECRET1 and SECRET2 with secure secrets! There are also some external components that need to be configured. This is where Jitsi Meet plugs into the XMPP server.

In my case I don’t want to serve 3 virtual hosts with my ejabberd, so I decided to replace with my already existing main domain, which already uses internal authentication. So all I had to do was add the virtual host to my ejabberd.yml.
The ejabberd config file is located under /etc/ejabberd/ejabberd.yml or /opt/ejabberd/conf/ejabberd.yml depending on your ejabberd distribution.

    ## serves as main host, as well as for focus user
  - ""
    ## serves as anonymous authentication host for
  - ""

The syntax for external components is quite different for ejabberd than it is for prosody, so it took me some time to get it working.

    port: 5275
    ip: "::"
    module: ejabberd_service
    access: all
    shaper: fast
        password: "SECRET1"
    port: 5280
    ip: "::"
    module: ejabberd_http
      "/http-bind": mod_bosh
    tls: true
    protocol_options: 'TLS_OPTIONS'
    port: 5347
    module: ejabberd_service
        password: "SECRET2"
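For reference, ejabberd binds an external component’s name to its secret through a hosts: mapping inside the ejabberd_service listener — a full entry has roughly this shape (the component host name here is hypothetical):

```yaml
listen:
  - port: 5275
    ip: "::"
    module: ejabberd_service
    access: all
    shaper: fast
    hosts:
      "focus.meet.example.org":
        password: "SECRET1"
```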

By re-reading the config files now, I wonder why I ended up placing the focus component under the host and not, but hey – it works and I’m too scared to touch it again 😛

The configuration of the modules was a bit trickier on ejabberd, as the ejabberd config syntax seems to disallow duplicate entries. I ended up moving everything from the existing main modules: block into a separate host_config: for my existing domain. That way I could separate the configuration of my main domain from the config of the meet subdomain.

  ## Already existing vhost.
    s2s_access: s2s
    ## former main modules block, now further indented
      mod_adhoc: {}
      mod_admin_extra: {}

  ## New meeting host with anonymous authentication and no s2s
    ## Disable s2s to prevent spam
    s2s_access: none
    auth_method: anonymous
    allow_multiple_connections: true
    anonymous_protocol: both
      mod_bosh: {}
      mod_disco: {}
        host: "conference.@HOST@"
        access: all
        access_create: local
        access_persistent: local
        access_admin: admin
      mod_muc_admin: {}
      mod_ping: {}
        access_createnode: local

As you can see I only enabled required modules for the service and even disabled s2s to prevent the anonymous Jitsi Meet users from contacting users on other servers.

Last but not least we have to add the focus user as an admin and also generate (not discussed here) and add certificates for the subdomain. This step is not necessary if the meet domain is already covered by the certificate in use.

  - ...
  - "/etc/ssl/"
  - "/etc/ssl/"
  - "/etc/ssl/"
      - ""

That’s it for the ejabberd configuration. Now we have to configure the other Jitsi Meet components. Let’s start with jicofo, the Jitsi Conference Focus component.

My /etc/jitsi/jicofo/config file looks as follows.
# Below can be left as is.

Respectively the videobridge configuration (/etc/jitsi/videobridge/config) looks like this:
## Leave below as originally was

Some changes had to be made to /etc/jitsi/videobridge/* – the ice4j NAT harvester needs the server’s addresses:

org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=<LOCAL-IP-OF-YOUR-SERVER>
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=<PUBLIC-IP-OF-YOUR-SERVER>

Now we can wire it all together by modifying the Jitsi Meet config file found under /etc/jitsi/meet/

var config = {
    hosts: {
        domain: '',
        anonymousdomain: '',
        authdomain: '',
        bridge: '',
        focus: '',
        muc: ''
    bosh: '//',
    clientNode: '',
    focusUserJid: '',

    testing: {

Finally of course, I also had to register the focus user as an XMPP account:

ejabberdctl register focus <YOUR-MAIN-DOMAIN> SECRET3

(ejabberdctl register expects user, host and password as its arguments.)

Remember to use a safe password instead of SECRET3 and also stop and disable the bundled prosody! That’s it!

I hope this lowers the bar for some to deploy Jitsi Meet next to their already existing ejabberd. Lastly please do not ask me for support, as I barely managed to get this working for myself 😛

Update (11.04.2020)

With feedback from Holger I reworked my ejabberd config and disabled s2s for the meet vhost, see above.

Someone also pointed out that it may be a good idea to substitute prosody with a dummy package to save disk space and reduce possible attack surface.

Sunday, 15 March 2020

restic with minio

restic is a fast, secure & efficient backup program.

I wanted to test restic for some time now. It is a backup solution written in Go, I would say similar to rclone, but with a unique/different design. I prefer an isolated clean environment when testing software, so I usually go with a VM. In this case I installed elementary OS v5.1, an Ubuntu LTS based distro focused on user experience. As the backup storage solution, I used MinIO, an S3 compatible object storage, on the same VM. So here are my notes on restic, and at the end of this article you will find how I set up minio.

Be aware this is a technical post!


Most probably your distro's package manager already has restic in its repositories.

On Arch:

pacman -S restic

On Debian/Ubuntu:

apt -y install restic

download latest version

But just in case you want to install the latest binary version, you can use this command

curl -sLo - $(curl -sL | jq -r '.assets[].browser_download_url | select( contains("linux_amd64"))') \
  | bunzip2 - | sudo tee /usr/local/bin/restic > /dev/null

sudo chmod +x /usr/local/bin/restic

or if you are already root

curl -sLo - $(curl -sL | jq -r '.assets[].browser_download_url | select( contains("linux_amd64"))') \
  | bunzip2 - > /usr/local/bin/restic

chmod +x /usr/local/bin/restic

we can see the latest version

$ restic version
restic 0.9.6 compiled with go1.13.4 on linux/amd64


Enable autocompletion

sudo restic generate --bash-completion /etc/bash_completion.d/restic

restart your shell.

Prepare your repo

We need to prepare our destination repository. This is our backup endpoint. restic can save multiple snapshots for multiple hosts on the same endpoint (repo).

Apart from the files stored within the keys directory, all files are encrypted with AES-256 in counter mode (CTR). The integrity of the encrypted data is secured by a Poly1305-AES message authentication code (sometimes also referred to as a “signature”).

To access a restic repo, we need a key. We will use this key as password (or passphrase) and it is really important NOT to lose this key.

For automated backups (or scripts) we can use our shell's environment variables to export the password. It is best to export the password through a script, or even better through a password file.

export -p RESTIC_PASSWORD=<our key>
export -p RESTIC_PASSWORD_FILE=<full path of 0400 file>


export -p RESTIC_PASSWORD=55C9225pXNK3s3f7624un
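For the password-file variant, a minimal sketch (the file name and location are assumptions) could look like this:

```shell
# store the repository password in a file only its owner may read
echo '55C9225pXNK3s3f7624un' > restic.pass
chmod 0400 restic.pass
export -p RESTIC_PASSWORD_FILE="$PWD/restic.pass"
```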

We can also declare the restic repository through an environment variable

export -p RESTIC_REPOSITORY=<our repo>

Local Repo

An example of local backup repo should be something like this:

$ cat restic.local.conf
export -p RESTIC_PASSWORD=55C9225pXNK3s3f7624un
export -p RESTIC_REPOSITORY="/mnt/backup/"

minio S3

We are going to use minio as an S3 object storage, so we need to export the Access & Secret Key in a similar way as for Amazon S3.

export -p AWS_ACCESS_KEY_ID=minioadmin
export -p AWS_SECRET_ACCESS_KEY=minioadmin

The S3 endpoint is http://localhost:9000/demo so a full example should be:

$ cat restic.S3.conf

export -p AWS_ACCESS_KEY_ID=minioadmin
export -p AWS_SECRET_ACCESS_KEY=minioadmin

export -p RESTIC_PASSWORD=55C9225pXNK3s3f7624un
export -p RESTIC_REPOSITORY="s3:http://localhost:9000/demo"

source the config file into your shell:

source restic.S3.conf

Initialize Repo

We are ready to initialise the remote repo

$ restic init
created restic repository f968b51633 at s3:http://localhost:9000/demo

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.

Be careful: if you are asked to type a password, that means you did not use a shell environment variable to export one. That is fine, but only if that was your intent. In that case you will see something like this:

$ restic init

enter password for new repository: <type your password here>
enter password again: <type your password here, again>

created restic repository ea97171d56 at s3:http://localhost:9000/demo

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.


We are ready to take our first snapshot.

$ restic -v backup /home/ebal/

open repository
repository c8d9898b opened successfully, password is correct
created new cache in /home/ebal/.cache/restic
lock repository
load index files
start scan on [/home/ebal/]
start backup on [/home/ebal/]
scan finished in 0.567s: 2295 files, 307.823 MiB

Files:        2295 new,     0 changed,     0 unmodified
Dirs:            1 new,     0 changed,     0 unmodified
Data Blobs:   2383 new
Tree Blobs:      2 new
Added to the repo: 263.685 MiB

processed 2295 files, 307.823 MiB in 0:28
snapshot 33e8ae0d saved

You can exclude or include files with restic, but I will not get into this right now.
For more info, read Restic Documentation
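As a tiny illustration of excludes (the file name and patterns here are just assumptions):

```shell
# hypothetical exclude patterns, one per line
cat > excludes.txt <<'EOF'
.cache
*.tmp
EOF
# then:  restic backup /home/ebal/ --exclude-file=excludes.txt
```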

standard input

restic can also take its backup data from standard input:

mysqldump --all-databases -uroot -ppassword | xz - | restic backup --stdin --stdin-filename mysqldump.sql.xz


Check the repo

$ restic -v check

using temporary cache in /tmp/restic-check-cache-528400534
repository c8d9898b opened successfully, password is correct
created new cache in /tmp/restic-check-cache-528400534
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
no errors were found

Take another snapshot

$ restic -v backup /home/ebal/ --one-file-system  --cleanup-cache

open repository
repository c8d9898b opened successfully, password is correct
lock repository
load index files
using parent snapshot 33e8ae0d
start scan on [/home/ebal/]
start backup on [/home/ebal/]
scan finished in 0.389s: 2295 files, 307.824 MiB

Files:           0 new,     4 changed,  2291 unmodified
Dirs:            0 new,     1 changed,     0 unmodified
Data Blobs:      4 new
Tree Blobs:      2 new
Added to the repo: 154.549 KiB

processed 2295 files, 307.824 MiB in 0:01
snapshot 280468f6 saved

List snapshots

$ restic -v snapshots

repository c8d9898b opened successfully, password is correct
ID        Time                 Host        Tags        Paths
6988dda7  2020-03-14 23:32:55  elementary              /etc
33e8ae0d  2020-03-15 21:05:55  elementary              /home/ebal
280468f6  2020-03-15 21:08:38  elementary              /home/ebal
3 snapshots

Remove snapshot

as you can see, there is one older snapshot (of /etc) from before I backed up my home dir, and I want to remove it

$ restic -v forget 6988dda7

repository c8d9898b opened successfully, password is correct
removed snapshot 6988dda7

list again

$ restic -v snapshots

repository c8d9898b opened successfully, password is correct
ID        Time                 Host        Tags        Paths
33e8ae0d  2020-03-15 21:05:55  elementary              /home/ebal
280468f6  2020-03-15 21:08:38  elementary              /home/ebal
2 snapshots

Compare snapshots

$ restic -v diff 33e8ae0d 280468f6

repository c8d9898b opened successfully, password is correct
comparing snapshot 33e8ae0d to 280468f6:

M    /home/ebal/.config/dconf/user
M    /home/ebal/.mozilla/firefox/pw9z9f9z.default-release/SiteSecurityServiceState.txt
M    /home/ebal/.mozilla/firefox/pw9z9f9z.default-release/datareporting/aborted-session-ping
M    /home/ebal/.mozilla/firefox/pw9z9f9z.default-release/storage/default/moz-extension+++62b23386-279d-4791-8ae7-66ab3d69d07d^userContextId=4294967295/idb/3647222921wleabcEoxlt-eengsairo.sqlite

Files:           0 new,     0 removed,     4 changed
Dirs:            0 new,     0 removed
Others:          0 new,     0 removed
Data Blobs:      4 new,     4 removed
Tree Blobs:     14 new,    14 removed
  Added:   199.385 KiB
  Removed: 197.990 KiB

Mount a snapshot

$ mkdir -p backup

$ restic -v mount backup/

repository c8d9898b opened successfully, password is correct
Now serving the repository at backup/
When finished, quit with Ctrl-c or umount the mountpoint.

open another terminal

$ cd backup/

$ ls -l
total 0
dr-xr-xr-x 1 ebal ebal 0 Μαρ  15 21:12 hosts
dr-xr-xr-x 1 ebal ebal 0 Μαρ  15 21:12 ids
dr-xr-xr-x 1 ebal ebal 0 Μαρ  15 21:12 snapshots
dr-xr-xr-x 1 ebal ebal 0 Μαρ  15 21:12 tags

$ ls -l hosts/
total 0
dr-xr-xr-x 1 ebal ebal 0 Μαρ  15 21:12 elementary

$ ls -l snapshots/
total 0
dr-xr-xr-x 3 ebal ebal 0 Μαρ  15 21:05 2020-03-15T21:05:55+02:00
dr-xr-xr-x 3 ebal ebal 0 Μαρ  15 21:08 2020-03-15T21:08:38+02:00
lrwxrwxrwx 1 ebal ebal 0 Μαρ  15 21:08 latest -> 2020-03-15T21:08:38+02:00

$ ls -l tags
total 0

So as we can see, snapshots are based on time.

$ du -sh snapshots/*

309M  snapshots/2020-03-15T21:05:55+02:00
309M  snapshots/2020-03-15T21:08:38+02:00
0     snapshots/latest

Be aware that as long as the restic backup is mounted, there is a lock on the repo.
Do NOT forget to close the mount point when finished.

When finished, quit with Ctrl-c or umount the mountpoint.
  signal interrupt received, cleaning up

Check again

after unmounting, you may want to re-check the repo to verify that no stale lock remains

$ restic check

using temporary cache in /tmp/restic-check-cache-524606775
repository c8d9898b opened successfully, password is correct
created new cache in /tmp/restic-check-cache-524606775
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
no errors were found

Restore a snapshot

Identify which snapshot you want to restore

$ restic snapshots

repository c8d9898b opened successfully, password is correct
ID        Time                 Host        Tags        Paths
33e8ae0d  2020-03-15 21:05:55  elementary              /home/ebal
280468f6  2020-03-15 21:08:38  elementary              /home/ebal
2 snapshots

create a folder and restore the snapshot

$ mkdir -p restore
$ restic -v restore 280468f6 --target restore/

repository c8d9898b opened successfully, password is correct
restoring <Snapshot 280468f6 of [/home/ebal] at 2020-03-15 21:08:38.10445053 +0200 EET by ebal@elementary> to restore/
$ ls -l restore/
total 4
drwxr-xr-x 3 ebal ebal 4096 Μαρ  14 13:56 home

$ ls -l restore/home/
total 4
drwxr-xr-x 17 ebal ebal 4096 Μαρ  15 20:13 ebal

$ du -sh restore/home/ebal/
287M  restore/home/ebal/

List files from snapshot

$ restic -v ls 280468f6 | head
snapshot 280468f6 of [/home/ebal] filtered by [] at 2020-03-15 21:08:38.10445053 +0200 EET):



List keys

$ restic key list

repository ea97171d opened successfully, password is correct
 ID        User  Host        Created
*8c112442  ebal  elementary  2020-03-14 23:22:49

restic rotate snapshot policy

a few more words about forget

The forget command can keep snapshots according to a policy, where the policy can be based on:

  • number of snapshots
  • hourly
  • daily
  • weekly
  • monthly
  • yearly

and that makes restic with a local repo an ideal replacement for rsnapshot!

$ restic help forget

The "forget" command removes snapshots according to a policy. Please note that
this command really only deletes the snapshot object in the repository, which
is a reference to data stored there. In order to remove this (now unreferenced)
data after 'forget' was run successfully, see the 'prune' command.

  -l, --keep-last n            keep the last n snapshots
  -H, --keep-hourly n          keep the last n hourly snapshots
  -d, --keep-daily n           keep the last n daily snapshots
  -w, --keep-weekly n          keep the last n weekly snapshots
  -m, --keep-monthly n         keep the last n monthly snapshots
  -y, --keep-yearly n          keep the last n yearly snapshots
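For example, these flags could be combined in a small rotation script run from cron. This is only a sketch; the config path, backup path, and retention numbers are all assumptions:

```shell
# write a nightly backup + rotation script (paths and numbers are assumptions)
cat > restic-rotate.sh <<'EOF'
#!/bin/sh
# load RESTIC_REPOSITORY / RESTIC_PASSWORD etc.
. /root/restic.S3.conf
restic backup /home/ebal/ --one-file-system
# keep 7 daily, 5 weekly, 12 monthly snapshots, forget the rest ...
restic forget --keep-daily 7 --keep-weekly 5 --keep-monthly 12
# ... and actually delete the now-unreferenced data
restic prune
EOF
chmod +x restic-rotate.sh
```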

Appendix - minio

MinIO is an S3 compatible object storage server.

install server

sudo curl -sLo /usr/local/bin/minio \

sudo chmod +x /usr/local/bin/minio

minio --version
minio version RELEASE.2020-03-14T02-21-58Z

run server

minio server ./data
AccessKey: minioadmin
SecretKey: minioadmin

Browser Access:

Command-line Access:
   $ mc config host add myminio minioadmin minioadmin

Object API (Amazon S3 compatible):
Detected default credentials 'minioadmin:minioadmin',
please change the credentials immediately using 'MINIO_ACCESS_KEY' and 'MINIO_SECRET_KEY'
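As the server itself warns, the default credentials should be changed before exposing MinIO anywhere. A sketch, where the values are placeholders:

```shell
# write hypothetical credentials to an env file (placeholder values!)
cat > minio.env <<'EOF'
export MINIO_ACCESS_KEY=backup-admin
export MINIO_SECRET_KEY=REPLACE_WITH_A_LONG_RANDOM_SECRET
EOF
# source it before starting the server:
#   . ./minio.env && minio server ./data
```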


create demo bucket

I created the demo bucket through the MinIO browser UI (the screenshots are not reproduced here). Once the mc client below is configured, the same can be done on the command line with: mc mb myminio/demo

install client

sudo curl -sLo /usr/local/bin/mc

sudo chmod +x /usr/local/bin/mc

mc -v
mc version RELEASE.2020-03-14T01-23-37Z

configure client

mc config host add myminio minioadmin minioadmin

run mc client

$ mc ls myminio
[2020-03-14 19:01:25 EET]      0B demo/

$ mc tree myminio/demo

mc autocompletion

mc --autocompletion

you need to restart your shell.

$ mc ls myminio/demo/

[2020-03-15 21:03:15 EET]    155B config
[2020-03-15 21:34:13 EET]      0B data/
[2020-03-15 21:34:13 EET]      0B index/
[2020-03-15 21:34:13 EET]      0B keys/
[2020-03-15 21:34:13 EET]      0B snapshots/

That’s It!

Tag(s): restic, minio

20.04 releases branches created

Make sure you commit anything you want to end up in the 20.04 releases to those branches.

We're already past the dependency freeze.

The Feature Freeze and Beta is this Thursday 19 of March.

More interesting dates:

  • April 2: 20.04 RC (20.03.90) Tagging and Release
  • April 16: 20.04 Tagging
  • April 23: 20.04 Release
