Free Software, Free Society!
Thoughts of the FSFE Community (English)

Wednesday, 21 July 2021

wireguard

WireGuard: fast, modern, secure VPN tunnel. WireGuard securely encapsulates IP packets over UDP.

Goal

What I would like to achieve in this article is to provide a comprehensive guide for a redirect-gateway VPN using wireguard, with a twist. The client machine should reach the internet only through the wireguard VPN server. No other communication should be allowed from the client, which means that if we drop the VPN connection, the client cannot reach the internet at all.

wireguard.png

Intro - Lab Details

Here are my lab details. This blog post will help you understand all the necessary steps and provide you with a guide to replicate the setup. You should be able to create a wireguard VPN server-client setup between two points. I will be using ubuntu 20.04 as the base image for both virtual machines. I am also using LibVirt and Qemu/KVM running on my archlinux host.

wireguard_vms.png

Wireguard Generate Keys

and the importance of them!

Before we begin, give me a moment to explain, at a high level, how the encryption between these two machines works.

Each linux machine creates a pair of keys:

  • Private Key
  • Public Key

These keys have a unique relationship. You can use your public key to encrypt something, but only your private key can decrypt it. That means you can share your public key via a cleartext channel (internet, email, slack). The other party can use your public key to encrypt traffic towards you, but no one else can decrypt it. If a malicious actor replaces the public keys, you cannot decrypt any incoming traffic, which makes it impossible to connect to the VPN server!

wireguard_keys.png

Public - Private Keys

Now each party has the other's public key and they can encrypt traffic for each other, BUT only you can decrypt your own traffic with your private key. Your private key never leaves your computer.

wireguard_traffic.png

Hopefully you get the idea.
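If you want to see this relationship in practice before we start, here is a tiny sketch you can run on any machine with wireguard-tools installed (the variable names are mine, just for illustration):

# generate a private key and derive its matching public key from it
PRIVATE=$(wg genkey)
PUBLIC=$(echo "${PRIVATE}" | wg pubkey)
# the public key is safe to share, the private key never leaves this machine
echo "public key : ${PUBLIC}"
echo "private key: ${PRIVATE}"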

Wireguard Server - Ubuntu 20.04 LTS

In order to make the guide easy to follow, I will start with the server part.

Server Key Generation

Generate a random private and public key for the server, as root:

wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey

Make the keys read-only

chmod 0400 /etc/wireguard/*key

List keys

ls -l /etc/wireguard/
total 8
-r-------- 1 root root 45 Jul 12 20:29 privatekey
-r-------- 1 root root 45 Jul 12 20:29 publickey

UDP Port & firewall

In order to run a VPN service, you need to choose a UDP port for the wireguard VPN server to listen on.

eg.

61194
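If you would rather pick a genuinely random high port, something like the following can choose one for you (just a convenience sketch; whatever number you end up with, use it consistently in all the steps below):

# pick one random unprivileged UDP port
shuf -i 1025-65535 -n 1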

open firewall

ufw allow 61194/udp
Rule added
Rule added (v6)

view more info

ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To             Action      From
--             ------      ----
22/tcp         ALLOW IN    Anywhere
61194          ALLOW IN    Anywhere
22/tcp (v6)    ALLOW IN    Anywhere (v6)
61194 (v6)     ALLOW IN    Anywhere (v6)

So you see, only SSH and our VPN ports are open.
Default Incoming Policy is: DENY.

Wireguard Server Configuration File

Now it is time to create a basic configuration wireguard-server file.

Naming the wireguard interface

Clarification: the name of our interface does not matter; you can choose any name. The interface gets its name from the configuration filename! But in order to follow the majority of guides, try to use something matching wg+. For me it is easier to name the wireguard interface:

  • wg0

but you may also see it as

  • wg

without a number. All good!

Making a shell script for wireguard server configuration file

Running the below script will create a new configuration file /etc/wireguard/wg0.conf

PRIVATE_KEY=$(cat /etc/wireguard/privatekey)
NETWORK=10.0.8

cat > /etc/wireguard/wg0.conf <<EOF
[Interface]
Address = ${NETWORK}.1/24
ListenPort = 61194
PrivateKey = ${PRIVATE_KEY}

EOF

ls -l /etc/wireguard/wg0.conf
cat   /etc/wireguard/wg0.conf

Note: I have chosen the network 10.0.8.0/24 for my VPN setup; you can choose any private network:

10.0.0.0/8     - Class A
172.16.0.0/12  - Class B
192.168.0.0/16 - Class C

I chose a /24 network (Class C size, 256 IPs) from within the 10.0.0.0/8 (Class A) private range, so do not be confused about this. It's just a /24 private network.

Let’s make our first test with the server

wg-quick up wg0
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.0.8.1/24 dev wg0
[#] ip link set mtu 1420 up dev wg0

It’s Alive!

Kernel Module

Verify that wireguard is loaded as a kernel module on your system.

lsmod | grep wireguard
wireguard             212992  0
ip6_udp_tunnel         16384  1 wireguard
udp_tunnel             16384  1 wireguard

WireGuard has been in the Linux kernel since March 2020.

Show IP address

ip address show dev wg0
3: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420
qdisc noqueue state UNKNOWN group default qlen 1000
    link/none
    inet 10.0.8.1/24 scope global wg0
       valid_lft forever preferred_lft forever

Listening Connections

Verify that wireguard server listens to our specific UDP port

ss -nulp '( sport = :61194 )' | column -t
State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port  Process
UNCONN  0       0           0.0.0.0:61194       0.0.0.0:*
UNCONN  0       0              [::]:61194          [::]:*

Show wg0

wg show wg0
interface: wg0
  public key: <public key>
  private key: (hidden)
  listening port: 61194

Show Config

wg showconf wg0
[Interface]
ListenPort = 61194
PrivateKey = <private key>

Close wireguard

wg-quick down wg0

What is IP forwarding

In order for our VPN server to forward our client's network traffic from the internal interface wg0 to its external interface and, on top of that, to separate its own traffic from the client's traffic, the VPN server needs to masquerade all of the client's traffic in a way that it knows where to forward and send back traffic to the client. In a nutshell this is IP forwarding, and perhaps the below diagram can explain it a little better.

wireguard_forwarding.png

To do that, we need to enable the IP forward feature on our linux kernel.

sysctl -w net.ipv4.ip_forward=1

The above command does not persist the change across reboots. To persist this setting we need to add it to the sysctl configuration. And although many guides set this option globally, we do not have to do that!
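For reference, the global approach that we are skipping would be a drop-in file under /etc/sysctl.d, roughly like the sketch below (the filename is my own choice, shown only for comparison). We will instead let wireguard toggle the setting for us.

# example only: persist IP forwarding globally (we will NOT do this)
cat > /etc/sysctl.d/99-ipforward.conf <<EOF
net.ipv4.ip_forward = 1
EOF
sysctl --system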

Wireguard provides four (4) stages at which we can run our own scripts.

Wireguard stages

  • PreUp
  • PostUp
  • PreDown
  • PostDown

So we can choose to enable IP forwarding at wireguard up and disable it on wireguard down.

IP forwarding in wireguard configuration file

PRIVATE_KEY=$(cat /etc/wireguard/privatekey)
NETWORK=10.0.8

cat > /etc/wireguard/wg0.conf <<EOF

[Interface]
Address    = ${NETWORK}.1/24
ListenPort = 61194
PrivateKey = ${PRIVATE_KEY}

PreUp    = sysctl -w net.ipv4.ip_forward=1
PostDown = sysctl -w net.ipv4.ip_forward=0
EOF

verify above configuration

wg-quick up wg0
[#] sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.0.8.1/24 dev wg0
[#] ip link set mtu 1420 up dev wg0

Verify system control setting for IP forward:

sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

Close wireguard

wg-quick down wg0
[#] wg showconf wg0
[#] ip link delete dev wg0
[#] sysctl -w net.ipv4.ip_forward=0
net.ipv4.ip_forward = 0

Verify that IP forwarding is now disabled:

sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0

Routing via firewall rules

Next on our agenda is to add some additional rules to our firewall. We already enabled the IP forward feature above and now it is time to update our firewall so it can masquerade our client's network traffic.

Default Route

We need to identify the default external network interface on the ubuntu server.

DEF_ROUTE=$(ip -4 -json route list default | jq  -r .[0].dev)

usually it is eth0, ens3 or something similar.

firewall rules

  1. We need to forward traffic from the internal interface to the external interface
  2. We need to masquerade the traffic of the client.

iptables -A FORWARD -i %i -o ${DEF_ROUTE} -s ${NETWORK}.0/24 -j ACCEPT

iptables -t nat -A POSTROUTING -s ${NETWORK}.0/24 -o ${DEF_ROUTE} -j MASQUERADE

and we also need to drop these rules when we stop the wireguard service

iptables -D FORWARD -i %i -o ${DEF_ROUTE} -s ${NETWORK}.0/24 -j ACCEPT

iptables -t nat -D POSTROUTING -s ${NETWORK}.0/24 -o ${DEF_ROUTE} -j MASQUERADE

Note the -D (delete) after iptables.

iptables is a CLI (command line interface) for netfilter. So think of the above rules as filters on our network traffic.
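If you want to double-check these filters later, while the interface is up, you can list them with the standard iptables listing commands (nothing wireguard specific here):

# list the FORWARD rules and the NAT masquerade rule, with packet counters
iptables -L FORWARD -v -n
iptables -t nat -L POSTROUTING -v -n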

WG Server Conf

To put everything together, we regenerate the configuration, this time using wireguard's PostUp and PreDown hooks to apply and remove the firewall rules:

wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey

chmod 0400 /etc/wireguard/*key

PRIVATE_KEY=$(cat /etc/wireguard/privatekey)
NETWORK=10.0.8
DEF_ROUTE=$(ip -4 -json route list default | jq  -r .[0].dev)
WG_PORT=61194

cat > /etc/wireguard/wg0.conf <<EOF
[Interface]
Address    = ${NETWORK}.1/24
ListenPort = ${WG_PORT}
PrivateKey = ${PRIVATE_KEY}

PreUp    = sysctl -w net.ipv4.ip_forward=1
PostDown = sysctl -w net.ipv4.ip_forward=0

PostUp  = iptables -A FORWARD -i %i -o ${DEF_ROUTE} -s ${NETWORK}.0/24 -j ACCEPT ; iptables -t nat -A POSTROUTING -s ${NETWORK}.0/24 -o ${DEF_ROUTE} -j MASQUERADE
PreDown = iptables -D FORWARD -i %i -o ${DEF_ROUTE} -s ${NETWORK}.0/24 -j ACCEPT ; iptables -t nat -D POSTROUTING -s ${NETWORK}.0/24 -o ${DEF_ROUTE} -j MASQUERADE

EOF

testing up

wg-quick up wg0
[#] sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.0.8.1/24 dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] iptables -A FORWARD -i wg0 -o ens3 -s 10.0.8.0/24 -j ACCEPT ; iptables -t nat -A POSTROUTING -s 10.0.8.0/24 -o ens3 -j MASQUERADE

testing down

wg-quick down wg0
[#] iptables -D FORWARD -i wg0 -o ens3 -s 10.0.8.0/24 -j ACCEPT ; iptables -t nat -D POSTROUTING -s 10.0.8.0/24 -o ens3 -j MASQUERADE
[#] ip link delete dev wg0
[#] sysctl -w net.ipv4.ip_forward=0
net.ipv4.ip_forward = 0

Start at boot time

systemctl enable wg-quick@wg0

and status

systemctl status wg-quick@wg0

Get the wireguard server public key

We have NOT finished yet with the server !

Also we need the output of:

cat /etc/wireguard/publickey

output should be something like this:

wr3jUAs2qdQ1Oaxbs1aA0qrNjogY6uqDpLop54WtQI4=

Wireguard Client - Ubuntu 20.04 LTS

Now we need to move to our client virtual machine.
It is a lot easier, but similar to the commands above.

Client Key Generation

Generate a random private and public key for the client

wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey

Make the keys read-only

chmod 0400 /etc/wireguard/*key

Wireguard Client Configuration File

We need to replace <wireguard server public key> with the output of the above command.

The configuration is similar to, but simpler than, the server's:

PRIVATE_KEY=$(cat /etc/wireguard/privatekey)
NETWORK=10.0.8
WG_PORT=61194
WGS_IP=$(ip -4 -json route list default | jq -r .[0].prefsrc)
WSG_PUBLIC_KEY="<wireguard server public key>"

cat > /etc/wireguard/wg0.conf <<EOF
[Interface]
Address    = ${NETWORK}.2/24
PrivateKey = ${PRIVATE_KEY}

[Peer]
PublicKey  = ${WSG_PUBLIC_KEY}
Endpoint   = ${WGS_IP}:${WG_PORT}
AllowedIPs = 0.0.0.0/0

EOF

verify

wg-quick up wg0
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.0.8.2/24 dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] wg set wg0 fwmark 51820
[#] ip -4 route add 0.0.0.0/0 dev wg0 table 51820
[#] ip -4 rule add not fwmark 51820 table 51820
[#] ip -4 rule add table main suppress_prefixlength 0
[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
[#] iptables-restore -n

start client at boot

systemctl enable wg-quick@wg0

Get the wireguard client public key

We have finished with the client, but we need the output of:

cat /etc/wireguard/publickey

Wireguard Server - Peers

As we mentioned above, we need to exchange the public keys of server & client between the two machines in order to encrypt network traffic.

So once we have the client's public key, run the below script on the server:

WSC_PUBLIC_KEY="<wireguard client public key>"
NETWORK=10.0.8

wg set wg0 peer ${WSC_PUBLIC_KEY} allowed-ips ${NETWORK}.2
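Optionally, confirm that the peer was registered on the running interface (assuming wg0 is up):

wg show wg0 peers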

After that we can verify that our wireguard server can reach the wireguard client.

wireguard server ping to client

$ ping -c 5 10.0.8.2

PING 10.0.8.2 (10.0.8.2) 56(84) bytes of data.
64 bytes from 10.0.8.2: icmp_seq=1 ttl=64 time=0.714 ms
64 bytes from 10.0.8.2: icmp_seq=2 ttl=64 time=0.456 ms
64 bytes from 10.0.8.2: icmp_seq=3 ttl=64 time=0.557 ms
64 bytes from 10.0.8.2: icmp_seq=4 ttl=64 time=0.620 ms
64 bytes from 10.0.8.2: icmp_seq=5 ttl=64 time=0.563 ms

--- 10.0.8.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4103ms
rtt min/avg/max/mdev = 0.456/0.582/0.714/0.084 ms

wireguard client ping to server

$ ping -c 5 10.0.8.1

PING 10.0.8.1 (10.0.8.1) 56(84) bytes of data.
64 bytes from 10.0.8.1: icmp_seq=1 ttl=64 time=0.752 ms
64 bytes from 10.0.8.1: icmp_seq=2 ttl=64 time=0.639 ms
64 bytes from 10.0.8.1: icmp_seq=3 ttl=64 time=0.622 ms
64 bytes from 10.0.8.1: icmp_seq=4 ttl=64 time=0.625 ms
64 bytes from 10.0.8.1: icmp_seq=5 ttl=64 time=0.597 ms

--- 10.0.8.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4093ms
rtt min/avg/max/mdev = 0.597/0.647/0.752/0.054 ms

wireguard server - Peers at configuration file

Now the final thing is to update our wireguard server with the client Peer (or peers).

wg showconf wg0

the output should be something like that:

[Interface]
ListenPort = 61194
PrivateKey = <server: private key>

[Peer]
PublicKey = <client: public key>
AllowedIPs = 10.0.8.2/32
Endpoint = 192.168.122.68:52587

We need to append the peer section to our configuration file.

so /etc/wireguard/wg0.conf should look like this:

[Interface]
Address    = 10.0.8.1/24
ListenPort = 61194
PrivateKey = <server: private key>

PreUp    = sysctl -w net.ipv4.ip_forward=1
PostDown = sysctl -w net.ipv4.ip_forward=0

PostUp  = iptables -A FORWARD -i %i -o ens3 -s 10.0.8.0/24 -j ACCEPT ; iptables -t nat -A POSTROUTING -s 10.0.8.0/24 -o ens3 -j MASQUERADE
PreDown = iptables -D FORWARD -i %i -o ens3 -s 10.0.8.0/24 -j ACCEPT ; iptables -t nat -D POSTROUTING -s 10.0.8.0/24 -o ens3 -j MASQUERADE

[Peer]
PublicKey = <client: public key>
AllowedIPs = 10.0.8.2/32
Endpoint = 192.168.122.68:52587

SaveConfig

We did not declare this option in either the server's or the client's wireguard configuration file. If set, the configuration file is updated with the current state of the wg0 interface when the interface is brought down. Very useful!

SaveConfig = true
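As a sketch, the option simply goes into the [Interface] section of the configuration file, for example on the server:

[Interface]
Address    = 10.0.8.1/24
ListenPort = 61194
PrivateKey = <server: private key>
SaveConfig = true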

client firewall

It is time to introduce our twist!

If we run mtr or traceroute from our client, we will notice that indeed our network traffic goes through the wireguard VPN server!

               My traceroute  [v0.93]
wgc (10.0.8.2)                        2021-07-21T22:41:44+0300

                          Packets               Pings
 Host                Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. 10.0.8.1         0.0%    88    0.6   0.6   0.4   0.8   0.1
 2. myhomepc         0.0%    88    0.8   0.8   0.5   1.0   0.1
 3. _gateway         0.0%    88    3.8   4.0   3.3   9.6   0.8
 ...

The first entry is:

1. 10.0.8.1  

If we drop our VPN connection:

wg-quick down wg0

we can still reach the internet.

                     My traceroute  [v0.93]
wgc (192.168.122.68)                     2021-07-21T22:45:04+0300

                            Packets               Pings
 Host                   Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. myhomepc             0.0%     1    0.3   0.3   0.3   0.3   0.0
 2. _gateway             0.0%     1    3.3   3.3   3.3   3.3   0.0

This is important because in some use cases we do not want our client to talk to the internet directly or unsupervised.

UFW alternative for the client

So to avoid this issue, we will rewrite our firewall rules.

A simple script to do that is shown below. It sets the default policy to DENY for everything and only accepts incoming ssh traffic and outgoing traffic through the VPN.

DEF_ROUTE=$(ip -4 -json route list default | jq -r .[0].dev)
WGS_IP=192.168.122.69
WG_PORT=61194

## reset
ufw --force reset

## deny everything!

ufw default deny incoming
ufw default deny outgoing
ufw default deny forward

## allow ssh
ufw allow 22/tcp

## enable
ufw --force enable

## allow traffic out to the vpn server
ufw allow out on ${DEF_ROUTE} to ${WGS_IP} port ${WG_PORT}

## allow tunnel traffic out
ufw allow out on wg+
# ufw status verbose

Status: active
Logging: on (low)
Default: deny (incoming), deny (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    Anywhere
22/tcp (v6)                ALLOW IN    Anywhere (v6)             

192.168.122.69 61194       ALLOW OUT   Anywhere on ens3
Anywhere                   ALLOW OUT   Anywhere on wg+
Anywhere (v6)              ALLOW OUT   Anywhere (v6) on wg+

DNS

There is a caveat. Please bear with me.

Usually, and especially in virtual machines, DNS settings come from the local LAN. We can either allow traffic to the local LAN or update our local DNS settings so that name resolution goes only through our VPN:

cat > /etc/systemd/resolved.conf <<EOF
[Resolve]
DNS=88.198.92.222

EOF

systemctl restart systemd-resolved
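You can optionally confirm that systemd-resolved picked up the new DNS server before testing (resolvectl is part of systemd):

resolvectl status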

let’s try this

# ping google.com
ping: google.com: Temporary failure in name resolution

Correct. Now fire up the wireguard VPN client:

wg-quick up wg0

ping -c4 google.com
PING google.com (142.250.185.238) 56(84) bytes of data.
64 bytes from fra16s53-in-f14.1e100.net (142.250.185.238): icmp_seq=1 ttl=115 time=43.5 ms
64 bytes from fra16s53-in-f14.1e100.net (142.250.185.238): icmp_seq=2 ttl=115 time=44.9 ms
64 bytes from fra16s53-in-f14.1e100.net (142.250.185.238): icmp_seq=3 ttl=115 time=43.8 ms
64 bytes from fra16s53-in-f14.1e100.net (142.250.185.238): icmp_seq=4 ttl=115 time=43.0 ms

--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 42.990/43.795/44.923/0.707 ms

mtr shows:

mtr google.com
                My traceroute  [v0.93]
wgc (10.0.8.2)                          2021-07-21T23:01:16+0300

                                        Packets               Pings
 Host                                 Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. 10.0.8.1                           0.0%     2    0.6   0.7   0.6   0.7   0.1
 2. _gateway                           0.0%     1    0.8   0.8   0.8   0.8   0.0
 3. 192.168.1.1                        0.0%     1    3.8   3.8   3.8   3.8   0.0

drop vpn connection

# wg-quick down wg0 

[#] ip -4 rule delete table 51820
[#] ip -4 rule delete table main suppress_prefixlength 0
[#] ip link delete dev wg0
[#] iptables-restore -n
# ping -c4 google.com
ping: google.com: Temporary failure in name resolution

and that is perfect !

No internet access for our client. Only when our vpn connection is up!

WireGuard: fast, modern, secure VPN tunnel

That’s it!

Tag(s): wireguard, vpn

Tuesday, 20 July 2021

Progress on PGPainless Development

Not much time has passed since I last wrote about my progress on the PGPainless library. However, I feel like it's time for an update.

Since the big 0.2.0 release, four further releases, 0.2.1 through 0.2.4, have been published. Taken together, the changes are quite substantial, so let me summarize.

Image of capsule tower in Japan
Photo by Raphael Koh on Unsplash

Modular SOP Implementation

The (in my opinion) most exciting change is that there now is an experimental module of java interfaces that model the Stateless OpenPGP Protocol (SOP). This module, named sop-java, is completely independent from PGPainless and has no external dependencies whatsoever. It's basically a port of Sequoia-PGP's sop crate (which in turn is based around the Stateless OpenPGP Command Line Interface specification) to Java.

Applications that want to execute basic OpenPGP operations can depend on this interface and decide on the concrete implementation later without locking themselves in with one fixed implementation. Remember:

The Database Is a Detail. […] The Web Is a Detail. […] Frameworks Are Details.

Uncle Bob – Clean Architecture

The module sop-java-picocli contains a CLI frontend for sop-java. It uses the picocli library to closely model the Stateless OpenPGP Command Line Interface (SOP-CLI) specification (version 1 for now).

The exciting part is that this module too is independent from PGPainless, but it can be used by any library that implements sop-java.

Next up, the contents of pgpainless-sop drastically changed. While up until recently it contained a fully fledged SOP-CLI application which used pgpainless-core directly, it now no longer contains command line application code, but instead an implementation of sop-java using pgpainless-core. Therefore pgpainless-sop can be used as a drop-in for sop-java, making it the first java-based SOP implementation (to my knowledge).

Lastly, pgpainless-cli brings sop-java-picocli and pgpainless-sop together. The code does little more than to plug pgpainless-sop as SOP backend into the command line application, resulting in a fully functional OpenPGP command line application (basically what pgpainless-sop was up until release 0.2.3, just better :P).

$ ./pgpainless-cli help
Usage: pgpainless-cli [COMMAND]
Commands:
  help          Displays help information about the specified command
  armor         Add ASCII Armor to standard input
  dearmor       Remove ASCII Armor from standard input
  decrypt       Decrypt a message from standard input
  encrypt       Encrypt a message from standard input
  extract-cert  Extract a public key certificate from a secret key from
                  standard input
  generate-key  Generate a secret key
  sign          Create a detached signature on the data from standard input
  verify        Verify a detached signature over the data from standard input
  version       Display version information about the tool

The exciting part about this modular design is that if YOU are working on an OpenPGP library for Java, you don’t need to re-implement a CLI frontend on your own. Instead, you can implement the sop-java interface and benefit from the CLI provided by sop-java-picocli for free.

If you are a library consumer, depending on sop-java instead of pgpainless-core would allow you to swap out PGPainless for another library, should any emerge in the future. It also means that porting your application to other platforms and languages might become easier, thanks to the more or less fixed API provided by the SOP protocol.

Further Changes

There are some more exciting changes worth mentioning.

The whole PGPainless suite can now be built reproducibly!

$ gradle --quiet clean build &> /dev/null && md5sum {.,pgpainless-core,pgpainless-sop,pgpainless-cli,sop-java,sop-java-picocli}/build/libs/*.jar

e7e9f45eb9d74540092920528bb0abf0  ./build/libs/PGPainless-0.2.4.jar
8ab68285202c8a303692c7332d15c2b2  pgpainless-core/build/libs/pgpainless-core-0.2.4.jar
a9c1d7b4a47d5ec66fc65131c14f4848  pgpainless-sop/build/libs/pgpainless-sop-0.2.4.jar
08cfb620a69015190e45d66548b8ea0f  pgpainless-cli/build/libs/pgpainless-cli-0.2.4.jar
e309d5a8d3a9439c6fae1c56150d9d07  sop-java/build/libs/sop-java-0.2.4.jar
9901849535f57f04b615afb06216ae5c  sop-java-picocli/build/libs/sop-java-picocli-0.2.4.jar

It actually was not hard at all to achieve reproducibility. The command line application has a version command, which extracted the current application version by accessing a version.properties file which would be written during the Gradle build.

Unfortunately, Java’s implementation of the Properties class includes a timestamp when writing out the object into a PrintStream. Therefore the result was not reproducible. The fix was to write the file manually, without using a Properties object.

Furthermore, the APIs for decryption/verification were further simplified, following the example of the encryption API. Instead of chained builder subclasses, there now is a single builder class which is used to receive decryption keys and public key certificates etc.

If you need more details about what changed in PGPainless, there now is a changelog file.

Friday, 16 July 2021

LibreDNS DnsOverTLS no ads with systemd-resolved

Below my personal settings -as of today- for LibreDNS using systemd-resolved service for DNS resolution.

sudo vim /etc/systemd/resolved.conf

basic settings

[Resolve]
DNS=116.202.176.26:854#dot.libredns.gr
DNSOverTLS=yes
FallbackDNS=88.198.92.222
Cache=yes

apply

sudo systemctl restart systemd-resolved.service

verify

resolvectl query analytics.google.com

analytics.google.com: 0.0.0.0                  -- link: eth0

-- Information acquired via protocol DNS in 144.7ms.
-- Data is authenticated: no; Data was acquired via local or encrypted transport: yes
-- Data from: network

Explain Settings

DNS setting

DNS=116.202.176.26:854#dot.libredns.gr

We declare the IP of our DoT service. Using : as a separator, we add the no-ads TCP port of DoT, 854. We also need to add our domain at the end, to tell systemd-resolved that this IP should respond to dot.libredns.gr.

Dns Over TLS

DNSOverTLS=yes

The usual setting is yes. In older systemd versions you can also select opportunistic.
As we are using Let's Encrypt, systemd-resolved can not verify (by default) the IP inside the certificate. This type of certificate can verify the domain dot.libredns.gr, but we are querying the IP 116.202.176.26, and a certificate for an IP is another type of certificate that is not free. In order to "fix" this, we added the #dot.libredns.gr in the above setting.
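If you want a quick, plain-TCP sanity check that the DoT endpoint is reachable at all (assuming netcat is installed), something like this is enough:

nc -zv 116.202.176.26 854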

dotlibrednsgr.png

FallBack

Not everything has five nines of uptime, so you may need a fallback DNS to ... fall back to. Be aware this is cleartext traffic, not encrypted!

FallbackDNS=88.198.92.222

Cache

Last but not least, caching your queries can provide you with additional speed when browsing the internet! You already asked this a few seconds ago, why not cache it on your local system?

Cache=yes

to give you an example

resolvectl query analytics.google.com

analytics.google.com: 0.0.0.0                  -- link: eth0

-- Information acquired via protocol DNS in 144.7ms.
-- Data is authenticated: no; Data was acquired via local or encrypted transport: yes
-- Data from: network

second time:

resolvectl query analytics.google.com
analytics.google.com: 0.0.0.0                  -- link: eth0

-- Information acquired via protocol DNS in 2.3ms.
-- Data is authenticated: no; Data was acquired via local or encrypted transport: yes
-- Data from: cache
Tag(s): LibreDNS, systemd, DoT

Saturday, 10 July 2021

KDE Gear 21.08 releases branches created

Make sure you commit anything you want to end up in the 21.08 releases to them

We're already past the dependency freeze.

The Feature Freeze and Beta is this Thursday 15 of July.

More interesting dates:
  • July 29: 21.08 RC (21.07.90) Tagging and Release
  • August  5: 21.08 Tagging
  • August 12: 21.08 Release

https://community.kde.org/Schedules/release_service/21.08_Release_Schedule

offlineimap - unicode decode errors

My main system is currently running Ubuntu 21.04. For e-mail I'm relying on neomutt together with offlineimap, which both are amazing tools. Recently offlineimap was updated/moved to offlineimap3. Looking on my system, offlineimap reports itself as OfflineIMAP 7.3.0 and dpkg tells me it is version 0.0~git20210218.76c7a72+dfsg-1.

Unicode Decode Error problem

Today I noticed several errors in my offlineimap sync log. Basically the errors looked like this:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfc in position 1299: invalid start byte
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xeb in position 1405: invalid continuation byte

Solution

If you encounter it as well (and you use mutt or neomutt), please have a look at this great comment on Github from Joseph Ishac (jishac) since his tip solved the issue for me.

To "fix" this issue for future emails, I modified my .neomuttrc and commented out the default send encoding charset and omitted the iso-8859-1 part:

#set send_charset = "us-ascii:iso-8859-1:utf-8"
set send_charset = "us-ascii:utf-8"

Then I looked through the email files on the filesystem and identified the ISO-8859 encoded emails in the Sent folder which are causing the current issues:

$ file * | grep "ISO-8859"
1520672060_0.1326046.desktop,U=65,FMD5=7f8c0215f16ad5caed8e632086b81b9c:2,S: ISO-8859 text, with very long lines
1521626089_0.43762.desktop,U=74,FMD5=7f8c02831a692adaed8e632086b81b9c:2,S:   ISO-8859 text
1525607314.R13283589178011616624.desktop:2,S:                                ISO-8859 text

That left me with opening the files with vim and saving them with the correct encoding:

:set fileencoding=utf8
:wq
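Alternatively, if many files are affected, a batch conversion with iconv could save some typing. This is just a sketch, assuming the files really are ISO-8859-1; back up your Maildir first:

# convert every ISO-8859 encoded mail file in the current folder to UTF-8
for f in *; do
    if file "$f" | grep -q "ISO-8859"; then
        iconv -f ISO-8859-1 -t UTF-8 "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    fi
done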

Voila, mission accomplished:

$ file * | grep "UTF-8"
1520672060_0.1326046.desktop,U=65,FMD5=7f8c0215f16ad5caed8e632086b81b9c:2,S: UTF-8 Unicode text, with very long lines
1521626089_0.43762.desktop,U=74,FMD5=7f8c02831a692adaed8e632086b81b9c:2,S:   UTF-8 Unicode text
1525607314.R13283589178011616624.desktop:2,S:                                UTF-8 Unicode text

Sunday, 27 June 2021

Everything we release in KDE Gear is maintained

During Akademy some members of the community said stuff like "app/library XXX and YYY are not maintained even if we ship them in KDE Gear".


Which is not true, everything released in KDE Gear is maintained, there may not be an explicit maintainer, but there is shared community maintenance.


When confronting a particular person about it, they said "but look, there has not been any new commit that isn't either translation improvements or adapting to code deprecations/etc.", to which I said, "yes, that is exactly what maintenance means".


Road maintenance doesn't mean adding more lanes to the road every month; it just means making sure the road does not break and fixing it a bit when/if it happens.


The same applies to software maintenance, the fact that there are no new features does not mean the software is not maintained.


Now, it is true that with community maintenance it's possible that the greater "we" has missed that something has stopped working. In that case, please raise it up, and we'll evaluate whether it's fixable or if indeed we want to declare that particular bit of software as unmaintained and stop shipping it in KDE Gear.

Wednesday, 23 June 2021

How the Netherlands group grew in Covid times

Before André and I became coordinators for the Netherlands, we used to have yearly community meetings at both the Dutch T-Dose and the Belgian FOSDEM conferences. In order to build the FSFE NL community, as new coordinators we started having a regular booth at the national Linux user group (NLLGG) that met bi-monthly in Utrecht, which is quite central in the Netherlands. The NLLGG has a large space for booths, so we could do our outreach alongside the various sessions and other booths that were going on. Via that regular booth, we had more frequent interactions with like-minded people, including the friends we had made over the years. Of course there were still people that were interested to stay in touch with the FSFE, but didn’t want to travel half the country. At least we had email and chat as an option for those more distant relationships.

 

And then Covid came to the Netherlands, and we were forced to change our ways. We could no longer meet at the physical NLLGG location. There was an online NLLGG session we joined, but as expected the main focus was on the Linux users there and not on our overlapping FSFE group. Eventually, in the autumn, we just tried our luck with an online session of our own. Luckily the FSFE had just launched their conference server based on BigBlueButton, so the required freedom-respecting infrastructure was already in place. We held our first meeting on the 28th of October 2020, which we announced on the FSFE website, on our XMPP group and on the Netherlands mailinglist (contact details on our BNL wiki page).

The first meeting was a bit rough. As can be expected with the hotchpotch of computer setups, there were various issues with audio and webcams. Still we had a nice meeting of about 1.5 hours to discuss various topics that were keeping our minds occupied. With everybody locked up at home, it was a welcome change to chat with people with similar interests whom you would normally only meet by going to a conference or other community event. The format of the meeting was very much the same as at the booth: just a relaxed group conversation on free software related issues.

We kept on doing the online meetings by just scheduling another one at the end of each meeting. We recently had our 9th online get-together already. The attendance varies somewhere between 5 and 9 persons. In the meantime we have settled on the 3rd Wednesday of the month, so it has become a regular thing with regular attendance. Every meeting is somewhat of a surprise, because you just don't know exactly what it will bring. Some new people might join, there might be some new and interesting subjects being tabled, and there could be a strong collaboration on an opportunity. For the last meeting we started compiling a list of topics beforehand on an etherpad, so we can make an explicit decision about which topics to spend time discussing.

We had one special occasion on the 25th of November 2020, when we had a sneak preview of the Dutch audio translation of the PMPC video. There was a larger attendance than usual, and Matthias was kind enough to join and speak some kind words about our local efforts.

In our meetings there is a lively discussion on current free software related topics, which helps to form opinions and perhaps even a plan of action. I like it when these discussions result in an idea for an action we could take from the FSFE perspective, like reaching out to politicians or governmental organizations. The Dutch supporters have quite a bit of activism experience among them. Besides the general discussion, plenty of day-to-day advice is shared on how to maintain or increase your practical freedom.

The meetings are open to a wider audience, so if you are interested in attending, just sign up by emailing me and I'll send you the meeting room details. The meetings are announced on the FSFE events page. We have successfully switched to English on multiple occasions to be inclusive of other nationalities, so language shouldn't be a problem. So maybe I'll see you there?

Friday, 11 June 2021

I'm back in the boat

In mid-2014 I first heard about Jolla and Sailfish OS and immediately bought a Jolla 1; wrote apps; participated in the IGG campaign for Jolla Tablet; bought the TOHKBD2; applied for (and got) Jolla C.

Sounds like the beginning of a good story doesn’t it?

Well, by the beginning of 2017 I had sold everything (except the tablet, we all know what happened to that one).

So what happened?? I was a happy Sailfish user, but Jolla’s false promises disappointed me.

Yet, despite all that, I still think about Sailfish OS to this day. I think it's because, despite some proprietary components, the ecosystem around Sailfish OS is ultimately open source. And that's what interests me. It also got a fresh update which solves some of the problems that were there 5 years ago.

Nowadays, thanks to the community, Sailfish OS can be installed on many devices, even if with some components missing, but I'm looking for that complete experience, and so I asked on the forum if there was someone willing to sell their Xperia device with or without the license… and I got one for free. Better still, in exchange for some apps!

To decide which applications to create, I therefore took a look at that ecosystem. I started with the apps I use daily on Android and looked for the Sailfish OS alternative (spoiler: I’m impressed, good job guys!).

I am writing them all here because I am sure it will be useful to someone else:

  • AntennaPod (podcast app) -> PodQast
  • Ariane (gemini protocol browser)
  • AsteroidOS (AsteroidOS sync) -> Starfish
  • Connectbot (ssh client) -> built-in (Terminal)
  • Conversation (xmpp client) -> built-in (Messaging)
  • Davx5 (caldav/cardav) -> built-in (Account)
  • DroidShows (TV series) -> SeriesFinale
  • Element (Matrix client) -> Determinant
  • Endoscope (camera stream)
  • Fedilab (Mastodon client) -> Tooter
  • ForkHub (GitHub client) -> SailHub
  • FOSS Browser -> built-in (Browser)
  • FreeOTP -> SailOTP
  • Glider (hacker news reader) -> SailHN
  • K-9 Mail -> built-in (Mail)
  • KDE Connect (KDE sync) -> SailfishConnect
  • Keepassx (password manager) -> ownKeepass
  • Labcoat (GitLab client)
  • Lemmur (Lemmy client)
  • MasterPassword (password manager) -> MPW
  • MuPDF (PDF reader) -> built-in (Documents)
  • Newpipe (YouTube client) -> YTPlayer
  • Nextcloud (Nextcloud files) -> GhostCloud
  • Notes (Nextcloud notes) -> Nextcloud Notes
  • OCReader (Nextcloud RSS) -> Fuoten
  • OsmAnd~ (Maps) -> PureMaps
  • Printing (built-in) -> SeaPrint
  • QuickDic (dictionary) -> Sidudict
  • RedMoon (screen color temperature) -> Tint Overlay
  • RedReader (Reddit client) -> Quickddit
  • Signal -> Whisperfish
  • Syncthing (files sync) -> there’s the binary, no UI
  • Transdroid (Transmission client) -> Tremotes
  • Vinyl (music player) -> built-in (Mediaplayer)
  • VLC (NFS streaming) -> videoPlayer
  • WireGuard (VPN) -> there’s the binary, no UI
  • YetAnotherCallBlocker (call blocker) -> Phonehook

So, to me it looks like almost everything is there, except:

I’ve already started to write a UI for Syncthing, then maybe I could write the browser for the gemini protocol or rather the GitLab client?

Please consider a donation if you would like to support me (mention your favourite project!).

Liberapay

Many many thanks to Jörg who sent me his Sony Xperia 10 Plus! I hope I don’t disappoint him!

Wednesday, 09 June 2021

KDE Gear 21.08 releases schedule finalized

It is available at the usual place https://community.kde.org/Schedules/KDE_Gear_21.08_Schedule

 
Dependency freeze is in four weeks (July 8) and Feature Freeze a week after that, make sure you start finishing your stuff!

Saturday, 05 June 2021

Deployed my blog on Kubernetes

One of the most well-known k8s memes is the below image, which represents the effort and complexity of building a kubernetes cluster just to run a simple blog. So in this article, I will take the opportunity to install a simple blog engine on kubernetes using k3s!

k8s_blog.jpg

terraform - libvirt/qemu - ubuntu

For this demo, I will be working on my local test lab: a libvirt/qemu ubuntu 20.04 virtual machine via terraform. You can find my terraform notes in my github repo tf/0.15/libvirt/0.6.3/ubuntu/20.04.

k3s

k3s is a lightweight, fully compliant kubernetes distribution that can run on a virtual machine as a single node.

log in to your machine and become root

$ ssh 192.168.122.42 -l ubuntu
$ sudo -i
#

install k3s with one command

curl -sfL https://get.k3s.io | sh -

output should be something like this

[INFO]  Finding release for channel stable

[INFO]  Using v1.21.1+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.21.1+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.21.1+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

Firewall Ports

I would propose to open the below network ports so k3s can run smoothly.

Inbound Rules for K3s Server Nodes

PROTOCOL   PORT        SOURCE                        DESCRIPTION
TCP        6443        K3s agent nodes               Kubernetes API Server
UDP        8472        K3s server and agent nodes    Required only for Flannel VXLAN
TCP        10250       K3s server and agent nodes    Kubelet metrics
TCP        2379-2380   K3s server nodes              Required only for HA with embedded etcd

Typically all outbound traffic is allowed.

ufw allow

ufw allow 6443/tcp
ufw allow 8472/udp
ufw allow 10250/tcp
ufw allow 2379/tcp
ufw allow 2380/tcp

full output

# ufw allow 6443/tcp
Rule added
Rule added (v6)

# ufw allow 8472/udp
Rule added
Rule added (v6)

# ufw allow 10250/tcp
Rule added
Rule added (v6)

# ufw allow 2379/tcp
Rule added
Rule added (v6)

# ufw allow 2380/tcp
Rule added
Rule added (v6)

k3s Nodes / Pods / Deployments

verify nodes, roles, pods and deployments

# kubectl get nodes -A
NAME         STATUS   ROLES                  AGE   VERSION
ubuntu2004   Ready    control-plane,master   11m   v1.21.1+k3s1

# kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   helm-install-traefik-crd-8rjcf            0/1     Completed   2          13m
kube-system   helm-install-traefik-lwgcj                0/1     Completed   3          13m
kube-system   svclb-traefik-xtrcw                       2/2     Running     0          5m13s
kube-system   coredns-7448499f4d-6vrb7                  1/1     Running     5          13m
kube-system   traefik-97b44b794-q294l                   1/1     Running     0          5m14s
kube-system   local-path-provisioner-5ff76fc89d-pq5wb   1/1     Running     6          13m
kube-system   metrics-server-86cbb8457f-n4gsf           1/1     Running     6          13m

# kubectl get deployments -A
NAMESPACE     NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   coredns                  1/1     1            1           17m
kube-system   traefik                  1/1     1            1           8m50s
kube-system   local-path-provisioner   1/1     1            1           17m
kube-system   metrics-server           1/1     1            1           17m

Helm

The next thing is to install helm. Helm is a package manager for kubernetes; it makes it easy to install applications.

curl -sL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

output

Downloading https://get.helm.sh/helm-v3.6.0-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
helm version

version.BuildInfo{Version:"v3.6.0", GitCommit:"7f2df6467771a75f5646b7f12afb408590ed1755", GitTreeState:"clean", GoVersion:"go1.16.3"}

repo added

As a package manager, helm lets you install k8s packages, called charts, and you can find a lot of helm charts at https://artifacthub.io/. You can also add/install a single repo; I will explain this later.

# helm repo add nicholaswilde https://nicholaswilde.github.io/helm-charts/

"nicholaswilde" has been added to your repositories

# helm repo update
Hang tight while we grab the latest from your chart repositories...

Successfully got an update from the "nicholaswilde" chart repository
Update Complete. ⎈Happy Helming!⎈

hub Vs repo

The basic difference between hub and repo is that hub is the official artifacthub. You can search for charts there:

helm search hub blog
URL                                                 CHART VERSION   APP VERSION DESCRIPTION
https://artifacthub.io/packages/helm/nicholaswi...  0.1.2           v1.3        Lightweight self-hosted facebook-styled PHP blog.
https://artifacthub.io/packages/helm/nicholaswi...  0.1.2           v2021.02    An ultra-lightweight blogging engine, written i...
https://artifacthub.io/packages/helm/bitnami/dr...  10.2.23         9.1.10      One of the most versatile open source content m...
https://artifacthub.io/packages/helm/bitnami/ghost  13.0.13         4.6.4       A simple, powerful publishing platform that all...
https://artifacthub.io/packages/helm/bitnami/jo...  10.1.10         3.9.27      PHP content management system (CMS) for publish...
https://artifacthub.io/packages/helm/nicholaswi...  0.1.1           0.1.1       A Self-Hosted, Twitter™-like Decentralised micr...
https://artifacthub.io/packages/helm/nicholaswi...  0.1.1           900b76a     A self-hosted well uh wiki engine or content ma...
https://artifacthub.io/packages/helm/bitnami/wo...  11.0.13         5.7.2       Web publishing platform for building blogs and ...

using a repo means that you specify chart sources from a single repo (or multiple repos), usually outside of the hub.

helm search repo blog
NAME                        CHART VERSION   APP VERSION DESCRIPTION
nicholaswilde/blog          0.1.2           v1.3        Lightweight self-hosted facebook-styled PHP blog.
nicholaswilde/chyrp-lite    0.1.2           v2021.02    An ultra-lightweight blogging engine, written i...
nicholaswilde/twtxt         0.1.1           0.1.1       A Self-Hosted, Twitter™-like Decentralised micr...
nicholaswilde/wiki          0.1.1           900b76a     A self-hosted well uh wiki engine or content ma...

Install a blog engine via helm

before we continue with the installation of our blog engine, we need to set the kube config via a shell variable

kube configuration yaml file

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

kubectl-k3s already knows where to find this yaml configuration file. kubectl is a link to k3s in our setup:

# whereis kubectl
kubectl: /usr/local/bin/kubectl

# ls -l /usr/local/bin/kubectl
lrwxrwxrwx 1 root root 3 Jun  4 23:20 /usr/local/bin/kubectl -> k3s

but not helm, which we just installed.
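If you do not want to export the variable in every new shell, one option (not required for this demo) is to append it to root's shell profile:

echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> /root/.bashrc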

After that we can install our blog engine.

helm install chyrp-lite              \
  --set env.TZ="Europe/Athens"  \
  nicholaswilde/chyrp-lite

output

NAME: chyrp-lite
LAST DEPLOYED: Fri Jun  4 23:46:04 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Get the application URL by running these commands:
  http://chyrp-lite.192.168.1.203.nip.io/

for the time being, ignore nip.io and verify the deployment

# kubectl get deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
chyrp-lite   1/1     1            1           2m15s

# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
chyrp-lite-5c544b455f-d2pzm   1/1     Running   0          2m18s

Port Forwarding

as this is a pod running through k3s inside a virtual machine on our host operating system, in order to visit the blog and finish the installation we need to expose the port.

Let’s find out if there is a service running

kubectl get service chyrp-lite

output

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
chyrp-lite   ClusterIP   10.43.143.250   <none>        80/TCP    11h

okay we have a cluster ip.

you can also verify that our blog engine is running

curl -s 10.43.143.250/install.php | head

<!DOCTYPE html>
<html>
    <head>
        <meta charset="UTF-8">
        <title>Chyrp Lite Installer</title>
        <meta name="viewport" content="width = 800">
        <style type="text/css">
            @font-face {
                font-family: 'Open Sans webfont';
                src: url('./fonts/OpenSans-Regular.woff') format('woff');

and then port forward the pod tcp port to our virtual machine

kubectl port-forward service/chyrp-lite 80

output

Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80

k3s issue with TCP Port 80

Port 80 is used by the built-in load balancer by default.

That means service port 80 will become 10080 on the host, but 8080 will become 8080 without any offset.
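You can see the built-in load balancer at work by looking at the traefik service that k3s deploys in kube-system; it is of type LoadBalancer and (by default) holds ports 80 and 443:

kubectl get service -n kube-system traefik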

So the above command will not work; it will give you a 404 error.
We can disable the LoadBalancer (we do not need it for this demo), but it is easier to just forward the service port to 10080:

kubectl port-forward service/chyrp-lite 10080:80
Forwarding from 127.0.0.1:10080 -> 80
Forwarding from [::1]:10080 -> 80
Handling connection for 10080
Handling connection for 10080

from our virtual machine we can verify

curl -s http://127.0.0.1:10080/install.php  | head

it will produce

<!DOCTYPE html>
<html>
    <head>
        <meta charset="UTF-8">
        <title>Chyrp Lite Installer</title>
        <meta name="viewport" content="width = 800">
        <style type="text/css">
            @font-face {
                font-family: 'Open Sans webfont';
                src: url('./fonts/OpenSans-Regular.woff') format('woff');

ssh port forward

So now, we need to forward this TCP port from the virtual machine to our local machine. Using ssh, you should be able to do it like this from another terminal

ssh 192.168.122.42 -l ubuntu -L8080:127.0.0.1:10080

verify it

$ sudo ss -n -t -a 'sport = :10080'

State           Recv-Q          Send-Q                   Local Address:Port                    Peer Address:Port         Process
LISTEN          0               128                          127.0.0.1:10080                        0.0.0.0:*
LISTEN          0               128                              [::1]:10080                           [::]:*

$ curl -s http://localhost:10080/install.php | head

<!DOCTYPE html>
<html>
    <head>
        <meta charset="UTF-8">
        <title>Chyrp Lite Installer</title>
        <meta name="viewport" content="width = 800">
        <style type="text/css">
            @font-face {
                font-family: 'Open Sans webfont';
                src: url('./fonts/OpenSans-Regular.woff') format('woff');

I am forwarding to a high tcp port (> 1024) so my user can open the tcp port; otherwise I would need to be root.
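If you prefer to keep that terminal free, the same tunnel can also run in the background with standard ssh flags (nothing k3s specific about this):

# -f: go to background after authentication, -N: do not run a remote command
ssh 192.168.122.42 -l ubuntu -f -N -L 8080:127.0.0.1:10080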

finishing the installation

To finish the installation of our blog engine, we need to visit the below url from our browser

http://localhost:10080/install.php

Database Setup

chyrplite01.jpg

Admin Setup

chyrplite02.jpg

Installation Completed

chyrplite03.jpg

First blog post

chyrplite04.jpg

that’s it !

Wednesday, 02 June 2021

PGPainless 0.2 Released!

I’m very proud and excited to announce the release of PGPainless version 0.2! Since the last stable release of my OpenPGP library for Java and Android 9 months ago, a lot has changed and improved! Most importantly development on PGPainless is being financially sponsored by FlowCrypt, so I was able to focus a lot more energy into working on the library. I’m very grateful for this opportunity 🙂

A red letter, sealed with a wax seal
Photo by Natasya Chen on Unsplash

PGPainless is using Bouncycastle, but aims to save developers from the pain of writing lots of boilerplate code, while at the same time using the BC API properly. The new release is available on maven central, the source code can be found on Github and Codeberg.

PGPainless is now depending on Bouncycastle 1.68 and the minimum Android API level has been raised to 10 (Android 2.3.3). Let me bring you up to speed about some of its features and the recent changes!

Inspect Keys!

Back in the last stable release, PGPainless could already be used to generate keys. It offered some shortcut methods to quickly generate archetypal keys, such as simple RSA keys, or key rings based on elliptic curves. In version 0.2, support for additional key algorithms was added, such as EdDSA or XDH.

The new release introduces PGPainless.inspectKeyRing(keyRing) which allows you to quickly access information about a key, such as its user-ids, which subkeys are encryption capable and which can be used to sign data, their expiration dates, algorithms etc.

Furthermore this feature can be used to evaluate a key at a certain point in time. That way you can quickly check which key flags or algorithm preferences applied to the key 3 weeks ago, when that key was used to create that signature you care about. Or you can check which user-ids your key had 5 years ago.

Edit Keys!

Do you already have a key, but want to extend its expiration date? Do you have a new Email address and need to add it as a user-id to your key? PGPainless.modifyKeyRing(keyRing) allows basic modification of a key. You can add additional user-ids, adopt subkeys into your key, or expire/revoke existing subkeys.

secretKeys = PGPainless.modifyKeyRing(secretKeys)
                       .setExpirationDate(expirationDate, keyRingProtector)
                       .addSubKey(subkey, subkeyProtector, keyRingProtector)
                       .addUserId(UserId.onlyEmail("alice@pgpainless.org"), keyRingProtector)
                       .deleteUserId("alice@pgpainful.org", keyRingProtector)
                       .revokeSubkey(subKeyId, keyRingProtector)
                       .revokeUserId("alice@pgpaintrain.org", keyRingProtector)
                       .changePassphraseFromOldPassphrase(oldPass)
                           .withSecureDefaultSettings().toNewPassphrase(newPass)
                       .done();

Encrypt and Sign Effortlessly!

PGPainless 0.2 comes with an improved, simplified encryption/signing API. While the old API was already quite intuitive, I was focusing too much on the code being “autocomplete-friendly”. My vision was that the user could encrypt a message without ever having to read a bit of documentation, simply by typing PGPainless and then following the autocomplete suggestions of the IDE. While the result was successful in that, the code was not very friendly to bind to real-world applications, as there was not one builder class, but several (one for each “step”). As a result, if a user wanted to configure the encryption dynamically, they would have to keep track of different builder objects and cope with casting madness.

// Old API
EncryptionStream encryptionStream = PGPainless.createEncryptor()
        .onOutputStream(targetOuputStream)
        .toRecipient(aliceKey)
        .and()
        .toRecipient(bobsKey)
        .and()
        .toPassphrase(Passphrase.fromPassword("password123"))
        .usingAlgorithms(SymmetricKeyAlgorithm.AES_192, HashAlgorithm.SHA256, CompressionAlgorithm.UNCOMPRESSED)
        .signWith(secretKeyDecryptor, aliceSecKey)
        .asciiArmor();

Streams.pipeAll(plaintextInputStream, encryptionStream);
encryptionStream.close();

The new API is still intuitive, but at the same time it is flexible enough to be modified with future features. Furthermore, since the builder has been divided it is now easier to integrate PGPainless dynamically.

// New shiny 0.2 API
EncryptionStream encryptionStream = PGPainless.encryptAndOrSign()
        .onOutputStream(outputStream)
        .withOptions(
                ProducerOptions.signAndEncrypt(
                        new EncryptionOptions()
                                .addRecipient(aliceKey)
                                .addRecipient(bobsKey)
                                // optionally encrypt to a passphrase
                                .addPassphrase(Passphrase.fromPassword("password123"))
                                // optionally override symmetric encryption algorithm
                                .overrideEncryptionAlgorithm(SymmetricKeyAlgorithm.AES_192),
                        new SigningOptions()
                                // Sign in-line (using one-pass-signature packet)
                                .addInlineSignature(secretKeyDecryptor, aliceSecKey, signatureType)
                                // Sign using a detached signature
                                .addDetachedSignature(secretKeyDecryptor, aliceSecKey, signatureType)
                                // optionally override hash algorithm
                                .overrideHashAlgorithm(HashAlgorithm.SHA256)
                ).setAsciiArmor(true) // Ascii armor
        );

Streams.pipeAll(plaintextInputStream, encryptionStream);
encryptionStream.close();

Verify Signatures Properly!

The biggest improvement to PGPainless 0.2 is improved, proper signature verification. Prior to this release, PGPainless was doing what probably every other Bouncycastle-based OpenPGP library was doing:

PGPSignature signature = [...];
// Initialize the signature with the public signing key
signature.init(pgpContentVerifierBuilderProvider, signingKey);

// Update the signature with the signed data
int read;
while ((read = signedDataInputStream.read()) != -1) {
    signature.update((byte) read);
}

// Verify signature correctness
boolean signatureIsValid = signature.verify();

The point is that the code above only verifies signature correctness (that the signing key really made the signature and that the signed data is intact). It does however not check if the signature is valid.

Signature validation goes far beyond plain signature correctness and entails a whole suite of checks that need to be performed. Is the signing key expired? Was it revoked? If it is a subkey, is it bound to its primary key correctly? Has the primary key expired or revoked? Does the signature contain unknown critical subpackets? Is it using acceptable algorithms? Does the signing key carry the SIGN_DATA flag? You can read more about why signature verification is hard in my previous blog post.

After implementing all those checks in PGPainless, the library now scores second place on Sequoia’s OpenPGP Interoperability Test Suite!

Lastly, support for verification of cleartext-signed messages such as emails was added.

New SOP module!

Also included in the new release is a shiny new module: pgpainless-sop

This module is an implementation of the Stateless OpenPGP Command Line Interface specification. It basically allows you to use PGPainless as a command line application to generate keys, encrypt/decrypt, sign and verify messages etc.

$ # Generate secret key
$ java -jar pgpainless-sop-0.2.0.jar generate-key "Alice <alice@pgpainless.org>" > alice.sec
$ # Extract public key
$ java -jar pgpainless-sop-0.2.0.jar extract-cert < alice.sec > alice.pub
$ # Sign some data
$ java -jar pgpainless-sop-0.2.0.jar sign --armor alice.sec < message.txt > message.txt.asc
$ # Verify signature
$ java -jar pgpainless-sop-0.2.0.jar verify message.txt.asc alice.pub < message.txt 
$ # Encrypt some data
$ java -jar pgpainless-sop-0.2.0.jar encrypt --sign-with alice.sec alice.pub < message.txt > message.txt.asc
$ # Decrypt ciphertext
$ java -jar pgpainless-sop-0.2.0.jar decrypt --verify-with alice.pub --verify-out=verif.txt alice.sec < message.txt.asc > message.txt

The primary reason for creating this module though was that it enables PGPainless to be plugged into the interoperability test suite mentioned above. This test suite uncovered a ton of bugs and shortcomings and helped me massively to understand and interpret the OpenPGP specification. I can only urge other developers who work on OpenPGP libraries to implement the SOP specification!

Upstreamed changes

Even if you are not yet convinced to switch to PGPainless and want to keep using vanilla Bouncycastle, you might still benefit from some bugfixes that were upstreamed to Bouncycastle.

Every now and then, for example, BC would fail to do some internal conversions of elliptic curve encryption keys. The source of this issue was that BC was converting keys from BigIntegers to byte arrays, which could end up with an invalid length when the encoding had leading zeros, thus omitting one byte. Fixing this was easy, but finding the bug took quite some time.

Another bug caused decryption of messages which were encrypted for more than one key/passphrase to fail, when BC tried to decrypt a Symmetrically Encrypted Session Key Packet with the wrong key/passphrase first. The cause of this issue was that BC was not properly rewinding the decryption stream after reading a checksum, thus corrupting decryption for subsequent attempts with the correct passphrase or key. The fix was to mark and rewind the stream properly before the next decryption attempt.

Lastly some methods in BC have been modernized by adding generics to Iterators. Chances are if you are using BC 1.68, you might recognize some changes once you bump the dependency to BC 1.69 (once it is released of course).

Thank you!

I would like to thank anyone who contributed to the new release in any way or form for their support. Special thanks goes to my sponsor FlowCrypt for giving me the opportunity to spend so much time on the library! Furthermore I’d like to thank all the amazing folks over at Sequoia-PGP for their efforts of improving the OpenPGP ecosystem and for patiently helping me understand the (at times a bit muddy) OpenPGP specification.

Monday, 31 May 2021

How i ended up fixing a "not a bug" in Qt Quick that made apostrophes not be rendered while reviewing an Okular patch

But in Okular we don't use Qt Quick you'll say!


Well, we actually use Qt Quick in the mobile interface of Okular, but you're right, this was not a patch for the mobile version, so "But in Okular we don't use Qt Quick!"

 

The story goes like this.

I was reviewing https://invent.kde.org/graphics/okular/-/merge_requests/431 and was not super convinced of the look and feel of it, so Nate gave some examples and one example was "the sidebar in Elisa".

 

So I started Elisa and saw this

You probably don't speak Catalan, so you won't see that lesquerra is a mistake; it should be l'esquerra

 

"Oh, the translators made a mistake", i thought, so i went to the .po file https://websvn.kde.org/trunk/l10n-kf5/ca/messages/elisa/elisa.po?view=markup#l638 and, oh surprise! the translators had not made a mistake (well, not really a surprise since they're really good).


Ok, so what's wrong then?


I had a look at the .qml code https://invent.kde.org/multimedia/elisa/-/blob/release/21.04/src/qml/MediaPlayListView.qml#L333 and it really looked simple enough.

 

I had never seen a Kirigami.PlaceholderMessage before but looking at it, well the text ended up in a Kirigami.Heading and then in a QtQuick Label. Nothing really special.


Weeeeeeird.


Ok, i said, let's replace the whole xi18nc in there with the translated text and run elisa. And then i could see the ' properly.


Hmmmm, ok then the problem had to be xi18nc right? Well it was and it was not.


The problem is that xi18nc translates ' to &apos; to be more "html compliant" https://invent.kde.org/frameworks/ki18n/-/blob/master/src/kuitmarkup.cpp#L40

 

And unfortunately the Qt Declarative 5.15 Text item's default "html support" is something Qt calls "Styled Text", which only supports five HTML entities, and apos is not one of them https://code.qt.io/cgit/qt/qtdeclarative.git/tree/src/quick/util/qquickstyledtext.cpp?h=5.15.2#n558

 

So where is the bug?

 

The Text documentation clearly states that &apos; is in the list of supported entities.

One could say the bug is ours: every time we use xi18nc in a Text we should remember to change the "html support" to something Qt calls "RichText", which actually supports &apos;.


But we all know that's never going to happen :D, and in this case it would actually be hard since Kirigami.PlaceholderMessage doesn't even expose the format property of the inner text.

So I convinced our friends at Qt that only supporting five entities is not great and now we support six ^_^ in Qt 6.2, 6.1 and 5.15 (if you use the KDE Qt patch collection) https://invent.kde.org/qt/qt/qtdeclarative/-/merge_requests/3/diffs


P.S: I have a patch in the works to support all HTML entities in StyledText but somehow it's breaking the tests, so well, need to work more on it :)

Tuesday, 25 May 2021

When it takes a pandemic ...

to understand the speed of innovation.

2020 was a special year for all of us - with "us" here meaning the entire world: faced with a truly urgent global problem, that year was a learning opportunity for everyone.

For me personally the year started like any other year - except that the news coming out of China was troubling. Little did I know how fast that news would reach the rest of the world - little did I know the impact that this would have.

I started the year with FOSDEM in Brussels in February - like every other year, except it felt decidedly different going to this event with thousands of attendees, crammed into overfull university rooms.

Not a month later, travel budgets in many corporations had been frozen. The last in-person event that I went to was FOSS Backstage - incapable as I was of imagining just how long this would remain the last in-person event I would go to. To this date I'm grateful to Bertrand for teaching the organising team just how much can be conveyed through video calls - and I'm still grateful for the technicians onsite who made speaker-attendee interaction seamless - across several hundred miles.

One talk from FOSS Backstage that I went back to over and over during 2020 and 2021 was the one given by Emmy Tsang on Open Science.

Referencing the then new pandemic she made a very impressive case for open science, for open collaboration - and for the speed of innovation that comes from that open collaboration. More than anything else I had heard or read before it made it clear to me what everyone means by explaining how Open Source (and by extension internally InnerSource) increases the speed of innovation substantially:

Instead of having everyone start from scratch, instead of wasting time and time again to build the basic foundation - instead we can all collaborate on that technological foundation and focus on key business differentiators. Or as Danese Cooper would put it: "All the boats must rise." What I found most amazing during this pandemic were the moments during which we saw scientists from all sorts of disciplines work together - and doing so in a very open and accessible way. Among all the chaos and misinformation, voices for reliable and dependable information emerged. We saw practitioners add value to the discussion.

With information shared in pre-print format, groups could move much faster than the usual one-year innovation cycle. Yes, it meant more trash would make it through as well. And still we wouldn't be where we are today if humans across the globe, no matter their nationality or background, hadn't had the chance to collaborate and move faster as a result.

Somehow that's at a very large scale the same effect seen in other projects:

  • RoboCup only moved as fast as it did by opening up the solutions of the winning teams each year. As a result, new teams would get a head start with designs and programs readily available. Instead of starting from scratch, they could stand on the shoulders of giants.
  • Open source helps achieve the same on a daily basis. There's a very visible sign for that: Perseverance on Mars is running on open source software. Every GitHub user who in their life has contributed to software running on Perseverance today has a badge on their GitHub profile - there are countless badges serving as proof of just how many hands it took, how much collaboration was necessary to make this project work.


For me one important learning during this pandemic was just how much we can achieve by working together, by collaborating and building bridges. In that sense, what we have seen is how it is possible to gain so much more by sharing - as Niels Basjes put it so nicely when explaining the Apache Way in one sentence: Gaining by Sharing.

In a sense this is what brought me to the InnerSource Commons Foundation - it's a way for all of us to experience the strength of collaboration. It's a first step towards bringing more people and more businesses to the open source world, joining forces to solve issues ahead of us.

Monday, 24 May 2021

Sharing your loan details to anyone

A week ago, I blogged about a vulnerability in a platform that would allow anyone to download users’ amortisation schedules. This was a critical issue, but it wasn’t really exploitable in the wild as it included a part where you had to guess the name of the document to download.

I no longer trust that platform so I went to their website to remove my loan data from it, but apparently this isn’t possible via the UI.

I also opened a ticket on their support platform to request removal and they replied that it isn’t possible.

So I went to their website with the intention of replacing the data with a fake one… but there was no longer an edit button!

Loans

I’m sure it was there before and in fact the code also confirms that it was there:

Loans code

However, the platform is based on Magento and so, starting from the current URL, we can easily guess the edit URL, e.g. https://<host>/anagraficamutui/mutuo/edit/id/<n>.

Let’s try 1… bingo!

But wait a minute… this isn’t my loan! Luckily it’s just a demo entry put in by some developer:

Someone else loan

Even though it’s a dummy page, we can already see the details of the loan, such as the (hopefully) fake IBAN, the loan total and loan number, and even the bank contact person's name and email address.

And now take a look at this: if I try to access that page in private mode, then I get the login page. All (almost) well, right?

Nope. Let’s try the same request via curl:

$ curl -s https://<host>/anagraficamutui/edit/id/1 | grep banca

<input type="text" name="istituto_credito" id="istituto_credito" value="banca acme" title="Nome istituto" class="input-text istituto_credito required-entry" />

$ curl -s https://<host>/anagraficamutui/edit/id/1 | grep NL75

<input type="text" name="iban" id="iban" value="NL75xxxxxxxxx" title="Iban" class="input-text iban required-entry validate-iban validate-length maximum-length-27 validate-alphanum" />

Wait a minute, what’s going on?

Well, it turns out that the page sets the location header to redirect you to the login page when there’s no cookie, otherwise it prints the HTML page!

$ curl -s https://<host>/anagraficamutui/edit/id/1 -I | grep location

location: https://<host>/customer/account/login/

Oh-no!

Conclusion

Data from 5723 loans could have been exposed by accessing a specific URL. Details such as IBAN, loan number, loan total and the bank account contact person could have been used to perform spear phishing attacks.

I reported this privacy flaw to the CSIRT Italia and the platform’s DPO. The issue was solved after 2 days, but I still haven’t heard from them.

Wednesday, 19 May 2021

Sharing your amortisation schedule to anyone

Last month, my company allowed me to claim some benefits through a dedicated platform. This platform is specifically built for this purpose and allows you to recover these benefits not only in the form of coupons or discount codes, but also as reimbursements for medical visits or interest on mortgage payments.

I wanted to try the latter.

I logged on to the platform and then I filled in all the (many) details about the loan that the platform asks you to fill in, until I had to upload my amortisation schedule, which contains a lot of sensitive data. In fact, a strange thing happened at this step: my file was named document.pdf, but after uploading it was renamed to document_2.pdf.

How do I know? Well, let’s have a look at the UI:

Loan details

Loan details hover

It clearly shows the file name and that’s also a hyperlink. Let’s click then.

The PDF opens in my browser. This is expected, but what happens if we take the URL and try to open it in a private window?? Guess what?

You guessed it.

Let’s have a look at the URL again. It’s in the form: https://<host>/media/mutuo/file/d/o/document_2.pdf.

That’s tempting, isn’t it?

I wanted to have some fun and I tried the following:

Loan download

Both the curl output and the checksums are enough to understand that some documents were downloaded there (but discarded, since I didn’t download them to my disk…).

Thus, since the d and o parent folders match the two initial letters of my file, I successfully tried with stuff like:

  • /c/o/contratto.pdf, /c/o/contratto_2.pdf, …
  • /c/o/contract.pdf, …
  • /p/r/prospetto.pdf, …

and it also works with numbers (to find this out I had to upload a file named 1.pdf 😇), e.g. https://<host>/media/mutuo/file/1/_/1_10.pdf.
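
To illustrate the enumeration idea, here is a minimal sketch of the probing step. It is purely hypothetical: the candidate filenames and the status-code check are assumptions, and <host> is the same placeholder used above.

#!/bin/bash
#
# Hypothetical sketch: derive the two-letter directory prefix from a few
# candidate filenames and probe the predictable upload path.
host="<host>"
for name in contratto.pdf contratto_2.pdf prospetto.pdf document_2.pdf; do
    a=${name:0:1}
    b=${name:1:1}
    url="https://${host}/media/mutuo/file/${a}/${b}/${name}"
    # print only the HTTP status code, discard the body
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
    echo "$code  $url"
done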

Conclusion

If you have uploaded your amortisation schedule to this platform, which on its website claims more than 300k users from 3k different companies, well, someone may have downloaded it.

I reported this privacy flaw to the CSIRT Italia via a PGP encrypted email; the CSIRT is supposed to write to the company that owns the platform to alert them to the problem, but a week later I still hadn’t heard from either of them. So after a week I pinged the CSIRT again, and they replied with a plain text email telling me that they had opened an internal ticket and were nice enough to embed my initial PGP encrypted email.

Two weeks later (about 21 days since my first mail) the platform fixed the problem (the uploaded file path isn’t deterministic anymore and authentication is in place), but I still haven’t heard from them.

Addendum

Since <host> is a third-level domain in my case, I used tools like Sublist3r and Amass (you can also use the online version hosted on nmmapper.com) to perform DNS enumeration, and I found ~50 websites, 30 of which are aliases pointing to the same host. In fact, I could replace <host> with each of them and I would always download my document_2.pdf file.
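
For reference, a minimal sketch of that enumeration step, assuming the tools are installed locally and with example.com standing in for the real second-level domain:

# enumerate subdomains (the domain is a placeholder)
sublist3r -d example.com -o subdomains.txt
amass enum -d example.com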

Sunday, 16 May 2021

Artificial Intelligence safety: embracing checklists

Unfortunately, human errors are bound to happen. Checklists allow one to verify that all the required actions are correctly done, and in the correct order. The military has it, the health care sector has it, professional diving has it, the aviation and space industries have it, software engineering has it. Why not artificial intelligence practitioners?

In October 1935, the two pilots of the new Boeing warplane B-17 were killed in the crash of the aircraft. The crash was caused by an oversight of the pilots, who forgot to release a lock during the takeoff procedure. Since then, following the checklist during flight operations has been mandatory and has reduced the number of accidents. During the Apollo 13 mission of 1970, carefully written checklists helped mitigate the oxygen tank explosion accident.

In healthcare, checklists are widespread too. For example, the World Health Organization released a checklist outlining the required steps before, during, and after a surgery. A meta-analysis suggested that using the checklist was associated with a reduction in mortality and complication rates.

Because artificial intelligence is used for increasingly important matters, accidents can have serious consequences. During a test, a chatbot suggested harmful behaviors to fake patients. The data scientists explained that the AI had no scientific or medical expertise. AI in law enforcement can also cause serious trouble. For example, a facial recognition software mistakenly identified a criminal, resulting in the arrest of an innocent person, and an algorithm used to determine the likelihood of crime recidivism was judged unfair towards black defendants. AI is also used in healthcare, where a simulation of the Covid-19 outbreak in the United Kingdom shaped policy and led to a nation-wide lockdown. However, the AI simulation was badly programmed, causing serious issues. Root cause analysis determined that the simulation was not deterministic and badly tested. The lack of a checklist could have played a role.

Just like in the aforementioned complex and critical industries, checklists should be leveraged to make sure the building and reporting of AI models includes everything required to reproduce results and make an informed judgement, which fosters trust in the AI's accuracy. However, checklists for building prediction models like TRIPOD are not often used by data scientists, even though they might help. Possible reasons might be ignorance about the existence of such checklists, perceived lack of usefulness, or misunderstandings caused by the use of different vocabularies among AI developers.

Enforcing the use of standardized checklists would lead to better idioms and practices, thereby fostering fair and accurate AI with a robust evaluation, making its adoption easier for sensitive tasks. In particular, a checklist on AI should include points about how the training dataset was constructed and preprocessed, the model specification and architecture, and how its efficiency, accuracy and fairness were assessed. A list of all intended purposes of the AI should also be disclosed, as well as known risks and limitations.

As AI is a novel field, one can understand why checklists are not yet widely used for it. However, they are used in other fields for the reasons described above, and learning from past mistakes and ideas would be great this time.

Saturday, 01 May 2021

systemd in WSLv2

I have been using archlinux in my WSL for the last two (2) years and the whole experience is quite smooth. I wanted to test whether native docker would run within WSL, without the windows docker/container service, so I installed docker. My main purpose is building packages, so (for now) I do not need networking/routes or anything else.

WSL

ebal@myworklaptop:~$ uname -a
Linux myworklaptop 4.19.128-microsoft-standard #1 SMP Tue Jun 23 12:58:10 UTC 2020 x86_64 GNU/Linux

ebal@myworklaptop:~$ cat /etc/os-release
NAME="Arch Linux"
PRETTY_NAME="Arch Linux"
ID=arch
BUILD_ID=rolling
ANSI_COLOR="38;2;23;147;209"
HOME_URL="https://www.archlinux.org/"
DOCUMENTATION_URL="https://wiki.archlinux.org/"
SUPPORT_URL="https://bbs.archlinux.org/"
BUG_REPORT_URL="https://bugs.archlinux.org/"
LOGO=archlinux

Docker Install

$ sudo pacman -S docker

$ sudo pacman -Q docker
docker 1:20.10.6-1

$ sudo dockerd -v
Docker version 20.10.6, build 8728dd246c

Run docker

sudo dockerd -D

-D is for debug

and now pull an alpine image

ebal@myworklaptop:~$ docker images
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE

ebal@myworklaptop:~$ docker pull alpine:latest
latest: Pulling from library/alpine
540db60ca938: Pull complete
Digest: sha256:69e70a79f2d41ab5d637de98c1e0b055206ba40a8145e7bddb55ccc04e13cf8f
Status: Downloaded newer image for alpine:latest
docker.io/library/alpine:latest

ebal@myworklaptop:~$ docker images
REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
alpine       latest    6dbb9cc54074   2 weeks ago   5.61MB

Test alpine image

docker run -ti alpine:latest ash

perform a simple update

# apk update

fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
v3.13.5-71-gfcabe3349a [https://dl-cdn.alpinelinux.org/alpine/v3.13/main]
v3.13.5-65-g28e7396caa [https://dl-cdn.alpinelinux.org/alpine/v3.13/community]

OK: 13887 distinct packages available

okay, seems that it is working.

Genie Systemd

as many already know, we can not run systemd inside WSL, at least not by default. So here comes genie !

A quick way into a systemd “bottle” for WSL

WSLv2 ONLY.

wsl.exe -l -v

  NAME            STATE           VERSION
* Archlinux       Running         2
  Ubuntu-20.04    Stopped         1

It will work on my arch.

Install Genie

genie provides an archlinux package artifact on its github releases page

curl -sLO https://github.com/arkane-systems/genie/releases/download/v1.40/genie-systemd-1.40-1-x86_64.pkg.tar.zst

sudo pacman -U genie-systemd-1.40-1-x86_64.pkg.tar.zst
$ pacman -Q genie-systemd
genie-systemd 1.40-1

daemonize

Genie depends on daemonize.

In Archlinux, you can find the latest version of daemonize here:

https://gitlab.com/archlinux_build/daemonize

$ sudo pacman -U daemonize-1.7.8-1-x86_64.pkg.tar.zst

loading packages...
resolving dependencies...
looking for conflicting packages...

Package (1)  New Version  Net Change
daemonize    1.7.8-1        0.03 MiB
Total Installed Size:  0.03 MiB

:: Proceed with installation? [Y/n] y
$ pacman -Q daemonize
daemonize 1.7.8-1

Is genie running ?

We can start genie as root with a new shell:

# genie --version
1.40

# genie -s
Waiting for systemd....!

# genie -r
running

okay !

Windows Terminal

In order to use systemd-genie by default, we need to run our WSL Archlinux with an initial command.

I use aka.ms/terminal to start/open my WSLv2 Archlinux so I had to edit the “Command Line” Option to this:

wsl.exe -d Archlinux genie -s

You can also verify that your WSL distro is down with this:

wsl.exe --shutdown

then fire up a new WSL distro!

wslv2-systemd.png

Systemd Service

you can enable & start the docker service unit:

$ sudo systemctl enable docker

$ sudo systemctl start  docker

so next time it will auto-start:

$ docker images
REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
alpine       latest    6dbb9cc54074   2 weeks ago   5.61MB

systemd

$ ps -e fuwwww | grep -i systemd
root           1  0.1  0.2  21096 10748 ?        Ss   14:27   0:00 systemd
root          29  0.0  0.2  30472 11168 ?        Ss   14:27   0:00 /usr/lib/systemd/systemd-journald
root          38  0.0  0.1  25672  7684 ?        Ss   14:27   0:00 /usr/lib/systemd/systemd-udevd
dbus          63  0.0  0.1  12052  5736 ?        Ss   14:27   0:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
root          65  0.0  0.1  14600  7224 ?        Ss   14:27   0:00 /usr/lib/systemd/systemd-logind
root         211  0.0  0.1  14176  6872 ?        Ss   14:27   0:00 /usr/lib/systemd/systemd-machined
ebal         312  0.0  0.0   3164   808 pts/1    S+   14:30   0:00  _ grep -i systemd
ebal         215  0.0  0.2  16036  8956 ?        Ss   14:27   0:00 /usr/lib/systemd/systemd --user

$ systemctl status docker
* docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
     Active: active (running) since Sat 2021-05-01 14:27:12 EEST; 3min 22s ago
TriggeredBy: * docker.socket
       Docs: https://docs.docker.com
   Main PID: 64 (dockerd)
      Tasks: 17 (limit: 4715)
     Memory: 167.9M
     CGroup: /system.slice/docker.service
             |-64 /usr/bin/dockerd -H fd://
             `-80 containerd --config /var/run/docker/containerd/containerd.toml --log-level info

May 01 14:27:12 myworklaptop-wsl systemd[1]: Started Docker Application Container Engine.
May 01 14:27:12 myworklaptop-wsl dockerd[64]: time="2021-05-01T14:27:12.303580300+03:00" level=info msg="AP>

that’s it !

Sunday, 18 April 2021

KDE Gear 21.04 is coming this week! But what is KDE Gear?

Let's dig a bit in our history.


In the "good old days" (TM) there was KDE, life was simple, everything we did was KDE and everything we released was KDE [*]

 

Then at some point we realized we wanted to release some stuff with a different frequency, so KDE Extragear[**] was born.


Then we said "KDE is the community" so we couldn't release KDE anymore, thus we said "ok, the thing we release with all the stuff that releases at the same time will be KDE Software Compilation", which i think we all agree it was not an awesome name, but "names are hard" (TM) (this whole blog is about that :D)


We went on like that for a while, but then we realized we wanted different schedules for the things that were inside the KDE Software Compilation. 

 

We thought it made sense for the core libraries to be released monthly, and the Plasma team also wanted to have its own release schedule (it has been tweaked over the years).


That meant that "KDE Frameworks" and "Plasma" (of KDE Plasma) names as things we release were born (Plasma was already a name used before, so that one was easy). The problem was that we had to find a name for "KDE Software Compilation" minus "KDE Frameworks" minus "Plasma".


One option would have been to keep calling it "KDE Software Compilation", but we thought it would be confusing to keep the name but make it contain lots of less things so we used the un-imaginative name (which as far as i remember i proposed) "KDE Applications"


And we released "KDE Applications" for a long time, but you know what, "KDE Applications" is not a good name either. First reason "KDE Applications" was not only applications, it also contained libraries, but that's ok, no one would have really cared if that was the only problem. The important issue was that if you call something "KDE Applications" you make it seem like these are all the applications KDE releases, but no, that's not the truth, remember our old friend KDE Extragear independently released applications?


So we sat down at Akademy 2019 in Milan and tried to find a better name. And we couldn't. So we all said let's go with the "there's no spoon" route: you don't need a name if you don't have a thing. We basically de-branded the whole thing. The logic was that after all it's just a bunch of applications that are released at the same time because it makes things super easy from a release engineering point of view, but Okular doesn't "have anything to do" with Dolphin nor with krdc nor with kpat, they just happen to be released at the same time.


So we kept the release engineering side under the boring and non-capitalized name of "release service" and we patted ourselves on the back for having solved a decade long problem.


Narrator voice: "they didn't solve the problem"


After a few releases it became clear that our promotion people were having some trouble writing announcements, because "Dolphin, Okular, Krdc, kpat .. 100 app names.. is released" doesn't really sell very well.


Since promotion is important we sat down again and did some more thinking: ok, we need a name, but it can't be a name that is "too specific" about applications because otherwise it will have the problem of "KDE Applications". So it had to be a bit generic. At some point, i jokingly suggested "KDE Gear", tied with our logo and with our old friend that we have almost killed by now, "KDE Extragear".


Narrator voice: "they did not realize it was a joke"

 

And people liked "KDE Gear", so yeah, this week we're releasing KDE Gear 21.04 whose heritage can be traced to "release service 21.04", "KDE Applications 21.04", "KDE Software Compilation 21.04" and "KDE 21.04" [***]


P.S: Lots of these decisions happened a long time ago, so my recollection, especially my involvement in the suggestion of the names, may not be as accurate as i think it is.


[*] May not be an accurate depiction, I wasn't around in the "good old days"

[**] A term we've been killing over the last years, because the term "extra" implied to some degree that these were not important things, and they are totally important; the only difference is that they are released on their own, so personally i try to use something like "independently released"

[***] it'd be great if you could stop calling the things we release "KDE", we haven't used that name for releases of code for more than a decade now

Linux bluetooth HeadSet Audio HSP/HFP WH-1000XM3

I am an archlinux user with Sony WH-1000XM3 bluetooth noise-cancellation headphones. I am also using pulseaudio, and it took me a while to switch the bluetooth headphones to the HSP/HFP profile so the microphone can work too. Switching the bluetooth profile of your headphones to HeadSet Audio works, but it is only monophonic audio without noise-cancellation, and I had to switch to pipewire as well. But at least now the microphone works!
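
For reference, the profile switch can also be done from the command line. This is only a rough sketch: it assumes pulseaudio (or pipewire-pulse) is running, the card name below is a placeholder for the real bluez card, and pipewire may spell the profile names with hyphens instead of underscores.

pactl list cards short                                                   # find the bluez_card.* name of the headphones
pactl set-card-profile bluez_card.XX_XX_XX_XX_XX_XX headset_head_unit   # HSP/HFP: microphone works, mono audio
pactl set-card-profile bluez_card.XX_XX_XX_XX_XX_XX a2dp_sink           # back to A2DP: stereo, no microphone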

I was wondering how distros that by default have already switched to pipewire deal with this situation. So I started a fedora 34 (beta) edition and attached both my bluetooth adapter TP-LINK UB400 v1 and my web camera Logitech HD Webcam C270.

The test should be to open a jitsi meet and a zoom test meeting and verify that my headphones can work without me doing any strange CLI magic.

tldr; works out of the box !

lsusb

[root@fedora ~]# lsusb 

Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 004: ID 046d:0825 Logitech, Inc. Webcam C270
Bus 001 Device 003: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)
Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd QEMU USB Tablet
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

as you can see, both usb devices have properly attached to fedora 34

kernel

we need Linux kernel > 5.10.x to have proper support

[root@fedora ~]# uname -a
Linux fedora 5.11.10-300.fc34.x86_64 #1 SMP Thu Mar 25 14:03:32 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux 

pipewire

and of course pipewire installed


[root@fedora ~]# rpm -qa | grep -Ei 'blue|pipe|pulse' 

libpipeline-1.5.3-2.fc34.x86_64
pulseaudio-libs-14.2-3.fc34.x86_64
pulseaudio-libs-glib2-14.2-3.fc34.x86_64
pipewire0.2-libs-0.2.7-5.fc34.x86_64
bluez-libs-5.56-4.fc34.x86_64
pipewire-libs-0.3.24-4.fc34.x86_64
pipewire-0.3.24-4.fc34.x86_64
bluez-5.56-4.fc34.x86_64
bluez-obexd-5.56-4.fc34.x86_64
pipewire-gstreamer-0.3.24-4.fc34.x86_64
pipewire-pulseaudio-0.3.24-4.fc34.x86_64
gnome-bluetooth-libs-3.34.5-1.fc34.x86_64
gnome-bluetooth-3.34.5-1.fc34.x86_64
bluez-cups-5.56-4.fc34.x86_64
NetworkManager-bluetooth-1.30.2-1.fc34.x86_64
pipewire-alsa-0.3.24-4.fc34.x86_64
pipewire-jack-audio-connection-kit-0.3.24-4.fc34.x86_64
pipewire-utils-0.3.24-4.fc34.x86_64

screenshots

01f3420210418

02f3420210418

03f3420210418

Bluetooth Profiles

04f3420210418

Online Meetings

05f3420210418

06f3420210418

Monday, 12 April 2021

Submit your talks now for Akademy 2021!

As you can see in https://akademy.kde.org/2021/cfp the Call for Participation for Akademy 2021 (that will take place online from Friday the 18th to Friday the 25th of June) is already open.

 

You have until Sunday the 2nd of May 2021 23:59 UTC to submit your proposals, but you will make our (the talks committee's) life much easier if you start sending the proposals *now* and don't send them all at the last minute in 2 weeks ;)


I promise I'll buy you a $preferred_beverage$ at next year's Akademy (which we're hoping will happen live) if you send a talk before the end of this week (and you send me a mail about it)


Saturday, 03 April 2021

Why Signature Verification in OpenPGP is hard

An Enigma cipher machine which is less secure but also easier to understand than OpenPGP.
Photo by Mauro Sbicego on Unsplash.

When I first thought about signature verification in OpenPGP I thought “well, it cannot be that hard, right?”. After all, all you have to do is check if a signature was made by the given key and if that signature checks out (is cryptographically correct). Oh boy, was I wrong.

The first major realization that struck me was that there are more than just two factors involved in the signature verification process. While the first two are pretty obvious – the signature itself and the key that created it – another major factor is played by the point in time at which a signature is being verified. OpenPGP keys are changing over the course of their lifespan. Subkeys may expire or be revoked for different reasons. A subkey might be eligible to create valid signatures until its binding signature expires. From that point in time all new signatures created with that key must be considered invalid. Keys can be rebound, so an expired key might become valid again at some point, which would also make (new) signatures created with it valid once again.

But what does it mean to rebind a key? How are expiration dates set on keys, and what role does the reason of a revocation play?

The answer to the first two questions is – Signatures!

OpenPGP keys consist of a set of keys and subkeys, user-id packets (which contain email addresses and names and so forth) and lastly a bunch of signatures which tie all this information together. The root of this bunch is the primary key – a key with the ability to create signatures, or rather certifications. Signatures and certifications are basically the same, they just have a different semantic meaning, which is quite an important detail. More to that later.

The main use of the primary key is to bind additional data to it. First and foremost user-id packets, which can (along with the primary key's key-ID) be used to identify the key. Alice might for example have a user-id packet on her key which contains her name and email address. Keys can have more than one user-id, so Alice might also have an additional user-id packet with her work-email or her chat address added to the key.

But simply adding the packet to the key is not enough. An attacker might simply take her key, change the email address and hand the modified key to Bob, right? Wrong. Signatures Certifications to the rescue!

   Primary-Key
      [Revocation Self Signature]
      [Direct Key Signature...]
      [User ID [Signature ...] ...]
      [User Attribute [Signature ...] ...]
      [[Subkey [Binding-Signature-Revocation]
              Subkey-Binding-Signature ...] ...]

Information is not just loosely appended to the key. Instead it is cryptographically bound to it by the help of a certification. Certifications are signatures which can only be created by a key which is allowed to create certifications. If you take a look at any OpenPGP (v4) key, you will see that most likely every single primary key will be able to create certifications. So basically the primary key is used to certify that a piece of information belongs to the key.

The same goes for subkeys. They are also bound to the primary key with the help of a certification. Here, the certification has a special type and is called “subkey binding signature”. The concept though is mostly the same. The primary key certifies that a subkey belongs to it by help of a signature.

Now it slowly becomes complicated. As you can see, up to this point the binding relations have been uni-directional. The primary key claims to be the dominant part of the relation. This might however introduce the risk of an attacker using a primary key to claim ownership of a subkey which was used to make a signature over some data. It would then appear as if the attacker is also the owner of that signature. That’s the reason why a signing-capable subkey must somehow prove that it belongs to its primary key. Again, signatures to the rescue! A subkey binding signature that binds a signing capable subkey MUST contain a primary key binding signature made by the subkey over the primary key. Now the relationship is bidirectional and attacks such as the one mentioned above are mitigated.

So, about certifications – when is a key allowed to create certifications? How can we specify what capabilities a key has?

The answer is Signature… Subpackets!
Those are some pieces of information that are added into a signature that give it more semantic meaning aside from the signature type. Examples for signature subpackets are key flags, signature/key creation/expiration times, preferred algorithms and many more. Those subpackets can reside in two areas of the signature. The unhashed area is not covered by the signature itself, so here packets can be added/removed without breaking the signature. The hashed area on the other hand gets its name from the fact that subpackets placed here are taken into consideration when the signature is being calculated. They cannot be modified without invalidating the signature.

So the unhashed area shall only contain advisory information or subpackets which are “self-authenticating” (meaning information which is validated as a side-effect of validating the signature). An example of a self-authenticating subpacket would be the issuers-id packet, which contains the key-id of the key that created the signature. This piece of information can be verified by checking if the denominated key really created the signature. There is no need to cover this information by the signature itself.

Another really important subpacket type is the key flags packet. It contains a bit-mask that declares what purpose a key can be used for, or rather what purpose the key is ALLOWED to be used for. Such purposes are encryption of data at rest, encryption of data in transit, signing data, certifying data, authentication. Additionally there are key flags indicating that a key has been split by a key-splitting mechanism or that a key is being shared by more than one entity.

Each signature MUST contain a signature creation time subpacket, which states at which date and time a signature was created. Optionally a signature might contain a signature expiration time subpacket which denotes at which point in time a signature expires and becomes invalid. So far so good.
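
If you want to see such subpackets with your own eyes, GnuPG can dump them. A minimal sketch, assuming gpg is installed and sig.asc contains some signature you have lying around:

gpg --list-packets sig.asc
# look for lines such as
#   "hashed subpkt 2"  (signature creation time)
#   "hashed subpkt 3"  (signature expiration time)
#   "hashed subpkt 9"  (key expiration time, on self-signatures)
#   "hashed subpkt 27" (key flags)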

Now, those subpackets can also be placed on certifications, eg. subkey binding signatures. If a subkey binding signature contains a key expiration time subpacket, this indicates that the subkey expires at a certain point in time. An expired subkey must not be used anymore and signatures created by it after it has expired must be considered invalid. It gets even more complicated if you consider that a subkey binding signature might contain a key expiration time subpacket along with a signature expiration time subpacket. That could lead to funny situations. For example a subkey might have two subkey binding signatures. One simply binds the key indefinitely, while the second one has an expiration time. Here the latest binding signature takes precedence, meaning the subkey might expire at, let's say, t+3, while at t+5 the signature itself expires, meaning that the key regains validity, as now the former binding signature is “active” again.

Not yet complicated enough? Consider this: Whether or not a key is eligible to create signatures is denoted by the key flags subpacket which again is placed in a signature. So when verifying a signature, you have to consult self-signatures on the signing key to see if it carries the sign-data key flag. Furthermore you have to validate that self-signature and check if it was created by a key carrying the certify-other key flag. Now again you have to check if that signature was created by a key carrying the certify-other key flag (given it is not the same (primary) key). Whew.

Lastly there are key revocations. If a key gets lost or stolen or is simply retired, it can be revoked. Now it depends on the revocation reason, what impact the revocation has on past and/or future signatures. If the key was revoked using a “soft” revocation reason (key has not been compromised), the revocation is mostly handled as if it were an expiration. Past signatures are still good, but the key must no longer be used anymore. If it however has a “hard” revocation reason (or no reason at all) this could mean that the key has been lost or compromised. This means that any signature (future and past) that was made by this key has now to be considered invalid, since an attacker might have forged it.

Now, a revocation can only be created by a certification capable key, so in order to check if a revocation is valid, we have to check if the revoking key is allowed to revoke this specific subkey. Permitted revocation keys are either the primary key, or an external key denoted in a revocation key subpacket on a self-signature. Can you see why this introduces complexity?

Revocation signatures have to be handled differently from other signatures, since if the primary key is revoked, is it eligible to create revocation signatures in the first place? What if an external revocation key has been revoked and is now used to revoke another key?

I believe the correct way to tackle signature validity is to first evaluate the key (primary and subkeys) at signature creation time. Evaluating the key at a given point in time tn means we reject all signatures made after tn (except hard revocations) as those are not yet valid. Furthermore we reject all signatures that are expired at tn as those are no longer valid. We also remove all signatures that are superseded by another, more recent signature. We do this for all signatures on all keys in the “correct” order. What we are left with is a canonicalized key ring, which we can now use to verify the signature in question.

So let's try to summarize every step that we have to take in order to verify a signature's validity.

  • First we have to check if the signature contains a creation time subpacket. If it does not, we can already reject it.
  • Next we check if the signature is expired by now. If it is, we can again reject.
  • Now we have to evaluate the key ring that contains the signature's signing key at the time at which the signature was created.
    • Is the signing key properly bound to the key ring?
      • Was it created before the signature?
      • Was it bound to the key ring before the signature was made?
      • Is the binding signature not expired?
      • Is the binding signature not revoked?
      • Is the subkey binding signature carrying a valid primary key binding signature?
      • Are the binding signatures using acceptable algorithms?
    • Is the subkey itself not expired?
    • Is the primary key not expired?
    • Is the primary key not revoked?
    • Is the subkey not revoked?
  • Is the signing key capable of creating signatures?
  • Was the signature created with acceptable algorithms? Reject weak algorithms like SHA-1.
  • Is the signature correct?

Lastly of course, the user has to decide if the signing key is trustworthy or not, but luckily we can leave this decision up to the user.

As you can see, this is not at all trivial and I’m sure I missed some steps and/or odd edge cases. What makes implementing this even harder is that the specification is deliberately sparse in places. What subpackets are allowed to be placed in the unhashed area? What MUST be placed in the hashed area instead? Furthermore the specification contains errors which make it even harder to get a good picture of what is allowed and what isn’t. I believe what OpenPGP needs is a document that acts as a guide to implementors. That guide needs to specify where and where not certain subpackets are to be expected, how a certain piece of semantic meaning can be represented and how signature verification is to be conducted. It is not desirable that each and every implementor has to digest the whole specification multiple times in order to understand what steps are necessary to verify a signature or to select a valid key for signature creation.

Did you implement signature verification in OpenPGP? What are your thoughts on this? Did you go through the same struggles that I do?

Lastly I want to give a shout-out to the devs of Sequoia-PGP, which have a pretty awesome test suite that covers lots and lots of edge-cases and interoperability concerns of implementations. I definitely recommend everyone who needs to work with OpenPGP to throw their implementation against the suite to see if there are any shortcomings and problems with it.

Thursday, 01 April 2021

Microphone settings - how to deactivate webcam microphone

Like many people out there, in the last months I have participated in more remote video conferences than most likely in my whole life before. In those months I improved the audio and video hardware, the lighting, learned new shortcuts for those tools, and in general tried to optimise a few parts.

One of the problems I encountered on this journey was the selection of the correct microphone. The webcam, which luckily I got already before the pandemic, has an integrated microphone. The sound quality is ok. But compared to the microphone in the new headset the sound quality is awful. The problem was that whenever I plugged in the webcam, its microphone would be selected as the new default. So I had to manually change the sound settings every time I plugged and unplugged the webcam.

I asked about that problem on mastodon (automatically forwarded to this proprietary microblogging service). There were several suggestions on how to fix that. In the end I decided to use PulseAudio Volume Control, as I thought that is also a solution other people around me can easily implement. There, under the Configuration tab, you can switch the Profile for the webcam to Off, as seen in the screenshot.

Screenshot of PulseAudio Volume Control's Configuration tab

This way I can plug in and unplug the webcam without the need to switch the audio microphone settings every time. That saves quite some time, given the high number of video calls every day.
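
The same thing can also be done from the command line, if you prefer scripting it. This is only a sketch: it assumes pulseaudio (or pipewire-pulse) is running, and the card name below is hypothetical and needs to be looked up first.

pactl list cards short                          # find the webcam's card name
pactl set-card-profile alsa_card.usb-XXXX off   # turn its profile off, so the microphone disappears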

Thanks a lot to all who provided suggestions on how to fix this problem, and to Lennart Poettering for writing PulseAudio Volume Control and publishing it under a Free Software license. Hopefully, in the future, such settings for the default sound and video hardware will be included directly in the general settings of the desktop environment as well.

Wednesday, 31 March 2021

MotionPhoto / MicroVideo File Formats on Pixel Phones

  • Losca
  • 11:24, Wednesday, 31 March 2021

Google Pixel phones support what they call ”Motion Photo” which is essentially a photo with a short video clip attached to it. They are quite nice since they bring the moment alive, especially as the capturing of the video starts a small moment before the shutter button is pressed. For most viewing programs they simply show as static JPEG photos, but there is more to the files.

I’d really love proper Shotwell support for these file formats, so I posted a longish explanation with many of the details in this blog post to a ticket there too. Examples of the newer format are linked there too.

Info posted to Shotwell ticket

There are actually two different formats, an old one that is already obsolete, and a newer current format. The older ones are those that your Pixel phone recorded as ”MVIMG_[datetime].jpg", and they have the following meta-data:

Xmp.GCamera.MicroVideo                          XmpText     1  1
Xmp.GCamera.MicroVideoVersion                   XmpText     1  1
Xmp.GCamera.MicroVideoOffset                    XmpText     7  4022143
Xmp.GCamera.MicroVideoPresentationTimestampUs   XmpText     7  1331607

The offset is actually from the end of the file, so one needs to calculate accordingly. But it is exact otherwise, so one can simply extract the file with that meta-data information:

#!/bin/bash
#
# Extracts the microvideo from a MVIMG_*.jpg file

# The offset is from the ending of the file, so calculate accordingly
offset=$(exiv2 -p X "$1" | grep MicroVideoOffset | sed 's/.*\"\(.*\)"/\1/')
filesize=$(du --apparent-size --block=1 "$1" | sed 's/^\([0-9]*\).*/\1/')
extractposition=$(expr $filesize - $offset)
echo offset: $offset
echo filesize: $filesize
echo extractposition=$extractposition
dd if="$1" skip=1 bs=$extractposition of="$(basename -s .jpg $1).mp4"

The newer format is recorded in filenames called ”PXL_[datetime].MP.jpg”, and they have a _lot_ of additional metadata:

Xmp.GCamera.MotionPhoto                                    XmpText    1  1
Xmp.GCamera.MotionPhotoVersion                             XmpText    1  1
Xmp.GCamera.MotionPhotoPresentationTimestampUs             XmpText    6  233320
Xmp.xmpNote.HasExtendedXMP                                 XmpText   32  E1F7505D2DD64EA6948D2047449F0FFA
Xmp.Container.Directory                                    XmpText    0  type="Seq"
Xmp.Container.Directory[1]                                 XmpText    0  type="Struct"
Xmp.Container.Directory[1]/Container:Item                  XmpText    0  type="Struct"
Xmp.Container.Directory[1]/Container:Item/Item:Mime        XmpText   10  image/jpeg
Xmp.Container.Directory[1]/Container:Item/Item:Semantic    XmpText    7  Primary
Xmp.Container.Directory[1]/Container:Item/Item:Length      XmpText    1  0
Xmp.Container.Directory[1]/Container:Item/Item:Padding     XmpText    1  0
Xmp.Container.Directory[2]                                 XmpText    0  type="Struct"
Xmp.Container.Directory[2]/Container:Item                  XmpText    0  type="Struct"
Xmp.Container.Directory[2]/Container:Item/Item:Mime        XmpText    9  video/mp4
Xmp.Container.Directory[2]/Container:Item/Item:Semantic    XmpText   11  MotionPhoto
Xmp.Container.Directory[2]/Container:Item/Item:Length      XmpText    7  1679555
Xmp.Container.Directory[2]/Container:Item/Item:Padding     XmpText    1  0

Sounds like fun and lots of information. However I didn’t see why the “length” in the first item is 0, and I didn’t see how to use the latter Length info. But I can use the mp4 headers to extract it:

#!/bin/bash
#
# Extracts the motion part of a MotionPhoto file PXL_*.MP.jpg

extractposition=$(grep --binary --byte-offset --only-matching --text \
-P "\x00\x00\x00\x18\x66\x74\x79\x70\x6d\x70\x34\x32" $1 | sed 's/^\([0-9]*\).*/\1/')

dd if="$1" skip=1 bs=$extractposition of="$(basename -s .jpg $1).mp4"

UPDATE: I wrote most of this blog post earlier. When now actually getting to publish it a week later, I see the obvious, i.e. the ”Length” is again simply the offset from the end of the file, so one could do the same less brute force approach as for MVIMG. I’ll leave the above as is however for the ❤️ of binary grepping.
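
For completeness, the less brute-force variant could look roughly like this. It is only a sketch: it assumes the raw XMP packet printed by exiv2 -p X serializes the video length as an Item:Length="..." attribute, which may need adjusting for your files.

#!/bin/bash
#
# Hypothetical sketch: extract the video from a PXL_*.MP.jpg using the
# Item:Length value as an offset from the end of the file

# take the largest Item:Length value (the 0 belongs to the primary image)
offset=$(exiv2 -p X "$1" | grep -o 'Item:Length="[0-9]*"' | grep -o '[0-9]*' | sort -n | tail -n 1)
filesize=$(du --apparent-size --block=1 "$1" | sed 's/^\([0-9]*\).*/\1/')
extractposition=$(expr $filesize - $offset)
dd if="$1" skip=1 bs=$extractposition of="$(basename -s .jpg "$1").mp4"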

(cross-posted to my other blog)

What's in a name?

Often when people conceptualise transgender people, there is a misery inherent to our identity. There is the everyday discrimination, gender dysphoria, the arduous road of transition, and the sort of identity crisis that occurs when we let go of one name and choose another. And while being transgender certainly can be all of that, it’s a pity that the joyous aspects are often forgotten, or disappear behind all the negative clouds that more desperately require society’s attention. For this International Transgender Day of Visibility, I want to talk about those happy things. I want to talk about names, introspection, and the mutability of people.

Sometimes I mention to people that I have chosen my own name. In response, people often look at me as though I had just uttered an impossibility. What? Why would you? How would you do that? Can you even do that? I find this a little funny because it’s the most normal thing for me, but a never-even-thought-about-that for the vast majority of people. Such an unlikely thing, in fact, that some people consequently ask me for my real name. A self-selected name cannot be a real name, right?

But I want to make a case for changing one’s name, even if you’re not trans.

Scrooge

Imagine Scrooge from A Christmas Carol. In your mind’s eye, you will likely picture him as he was introduced to you: “a squeezing, wrenching, grasping, scraping, clutching, covetous, old sinner! Hard and sharp as flint, from which no steel had ever struck out generous fire; secret, and self-contained, and solitary as an oyster.”

But that Scrooge—the Scrooge which everyone knows—is the man from the beginning of the novel. By the end of the novel, Scrooge has changed into a compassionate man who wishes joy to humanity and donates his money to the poor. And even though everyone knows the story, almost no one thinks “good” when they hear the name Scrooge.

I don’t really know why people can’t let go of the image of Scrooge. Maybe his awfulness was so strong that one can’t just forget about it. Maybe we haven’t spent enough time with the new Scrooge yet. Maybe his name simply sounds essentially hideous, and whatever he did to deserve his reputation doesn’t even matter.

Or maybe his name has become akin to a meme for stingy people, similar to today’s “Chad” for handsome, popular, probably not-so-smart men, or “Karen” for vile, selfish, bourgeois women (although that meme quickly devolved into a sexist term meaning “any woman I don’t like”, but whatever).

All names are memes

Ultimately, all names are memes. They evoke diverse feelings from their related clichés. The clichés can relate to country, region, language, age, gender, class, or any other arbitrary thing, and any combination thereof.

The fact that names are memes likely isn’t a great thing—all the above assumptions can be harmful in their own ways—but it’s also probably unavoidable. If most people with the name Thijs are Dutch men, then it follows that people are going to take note of this. And sometimes, someone becomes famous enough that they become the prime instance of their name: it’s difficult to imagine a Napoléon who isn’t the 19th-century leader of France.

More interestingly, though, we ourselves memeify our names through existence. If you’re the only person with a certain name inside of a community, then all members of that community will subconsciously base their associations with that name on you. You effectively become the prime instance of that name within your community.

Memes don’t change; people do

A trait of memes—certainly popular ones—is that they are incredibly well-polished. Like a Platonic Form, a meme embodies the essence of something very specific at the intersection of all of its instances. Because of this, memes very rarely change in their meanings. So what to do when an instance of the meme changes?

Although the journeys of trans people all wildly vary, I’ve yet to meet a trans person whose journey did not include an almost unbearable amount of introspection. A deep investigation of the self, not just as who you are, but who you want to be. And inevitably, you end up asking yourself this question: “If you could be anybody at all, who would you want to be?”

Invariably the answer is something along the lines of “myself, but different”. For trans people, this involves a change in gender presentation, and society mandates that people who present a certain gender should have a name that reflects that. So we choose our own name. With any luck, we choose a cool name.

But what if we extended that line of thinking? Going through a period of introspection and coming out of it a different person is not something that is entirely unique to trans people. We ultimately only get to occupy a single body in this life, so we might as well make that person resemble the kind of person we would really fancy being. So why not change your name? Get rid of the old meme, and start a new one.

Now, understandably, we cannot change into anybody at all. There are limits to our mutability, although those limits are often broader than our imagination. Furthermore, we may never become the perfect person we envision when we close our eyes. And that’s okay, but we can get a little closer. And for me, the awareness of the mutability of names excites the awareness of the mutability—and consequently the potential for improvement—of people.

Happy International Transgender Day of Visibility.

Saturday, 13 March 2021

KDE Gear 21.04 releases branches created

 Make sure you commit anything you want to end up in the 21.04 releases to those branches

We're already past the dependency freeze.

The Feature Freeze and Beta is this Thursday, 18 March.

More interesting dates
   April  8: 21.04 RC (21.03.90) Tagging and Release
   April 15: 21.04 Tagging
   April 22: 21.04 Release

https://community.kde.org/Schedules/KDE_Gear_21.04_Schedule

Okular: Should continuous view be an okular setting or a document setting?

In Okular:

Some settings are Okular-wide: if you change them, they will be changed in all future Okular instances. An easy example is changing the shortcut for saving from Ctrl+S to Ctrl+Shift+E.

Some other settings are document-specific, for example zoom: if you change the zoom of a document, it will only be restored when opening the same document again, not when you open a different one. There's also a "default zoom value for documents you've never opened before" in the settings.


Some other settings, like "Continuous View", are a bit of a mess and are both. "Continuous View" wants to be a global setting (i.e. so that if you hate continuous view you always get a non-continuous view), but it is also restored to the state it had when you closed the document you're just opening.


That's understandably a bit confusing for users :D


My suggestion for continuous view would be to make it work like Zoom: be purely document-specific, but also have a default option in the settings dialog for people that hate continuous view.

I'm guessing this should cover all our bases?

Opinions? Anything I may have missed?

Thursday, 04 March 2021

Is okular-devel mailing list the correct way to reach the Okular developers? If not what do we use?

After my recent failure to gain traction in getting people to join a potential Okular Virtual Sprint, I wondered: is the okular-devel mailing list representative of the current Okular contributors?

Looking at the sheer number of subscribers, one would think so. There are currently 128 people subscribed to the okular-devel mailing list, and we definitely don't have that many contributors, so it would seem the mailing list is a good place to reach all the contributors. But let's look at the actual numbers.

The Okular git repo has had 46 people contributing code[*] in the last year.


Only 17% of those are subscribed to the okular-devel mailing list.


If we count commits instead of committers, the number rises to 65%, but that's just because I account for more than 50% of the commits; if you remove me from the equation, the number drops to 28%.


If we don't count people that only committed once (thinking that they may not be really interested in the project), the numbers are still at only 25% of committers and 30% of commits (ignoring me again) subscribed to the mailing list.


So it would seem that the answer is leaning towards "no, I can't use okular-devel to contact the Okular developers".

But if not the mailing list, what am I supposed to use? I don't see any other method that would be better.


Suggestions welcome!



[*] Yes, I'm limiting contributors to git committers at this point; it's the only thing I can easily count. I understand there are more contributions than code contributions.

Saturday, 20 February 2021

How to build your own dyndns with PowerDNS

I upgraded my home internet connection and as a result I had to give up my ~15y Static IP. Having an ephemeral Dynamic IP means I need to use a dynamic dns service to access my home PC. Although the ISP’s CPE (router) supports a few public dynamic dns services, I chose to create a simple solution on my own self-hosted DNS infra.

There are a couple of ways to do that. PowerDNS supports Dynamic Updates, but I do not want to open PowerDNS to the internet for this kind of operation. I just want to use cron with a simple curl over https.

PowerDNS WebAPI

To enable and use the built-in webserver and HTTP API, we need to update our configuration:

/etc/pdns/pdns.conf

api-key=0123456789ABCDEF
api=yes

and restart the PowerDNS authoritative server.

verify it

ss -tnl 'sport = :8081'
State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
LISTEN      0       10      127.0.0.1:8081             *:*

WebServer API in PHP

Next, let's build our API in PHP.

Basic Auth

Using https means that the transport layer is encrypted, so we only need to create a basic auth mechanism.

<?php
  if ( !isset($_SERVER["PHP_AUTH_USER"]) ) {
      header("WWW-Authenticate: Basic realm='My Realm'");
      header("HTTP/1.0 401 Unauthorized");
      echo "Restricted area: Only Authorized Personnel Are Allowed to Enter This Area";
      exit;
  } else {
    // code goes here
  }
?>

By sending Basic Auth headers, the $_SERVER PHP array will contain two extra variables:

$_SERVER["PHP_AUTH_USER"]
$_SERVER["PHP_AUTH_PW"]

We do not need to set up an external IDM/LDAP or any other user management system just for this use case (single-user access).

and we can use something like:

<?php
  if (($_SERVER["PHP_AUTH_USER"] == "username") && ($_SERVER["PHP_AUTH_PW"] == "very_secret_password")){
    // code goes here
  }
?>

RRSet Object

We need to create the RRSet Object

Here is a simple example:

<?php
  $comments = array(
  );

  $record = array(
      array(
          "disabled"  => False,
          "content"   => $_SERVER["REMOTE_ADDR"]
      )
  );

  $rrsets = array(
      array(
          "name"          => "dyndns.example.org.",
          "type"          => "A",
          "ttl"           => 60,
          "changetype"    => "REPLACE",
          "records"       => $record,
          "comments"      => $comments
      )
  );

  $data = array (
      "rrsets" => $rrsets
  );

?>

Running this data set through json_encode() should return something like this:

{
  "rrsets": [
    {
      "changetype": "REPLACE",
      "comments": [],
      "name": "dyndns.example.org.",
      "records": [
        {
          "content": "1.2.3.4",
          "disabled": false
        }
      ],
      "ttl": 60,
      "type": "A"
    }
  ]
}

Be sure to verify that records, comments and rrsets are also arrays!
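
To double-check that quickly, a small throwaway snippet like the one below can help. It is only a sketch and assumes the $data array from the example above:

<?php
  // assumes the $data array (with rrsets, records, comments) from the example above
  var_dump(is_array($data["rrsets"]));                 // should print bool(true)
  var_dump(is_array($data["rrsets"][0]["records"]));   // should print bool(true)
  var_dump(is_array($data["rrsets"][0]["comments"]));  // should print bool(true)

  // pretty-print the payload to compare it with the expected JSON above
  echo json_encode($data, JSON_PRETTY_PRINT), PHP_EOL;
?>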

Stream Context

The next thing is to create our stream context:

$API_TOKEN = "0123456789ABCDEF";
$URL = "http://127.0.0.1:8081/api/v1/servers/localhost/zones/example.org";

$stream_options = array(
    "http" => array(
        "method"    => "PATCH",
        "header"    => "Content-type: application/json \r\n" .
                        "X-API-Key: $API_TOKEN",
        "content"   => json_encode($data),
        "timeout"   => 3
    )
);

$context = stream_context_create($stream_options);

Be aware of the " \r\n" . in the header field; this took me more time than it should have! To put multiple header fields into the http stream options, you need to separate them with a carriage return + line feed (\r\n), because that is how HTTP delimits header lines.

Get Zone details

Before continuing, let's make a small script to verify that we can successfully talk to the PowerDNS HTTP API with PHP.

<?php
  $API_TOKEN = "0123456789ABCDEF";
  $URL = "http://127.0.0.1:8081/api/v1/servers/localhost/zones/example.org";

  $stream_options = array(
      "http" => array(
          "method"    => "GET",
          "header"    => "Content-type: application/json\r\n" .
                          "X-API-Key: $API_TOKEN"
      )
  );

  $context = stream_context_create($stream_options);

  echo file_get_contents($URL, false, $context);
?>

by running this:

php get.php | jq .

we should get the records of our zone in json format.
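
Putting it all together

Here is a rough sketch of how the final dyndns.php could look, stitching together the basic auth check, the RRSet object and the PATCH request from the sections above. It only uses the placeholder username, password, API key, zone and record name from this post, so adjust them to your own setup:

<?php
  // dyndns.php - a minimal sketch combining the snippets above
  // placeholder credentials, API key and names: replace them with your own

  // basic auth
  if ( !isset($_SERVER["PHP_AUTH_USER"]) ) {
      header("WWW-Authenticate: Basic realm='My Realm'");
      header("HTTP/1.0 401 Unauthorized");
      echo "Restricted area: Only Authorized Personnel Are Allowed to Enter This Area";
      exit;
  }

  if ( ($_SERVER["PHP_AUTH_USER"] != "username") || ($_SERVER["PHP_AUTH_PW"] != "very_secret_password") ) {
      header("HTTP/1.0 401 Unauthorized");
      exit;
  }

  // build the RRSet with the caller's current public IP
  $rrsets = array(
      array(
          "name"          => "dyndns.example.org.",
          "type"          => "A",
          "ttl"           => 60,
          "changetype"    => "REPLACE",
          "records"       => array(
              array(
                  "disabled"  => False,
                  "content"   => $_SERVER["REMOTE_ADDR"]
              )
          ),
          "comments"      => array()
      )
  );

  $data = array( "rrsets" => $rrsets );

  // PATCH the zone through the local PowerDNS HTTP API
  $API_TOKEN = "0123456789ABCDEF";
  $URL = "http://127.0.0.1:8081/api/v1/servers/localhost/zones/example.org";

  $stream_options = array(
      "http" => array(
          "method"    => "PATCH",
          "header"    => "Content-type: application/json\r\n" .
                          "X-API-Key: $API_TOKEN",
          "content"   => json_encode($data),
          "timeout"   => 3
      )
  );

  $context = stream_context_create($stream_options);

  // a successful PATCH should come back as an empty 204 response,
  // so false (a failed request) means the update did not happen
  if ( file_get_contents($URL, false, $context) === false ) {
      echo "update failed" . PHP_EOL;
  } else {
      echo "updated dyndns.example.org to " . $_SERVER["REMOTE_ADDR"] . PHP_EOL;
  }
?>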

Cron Entry

You should be able to put the entire codebase together by now (the sketch above shows one way to do it), so let's work on the last component of our self-hosted dynamic dns server: how to update our record via curl.

curl -sL https://username:very_secret_password@example.org/dyndns.php

every minute should do the trick

# dyndns
* * * * * curl -sL https://username:very_secret_password@example.org/dyndns.php

That’s it !

Tag(s): php, curl, dyndns, PowerDNS

Panel - Free Software development for the public administration

On 17 February I participated in a panel discussion about opportunities, hurdles, and incentives for Free Software in the public administration. The panel was part of the event "digital state online", focusing on topics like digital administration, digital society and digital sovereignty. The patron of the event is the German Federal Chancellor's Office. Here is a quick summary of the points I made and some quick thoughts about the discussion.

The "Behördenspiegel" has meanwhile published the recording of the discussion moderated by Benjamin Stiebel (Editor, Behörden Spiegel) with Dr. Hartmut Schubert (State Secretary, Thuringian Ministry of Finance), Reiner Bamberger (developer at the Higher Administrative Court of Rhineland-Palatinate), Dr. Matthias Stürmer (University of Bern), and myself (as always, you can use youtube-dl).

We were asked to make a short 2-minute statement at the beginning, in which I focused on three theses:

  • Due to digitalization, public administration actions have become less transparent than they used to be. We need to balance this.
  • The sharing and re-use of software in public administrations together with small and medium-sized enterprises must be better promoted.
  • Free Software (also called Open Source) in administration is important in order to maintain control over state action and Free Software is an important building block of a technical distribution of powers in a democracy in the 21st century.

Furthermore, the "Public Money? Public Code!" video was played.

In the discussion, we also talked about why there is not yet more Free Software in public administrations. State Secretary Dr. Schubert's point was that he is not aware of legal barriers and that the main problem is in the implementation phase (as is the case for policies most of the time). I still mentioned a few hurdles here:

  • Buying licences is still sometimes easier than buying services, and some public administrations have budgets for licences which cannot be converted to buy services. This should be more flexible, and it was good to hear from State Secretary Dr. Schubert that they changed this in Thuringia.
  • Some proprietary vendors can simply be procured from framework contracts. For Free Software that option is often missing. It could be helpful if other governments follow the example of France and provide a framework contract which makes it easy to procure Free Software solutions - including from smaller and medium-sized companies.

One aspect I noticed in the discussion and in the questions we received in the chat: sometimes Free Software is presented in a way that suggests that, in order to use it, a public administration would have to look at code repositories and click through online forums to find out the specifics of the software. Of course, they could do that, but they could also - as they do for proprietary software, and as is more common if you do not have in-house contributors - simply write in an invitation to tender that a solution must, for example, be data-protection compliant and that you want the rights to use, study, share, and improve the software for every purpose. So as a public administration you do not have to do such research yourself, as you would for your private hobby project; you can - and depending on the level of in-house expertise often really should - involve external professional support to implement a Free Software solution. This can be support from other public administrations or from companies providing Free Software solutions (on purpose I am not writing "Free Software companies" here; for further details see the previous article "There is no Free Software company - But!").

We also still need more statements by government officials, politicians, and other decision makers on why Free Software is important - like, in recent months in Germany, the conservative CDU's party convention resolution on the use of Free Software or the statement by German Chancellor Merkel about Free Software below. This is important so that people in the public administration who want to move to more Free Software can better justify and defend their actions. In order to increase the speed towards more digital sovereignty, decision makers need to reverse the situation: it should change from "nobody gets fired for buying Microsoft" to "nobody gets fired for procuring Free Software".

I also pleaded for a different error culture in the public administration. Experimentation clauses would allow innovative approaches to be tested without every piece of bad feedback immediately suggesting that a project has to be stopped. We should think about how to incentivize the sharing and reuse of Free Software. For example, if public administrations document good solutions and support others in benefiting from those solutions as well, could they get a budget bonus for future projects? Could we provide smaller budgets which can be used more flexibly to experiment with Free Software, e.g. by providing small payments to Free Software offerings even if they do not yet meet all the criteria to use them productively for the tasks envisioned?

One point we also briefly talked about was centralization vs decentralization. We have to be careful that "IT consolidation" efforts do not lead to a situation of more monopolies and more centralization of powers. For Germany, I argued that the bundling of IT services and expertise in some authorities should not go so far that federal states like Thuringia or other levels and parts of government lose their sovereignty and become dependent on a service centre controlled by the federal government or another part of the state. Free Software provides the advantage that, for example, the federal state of Bavaria can offer a software solution for other federal states. But if they abuse their power over this technology, other federal states like Thuringia could decide to host the Free Software solution themselves and contract a company to make modifications, so they can have it their way. The same applies to other mechanisms for the distribution of power, like the separation between a legislature, an executive, and a judiciary. All of them have to make sure their sovereignty is not impacted by technology - neither by companies (as more often discussed) nor by other branches of government. For a democracy in the 21st century, such a technological distribution of power is crucial.

PS: In case you read German, Heise published an article titled "The public administration's dependency on Microsoft & Co is 'gigantic'" (in German) about several of the points from the discussion. And if you do not know it yet, have a look at the expert brochure to modernise public digital infrastructure with public code, currently available in English, German, Czech, and Brazilian Portuguese.
