Free Software, Free Society!
Thoughts of the FSFE Community (English)

Friday, 11 June 2021

I'm back in the boat

In mid-2014 I first heard about Jolla and Sailfish OS and immediately bought a Jolla 1; wrote apps; participated in the IGG campaign for Jolla Tablet; bought the TOHKBD2; applied for (and got) Jolla C.

Sounds like the beginning of a good story doesn’t it?

Well, by the beginning of 2017 I had sold everything (except the tablet, we all know what happened to that one).

So what happened?? I was a happy Sailfish user, but Jolla’s false promises disappointed me.

Yet, despite all that, I still think about Sailfish OS to this day. I think it’s because, despite some proprietary components, the ecosystem around Sailfish OS is ultimately open source. And that’s what interests me. It also got a fresh update which solves some of the problems that were there 5 years ago.

Nowadays, thanks to the community, Sailfish OS can be installed on many devices, even if with fewer components, but I’m looking for that complete experience, so I asked on the forum whether someone was willing to sell their Xperia device with or without the license… and I got one for free. Better still, in exchange for some apps!

To decide which applications to create, I therefore took a look at that ecosystem. I started with the apps I use daily on Android and looked for the Sailfish OS alternative (spoiler: I’m impressed, good job guys!).

I am writing them all here because I am sure it will be useful to someone else:

  • AntennaPod (podcast app) -> PodQast
  • Ariane (gemini protocol browser)
  • AsteroidOS (AsteroidOS sync) -> Starfish
  • Connectbot (ssh client) -> built-in (Terminal)
  • Conversation (xmpp client) -> built-in (Messaging)
  • Davx5 (caldav/cardav) -> built-in (Account)
  • DroidShows (TV series) -> SeriesFinale
  • Element (Matrix client) -> Determinant
  • Endoscope (camera stream)
  • Fedilab (Mastodon client) -> Tooter
  • ForkHub (GitHub client) -> SailHub
  • FOSS Browser -> built-in (Browser)
  • FreeOTP -> SailOTP
  • Glider (hacker news reader) -> SailHN
  • K-9 Mail -> built-in (Mail)
  • KDE Connect (KDE sync) -> SailfishConnect
  • Keepassx (password manager) -> ownKeepass
  • Labcoat (GitLab client)
  • Lemmur (Lemmy client)
  • MasterPassword (password manager) -> MPW
  • MuPDF (PDF reader) -> built-in (Documents)
  • Newpipe (YouTube client) -> YTPlayer
  • Nextcloud (Nextcloud files) -> GhostCloud
  • Notes (Nextcloud notes) -> Nextcloud Notes
  • OCReader (Nextcloud RSS) -> Fuoten
  • OsmAnd~ (Maps) -> PureMaps
  • Printing (built-in) -> SeaPrint
  • QuickDic (dictionary) -> Sidudict
  • RedMoon (screen color temperature) -> Tint Overlay
  • RedReader (Reddit client) -> Quickddit
  • Signal -> Whisperfish
  • Syncthing (files sync) -> there’s the binary, no UI
  • Transdroid (Transmission client) -> Tremotes
  • Vinyl (music player) -> built-in (Mediaplayer)
  • VLC (NFS streaming) -> videoPlayer
  • WireGuard (VPN) -> there’s the binary, no UI
  • YetAnotherCallBlocker (call blocker) -> Phonehook

So, to me it looks like almost everything is there, except a few gaps: a Gemini protocol browser, a GitLab client, a Lemmy client, a camera streaming app, and UIs for Syncthing and WireGuard.

I’ve already started to write a UI for Syncthing; next, maybe I could write the browser for the gemini protocol, or rather the GitLab client?

Please consider a donation if you would like to support me (mention your favourite project!).

Liberapay

Many many thanks to Jörg who sent me his Sony Xperia 10 Plus! I hope I don’t disappoint him!

Wednesday, 09 June 2021

KDE Gear 21.08 releases schedule finalized

It is available at the usual place https://community.kde.org/Schedules/KDE_Gear_21.08_Schedule

 
Dependency freeze is in four weeks (July 8) and Feature Freeze a week after that, so make sure you start finishing your stuff!

Saturday, 05 June 2021

Deployed my blog on Kubernetes

One of the most well-known k8s memes is the image below, which represents the effort and complexity of building a kubernetes cluster just to run a simple blog. So in this article, I will take the opportunity to install a simple blog engine on kubernetes using k3s!

k8s_blog.jpg

terraform - libvirt/qemu - ubuntu

For this demo, I will be working on my local test lab: a libvirt/qemu ubuntu 20.04 virtual machine provisioned via terraform. You can find my terraform notes in my github repo under tf/0.15/libvirt/0.6.3/ubuntu/20.04.

k3s

k3s is a lightweight, fully compliant kubernetes distribution that can run on a single-node virtual machine.

log in to your machine and become root

$ ssh 192.168.122.42 -l ubuntu
$ sudo -i
#

install k3s with one command

curl -sfL https://get.k3s.io | sh -

output should be something like this

[INFO]  Finding release for channel stable

[INFO]  Using v1.21.1+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.21.1+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.21.1+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

Firewall Ports

I would propose opening the network ports below so k3s can run smoothly.

Inbound Rules for K3s Server Nodes

PROTOCOL   PORT        SOURCE                       DESCRIPTION
TCP        6443        K3s agent nodes              Kubernetes API Server
UDP        8472        K3s server and agent nodes   Required only for Flannel VXLAN
TCP        10250       K3s server and agent nodes   Kubelet metrics
TCP        2379-2380   K3s server nodes             Required only for HA with embedded etcd

Typically all outbound traffic is allowed.

ufw allow

ufw allow 6443/tcp
ufw allow 8472/udp
ufw allow 10250/tcp
ufw allow 2379/tcp
ufw allow 2380/tcp

full output

# ufw allow 6443/tcp
Rule added
Rule added (v6)

# ufw allow 8472/udp
Rule added
Rule added (v6)

# ufw allow 10250/tcp
Rule added
Rule added (v6)

# ufw allow 2379/tcp
Rule added
Rule added (v6)

# ufw allow 2380/tcp
Rule added
Rule added (v6)
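If you want to double-check what was added, ufw can also list the active rules; this is just an optional verification step on my part, not something the k3s installer requires.

# ufw status verbose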

k3s Nodes / Pods / Deployments

verify nodes, roles, pods and deployments

# kubectl get nodes -A
NAME         STATUS   ROLES                  AGE   VERSION
ubuntu2004   Ready    control-plane,master   11m   v1.21.1+k3s1

# kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   helm-install-traefik-crd-8rjcf            0/1     Completed   2          13m
kube-system   helm-install-traefik-lwgcj                0/1     Completed   3          13m
kube-system   svclb-traefik-xtrcw                       2/2     Running     0          5m13s
kube-system   coredns-7448499f4d-6vrb7                  1/1     Running     5          13m
kube-system   traefik-97b44b794-q294l                   1/1     Running     0          5m14s
kube-system   local-path-provisioner-5ff76fc89d-pq5wb   1/1     Running     6          13m
kube-system   metrics-server-86cbb8457f-n4gsf           1/1     Running     6          13m

# kubectl get deployments -A
NAMESPACE     NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   coredns                  1/1     1            1           17m
kube-system   traefik                  1/1     1            1           8m50s
kube-system   local-path-provisioner   1/1     1            1           17m
kube-system   metrics-server           1/1     1            1           17m

Helm

The next thing is to install helm. Helm is a package manager for kubernetes; it makes installing applications easy.

curl -sL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

output

Downloading https://get.helm.sh/helm-v3.6.0-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
helm version

version.BuildInfo{Version:"v3.6.0", GitCommit:"7f2df6467771a75f5646b7f12afb408590ed1755", GitTreeState:"clean", GoVersion:"go1.16.3"}

repo added

With helm as a package manager, you can install k8s packages, called charts, and you can find a lot of helm charts at https://artifacthub.io/. You can also add/install a single repo; I will explain this later.

# helm repo add nicholaswilde https://nicholaswilde.github.io/helm-charts/

"nicholaswilde" has been added to your repositories

# helm repo update
Hang tight while we grab the latest from your chart repositories...

Successfully got an update from the "nicholaswilde" chart repository
Update Complete. ⎈Happy Helming!⎈

hub Vs repo

The basic difference between hub and repo is that hub refers to the official Artifact Hub. You can search charts there:

helm search hub blog
URL                                                 CHART VERSION   APP VERSION DESCRIPTION
https://artifacthub.io/packages/helm/nicholaswi...  0.1.2           v1.3        Lightweight self-hosted facebook-styled PHP blog.
https://artifacthub.io/packages/helm/nicholaswi...  0.1.2           v2021.02    An ultra-lightweight blogging engine, written i...
https://artifacthub.io/packages/helm/bitnami/dr...  10.2.23         9.1.10      One of the most versatile open source content m...
https://artifacthub.io/packages/helm/bitnami/ghost  13.0.13         4.6.4       A simple, powerful publishing platform that all...
https://artifacthub.io/packages/helm/bitnami/jo...  10.1.10         3.9.27      PHP content management system (CMS) for publish...
https://artifacthub.io/packages/helm/nicholaswi...  0.1.1           0.1.1       A Self-Hosted, Twitter™-like Decentralised micr...
https://artifacthub.io/packages/helm/nicholaswi...  0.1.1           900b76a     A self-hosted well uh wiki engine or content ma...
https://artifacthub.io/packages/helm/bitnami/wo...  11.0.13         5.7.2       Web publishing platform for building blogs and ...

Using a repo means that you specify chart sources from a single (or multiple) repos, usually outside of the hub:

helm search repo blog
NAME                        CHART VERSION   APP VERSION DESCRIPTION
nicholaswilde/blog          0.1.2           v1.3        Lightweight self-hosted facebook-styled PHP blog.
nicholaswilde/chyrp-lite    0.1.2           v2021.02    An ultra-lightweight blogging engine, written i...
nicholaswilde/twtxt         0.1.1           0.1.1       A Self-Hosted, Twitter™-like Decentralised micr...
nicholaswilde/wiki          0.1.1           900b76a     A self-hosted well uh wiki engine or content ma...

Install a blog engine via helm

before we continue with the installation of our blog engine, we need to set the kube config via a shell variable

kube configuration yaml file

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

The k3s kubectl already knows where to find this yaml configuration file, as kubectl is a link to k3s in our setup

# whereis kubectl
kubectl: /usr/local/bin/kubectl

# ls -l /usr/local/bin/kubectl
lrwxrwxrwx 1 root root 3 Jun  4 23:20 /usr/local/bin/kubectl -> k3s

but helm, which we just installed, does not.
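If you do not want to export the variable in every new shell, two simple options (a small sketch on my part, adjust the paths to your own setup) are to persist the export in root's shell profile or to pass helm its global --kubeconfig flag:

echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> /root/.bashrc

helm --kubeconfig /etc/rancher/k3s/k3s.yaml repo list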

After that we can install our blog engine.

helm install chyrp-lite              \
  --set env.TZ="Europe/Athens"  \
  nicholaswilde/chyrp-lite

output

NAME: chyrp-lite
LAST DEPLOYED: Fri Jun  4 23:46:04 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Get the application URL by running these commands:
  http://chyrp-lite.192.168.1.203.nip.io/

for the time being, ignore nip.io and verify the deployment

# kubectl get deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
chyrp-lite   1/1     1            1           2m15s

# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
chyrp-lite-5c544b455f-d2pzm   1/1     Running   0          2m18s

Port Forwarding

As this is a pod running through k3s inside a virtual machine on our host operating system, we need to expose the port in order to visit the blog and finish the installation.

Let’s find out if there is a service running

kubectl get service chyrp-lite

output

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
chyrp-lite   ClusterIP   10.43.143.250   <none>        80/TCP    11h

okay we have a cluster ip.

you can also verify that our blog engine is running

curl -s 10.43.143.250/install.php | head

<!DOCTYPE html>
<html>
    <head>
        <meta charset="UTF-8">
        <title>Chyrp Lite Installer</title>
        <meta name="viewport" content="width = 800">
        <style type="text/css">
            @font-face {
                font-family: 'Open Sans webfont';
                src: url('./fonts/OpenSans-Regular.woff') format('woff');

and then port forward the pod tcp port to our virtual machine

kubectl port-forward service/chyrp-lite 80

output

Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80

k3s issue with TCP Port 80

Port 80 is used by the built-in load balancer by default.

That means service port 80 will become 10080 on the host, but 8080 will become 8080 without any offset.

So the above command will not work; it will give you a 404 error.
We could disable the LoadBalancer (we do not need it for this demo), but it is easier to just forward the service port to 10080:

kubectl port-forward service/chyrp-lite 10080:80
Forwarding from 127.0.0.1:10080 -> 80
Forwarding from [::1]:10080 -> 80
Handling connection for 10080
Handling connection for 10080

from our virtual machine we can verify

curl -s http://127.0.0.1:10080/install.php  | head

it will produce

<!DOCTYPE html>
<html>
    <head>
        <meta charset="UTF-8">
        <title>Chyrp Lite Installer</title>
        <meta name="viewport" content="width = 800">
        <style type="text/css">
            @font-face {
                font-family: 'Open Sans webfont';
                src: url('./fonts/OpenSans-Regular.woff') format('woff');

ssh port forward

So now, we need to forward this TCP port from the virtual machine to our local machine. Using ssh, you should be able to do it like this from another terminal

ssh 192.168.122.42 -l ubuntu -L8080:127.0.0.1:10080

verify it

$ sudo ss -n -t -a 'sport = :10080'

State           Recv-Q          Send-Q                   Local Address:Port                    Peer Address:Port         Process
LISTEN          0               128                          127.0.0.1:10080                        0.0.0.0:*
LISTEN          0               128                              [::1]:10080                           [::]:*

$ curl -s http://localhost:10080/install.php | head

<!DOCTYPE html>
<html>
    <head>
        <meta charset="UTF-8">
        <title>Chyrp Lite Installer</title>
        <meta name="viewport" content="width = 800">
        <style type="text/css">
            @font-face {
                font-family: 'Open Sans webfont';
                src: url('./fonts/OpenSans-Regular.woff') format('woff');

I am forwarding to a high tcp port (> 1024) so my unprivileged user can open it; otherwise I would need to be root.

finishing the installation

To finish the installation of our blog engine, we need to visit the below url from our browser

http://localhost:10080/install.php

Database Setup

chyrplite01.jpg

Admin Setup

chyrplite02.jpg

Installation Completed

chyrplite03.jpg

First blog post

chyrplite04.jpg

that’s it !

Wednesday, 02 June 2021

PGPainless 0.2 Released!

I’m very proud and excited to announce the release of PGPainless version 0.2! Since the last stable release of my OpenPGP library for Java and Android 9 months ago, a lot has changed and improved! Most importantly, development on PGPainless is now being financially sponsored by FlowCrypt, so I was able to put a lot more energy into working on the library. I’m very grateful for this opportunity 🙂

A red letter, sealed with a wax seal
Photo by Natasya Chen on Unsplash

PGPainless is using Bouncycastle, but aims to save developers from the pain of writing lots of boilerplate code, while at the same time using the BC API properly. The new release is available on maven central, the source code can be found on Github and Codeberg.

PGPainless is now depending on Bouncycastle 1.68 and the minimum Android API level has been raised to 10 (Android 2.3.3). Let me bring you up to speed about some of its features and the recent changes!

Inspect Keys!

Back in the last stable release, PGPainless could already be used to generate keys. It offered some shortcut methods to quickly generate archetypal keys, such as simple RSA keys, or key rings based on elliptic curves. In version 0.2, support for additional key algorithms was added, such as EdDSA or XDH.

The new release introduces PGPainless.inspectKeyRing(keyRing) which allows you to quickly access information about a key, such as its user-ids, which subkeys are encryption capable and which can be used to sign data, their expiration dates, algorithms etc.

Furthermore, this feature can be used to evaluate a key at a certain point in time. That way you can quickly check which key flags or algorithm preferences applied to the key 3 weeks ago, when that key was used to create the signature you care about. Or you can check which user-ids your key had 5 years ago.

Edit Keys!

Do you already have a key, but want to extend its expiration date? Do you have a new Email address and need to add it as a user-id to your key? PGPainless.modifyKeyRing(keyRing) allows basic modification of a key. You can add additional user-ids, adopt subkeys into your key, or expire/revoke existing subkeys.

secretKeys = PGPainless.modifyKeyRing(secretKeys)
                       .setExpirationDate(expirationDate, keyRingProtector)
                       .addSubKey(subkey, subkeyProtector, keyRingProtector)
                       .addUserId(UserId.onlyEmail("alice@pgpainless.org"), keyRingProtector)
                       .deleteUserId("alice@pgpainful.org", keyRingProtector)
                       .revokeSubkey(subKeyId, keyRingProtector)
                       .revokeUserId("alice@pgpaintrain.org", keyRingProtector)
                       .changePassphraseFromOldPassphrase(oldPass)
                           .withSecureDefaultSettings().toNewPassphrase(newPass)
                       .done();

Encrypt and Sign Effortlessly!

PGPainless 0.2 comes with an improved, simplified encryption/signing API. While the old API was already quite intuitive, I was focusing too much on the code being “autocomplete-friendly”. My vision was that the user could encrypt a message without ever having to read a bit of documentation, simply by typing PGPainless and then following the autocomplete suggestions of the IDE. While the result was successful in that respect, the code was not very friendly to integrate into real-world applications, as there was not one builder class, but several (one for each “step”). As a result, if a user wanted to configure the encryption dynamically, they would have to keep track of different builder objects and cope with casting madness.

// Old API
EncryptionStream encryptionStream = PGPainless.createEncryptor()
        .onOutputStream(targetOuputStream)
        .toRecipient(aliceKey)
        .and()
        .toRecipient(bobsKey)
        .and()
        .toPassphrase(Passphrase.fromPassword("password123"))
        .usingAlgorithms(SymmetricKeyAlgorithm.AES_192, HashAlgorithm.SHA256, CompressionAlgorithm.UNCOMPRESSED)
        .signWith(secretKeyDecryptor, aliceSecKey)
        .asciiArmor();

Streams.pipeAll(plaintextInputStream, encryptionStream);
encryptionStream.close();

The new API is still intuitive, but at the same time it is flexible enough to be modified with future features. Furthermore, since the builder has been divided it is now easier to integrate PGPainless dynamically.

// New shiny 0.2 API
EncryptionStream encryptionStream = PGPainless.encryptAndOrSign()
        .onOutputStream(outputStream)
        .withOptions(
                ProducerOptions.signAndEncrypt(
                        new EncryptionOptions()
                                .addRecipient(aliceKey)
                                .addRecipient(bobsKey)
                                // optionally encrypt to a passphrase
                                .addPassphrase(Passphrase.fromPassword("password123"))
                                // optionally override symmetric encryption algorithm
                                .overrideEncryptionAlgorithm(SymmetricKeyAlgorithm.AES_192),
                        new SigningOptions()
                                // Sign in-line (using one-pass-signature packet)
                                .addInlineSignature(secretKeyDecryptor, aliceSecKey, signatureType)
                                // Sign using a detached signature
                                .addDetachedSignature(secretKeyDecryptor, aliceSecKey, signatureType)
                                // optionally override hash algorithm
                                .overrideHashAlgorithm(HashAlgorithm.SHA256)
                ).setAsciiArmor(true) // Ascii armor
        );

Streams.pipeAll(plaintextInputStream, encryptionStream);
encryptionStream.close();

Verify Signatures Properly!

The biggest improvement to PGPainless 0.2 is improved, proper signature verification. Prior to this release, PGPainless was doing what probably every other Bouncycastle-based OpenPGP library was doing:

PGPSignature signature = [...];
// Initialize the signature with the public signing key
signature.init(pgpContentVerifierBuilderProvider, signingKey);

// Update the signature with the signed data
int read;
while ((read = signedDataInputStream.read()) != -1) {
    signature.update((byte) read);
}

// Verify signature correctness
boolean signatureIsValid = signature.verify();

The point is that the code above only verifies signature correctness (that the signing key really made the signature and that the signed data is intact). It does however not check if the signature is valid.

Signature validation goes far beyond plain signature correctness and entails a whole suite of checks that need to be performed. Is the signing key expired? Was it revoked? If it is a subkey, is it bound to its primary key correctly? Has the primary key expired or been revoked? Does the signature contain unknown critical subpackets? Is it using acceptable algorithms? Does the signing key carry the SIGN_DATA flag? You can read more about why signature verification is hard in my previous blog post.

After implementing all those checks in PGPainless, the library now scores second place on Sequoia’s OpenPGP Interoperability Test Suite!

Lastly, support for verification of cleartext-signed messages such as emails was added.

New SOP module!

Also included in the new release is a shiny new module: pgpainless-sop

This module is an implementation of the Stateless OpenPGP Command Line Interface specification. It basically allows you to use PGPainless as a command line application to generate keys, encrypt/decrypt, sign and verify messages etc.

$ # Generate secret key
$ java -jar pgpainless-sop-0.2.0.jar generate-key "Alice <alice@pgpainless.org>" > alice.sec
$ # Extract public key
$ java -jar pgpainless-sop-0.2.0.jar extract-cert < alice.sec > alice.pub
$ # Sign some data
$ java -jar pgpainless-sop-0.2.0.jar sign --armor alice.sec < message.txt > message.txt.asc
$ # Verify signature
$ java -jar pgpainless-sop-0.2.0.jar verify message.txt.asc alice.pub < message.txt 
$ # Encrypt some data
$ java -jar pgpainless-sop-0.2.0.jar encrypt --sign-with alice.sec alice.pub < message.txt > message.txt.asc
$ # Decrypt ciphertext
$ java -jar pgpainless-sop-0.2.0.jar decrypt --verify-with alice.pub --verify-out=verif.txt alice.sec < message.txt.asc > message.txt

The primary reason for creating this module though was that it enables PGPainless to be plugged into the interoperability test suite mentioned above. This test suite uncovered a ton of bugs and shortcomings and helped me massively to understand and interpret the OpenPGP specification. I can only urge other developers who work on OpenPGP libraries to implement the SOP specification!

Upstreamed changes

Even if you are not yet convinced to switch to PGPainless and want to keep using vanilla Bouncycastle, you might still benefit from some bugfixes that were upstreamed to Bouncycastle.

Every now and then, for example, BC would fail to do some internal conversions of elliptic curve encryption keys. The source of this issue was that BC was converting keys from BigIntegers to byte arrays, which could be of invalid length when the encoding had leading zeros, thus omitting one byte. Fixing this was easy, but finding the bug took quite some time.

Another bug caused decryption of messages which were encrypted for more than one key/passphrase to fail, when BC tried to decrypt a Symmetrically Encrypted Session Key Packet with the wrong key/passphrase first. The cause of this issue was that BC was not properly rewinding the decryption stream after reading a checksum, thus corrupting decryption for subsequent attempts with the correct passphrase or key. The fix was to mark and rewind the stream properly before the next decryption attempt.

Lastly some methods in BC have been modernized by adding generics to Iterators. Chances are if you are using BC 1.68, you might recognize some changes once you bump the dependency to BC 1.69 (once it is released of course).

Thank you!

I would like to thank everyone who contributed to the new release in any way or form for their support. Special thanks go to my sponsor FlowCrypt for giving me the opportunity to spend so much time on the library! Furthermore I’d like to thank all the amazing folks over at Sequoia-PGP for their efforts in improving the OpenPGP ecosystem and for patiently helping me understand the (at times a bit muddy) OpenPGP specification.

Monday, 31 May 2021

How i ended up fixing a "not a bug" in Qt Quick that made apostrophes not be rendered while reviewing an Okular patch

But in Okular we don't use Qt Quick you'll say!


Well, we actually use Qt Quick in the mobile interface of Okular, but you're right, this was not a patch for the mobile version, so "But in Okular we don't use Qt Quick!"

 

The story goes like this.

I was reviewing https://invent.kde.org/graphics/okular/-/merge_requests/431 and was not super convinced of the look and feel of it, so Nate gave some examples and one example was "the sidebar in Elisa".

 

So I started Elisa and saw this

You probably don't speak Catalan, so you won't see that "lesquerra" is a mistake: it should be "l'esquerra".

 

"Oh, the translators made a mistake", i thought, so i went to the .po file https://websvn.kde.org/trunk/l10n-kf5/ca/messages/elisa/elisa.po?view=markup#l638 and oh surprise! The translators had not made a mistake (well, not really a surprise since they're really good).


Ok, so what's wrong then?


I had a look at the .qml code https://invent.kde.org/multimedia/elisa/-/blob/release/21.04/src/qml/MediaPlayListView.qml#L333 and it really looked simple enough.

 

I had never seen a Kirigami.PlaceholderMessage before but looking at it, well the text ended up in a Kirigami.Heading and then in a QtQuick Label. Nothing really special.


Weeeeeeird.


Ok, i said, let's replace the whole xi18nc in there with the translated text and run elisa. And then i could see the ' properly.


Hmmmm, ok then the problem had to be xi18nc right? Well it was and it was not.


The problem is that xi18nc translates ' to &apos; to be more "html compliant" https://invent.kde.org/frameworks/ki18n/-/blob/master/src/kuitmarkup.cpp#L40

 

And unfortunately the default "html support" of the Qt Declarative 5.15 Text item is something Qt calls "Styled Text", which only supports five HTML entities, and &apos; is not one of them: https://code.qt.io/cgit/qt/qtdeclarative.git/tree/src/quick/util/qquickstyledtext.cpp?h=5.15.2#n558

 

So where is the bug?

 

The Text documentation clearly states that &apos; is in the list of supported entities.

One could say the bug is that every time we use xi18nc in a Text we should remember to change the "html support" to something Qt calls "RichText" that actually supports &apos;.


But we all know that's never going to happen :D, and in this case it would actually be hard since Kirigami.PlaceholderMessage doesn't even expose the format property of the inner text

So I convinced our friends at Qt that only supporting five entities is not great and now we support six ^_^ in Qt 6.2, 6.1 and 5.15 (if you use the KDE Qt patch collection) https://invent.kde.org/qt/qt/qtdeclarative/-/merge_requests/3/diffs


P.S: I have a patch in the works to support all HTML entities in StyledText but somehow it's breaking the tests, so well, need to work more on it :)

Tuesday, 25 May 2021

When it takes a pandemic ...

to understand the speed of innovation.

2020 was a special year for all of us - with "us" here meaning the entire world: faced with a truly urgent global problem, that year was a learning opportunity for everyone.

For me personally the year started like any other year - except that the news coming out of China was troubling. Little did I know how fast that news would reach the rest of the world - little did I know the impact it would have.

I started the year with FOSDEM in Brussels in February - like every other year, except it felt decidedly different going to this event with thousands of attendees, crammed into overfull university rooms.

Not a month later, travel budgets in many corporations had been frozen. The last in-person event that I went to was FOSS Backstage - incapable of imagining just how long it would remain the last in-person event I would go to. To this day I'm grateful to Bertrand for teaching the organising team just how much can be conveyed over video calls - and I'm still grateful to the technicians onsite who made speaker-attendee interaction seamless - across several hundred miles.

One talk from FOSS Backstage that I went back to over and over during 2020 and 2021 was the one given by Emmy Tsang on Open Science.

Referencing the then-new pandemic, she made a very impressive case for open science, for open collaboration - and for the speed of innovation that comes from that open collaboration. More than anything else I had heard or read before, it made clear to me what everyone means when they explain how Open Source (and by extension, internally, InnerSource) increases the speed of innovation substantially:

Instead of having everyone start from scratch, instead of wasting time again and again to build the basic foundation, we can all collaborate on that technological foundation and focus on key business differentiators. Or as Danese Cooper would put it: "All the boats must rise." The thing I found most amazing during this pandemic were the moments in which we saw scientists from all sorts of disciplines work together - and do so in a very open and accessible way. Among all the chaos and misinformation, voices for reliable and dependable information emerged. We saw practitioners add value to the discussion.

With information shared in pre-print format, groups could move much faster than the usual one-year innovation cycle. Yes, it meant more trash would make it through as well. And still, we wouldn't be where we are today if humans across the globe, no matter their nationality or background, hadn't had the chance to collaborate and move faster as a result.

Somehow that's at a very large scale the same effect seen in other projects:

  • RoboCup only moved as fast as it did by opening up the solution of winning teams each year. As a result new teams would get a head start with designs and programs readily available. Instead of starting from scratch, they can stand on the shoulders of giants.
  • Open source helps achieve the same on a daily basis. There's a very visible sign of that: Perseverance on Mars is running on open source software. Every GitHub user who has contributed to software running on Perseverance today has a badge on their GitHub profile - countless badges serving as proof of just how many hands it took, how much collaboration was necessary to make this project work.


For me one important learning during this pandemic was just how much we can achieve by working together, by collaborating and building bridges. In that sense, what we have seen is how it is possible to gain so much more by sharing - as Niels Basjes put it so nicely when explaining the Apache Way in one sentence: Gaining by Sharing.

In a sense this is what brought me to the InnerSource Commons Foundation - it's a way for all of us to experience the strength of collaboration. It's a first step towards bringing more people and more businesses to the open source world, joining forces to solve issues ahead of us.

Monday, 24 May 2021

Sharing your loan details to anyone

A week ago, I blogged about a vulnerability in a platform that would allow anyone to download users’ amortisation schedules. This was a critical issue, but it wasn’t really exploitable in the wild as it included a part where you had to guess the name of the document to download.

I no longer trust that platform so I went to their website to remove my loan data from it, but apparently this isn’t possible via the UI.

I also opened a ticket on their support platform to request removal and they replied that it isn’t possible.

So I went to their website with the intention of replacing the data with a fake one… but there was no longer an edit button!

Loans

I’m sure it was there before and in fact the code also confirms that it was there:

Loans code

However, the platform is based on Magento and so, starting from the current URL, we can easily guess the edit URL, e.g. https://<host>/anagraficamutui/mutuo/edit/id/<n>.

Let’s try 1… bingo!

But wait a minute… this isn’t my loan! Luckily it’s just a demo entry put in by some developer:

Someone else loan

Even though it’s a dummy page, we can already see the details of the loan, such as the (hopefully) fake IBAN, the loan total and loan number, and even the bank contact person’s name and email address.

And now take a look at this: if I try to access that page in private mode, then I get the login page. All (almost) well, right?

Nope. Let’s try the same request via curl:

$ curl -s https://<host>/anagraficamutui/edit/id/1 | grep banca

<input type="text" name="istituto_credito" id="istituto_credito" value="banca acme" title="Nome istituto" class="input-text istituto_credito required-entry" />

$ curl -s https://<host>/anagraficamutui/edit/id/1 | grep NL75

<input type="text" name="iban" id="iban" value="NL75xxxxxxxxx" title="Iban" class="input-text iban required-entry validate-iban validate-length maximum-length-27 validate-alphanum" />

Wait a minute, what’s going on?

Well, it turns out that when there’s no session cookie the page sets the location header to redirect you to the login page, but it still prints the full HTML page in the response body!

$ curl -s https://<host>/anagraficamutui/edit/id/1 -I | grep location

location: https://<host>/customer/account/login/

Oh-no!

Conclusion

Data from 5723 loans could have been exposed by accessing a specific URL. Details such as IBAN, loan number, loan total and the bank account contact person could have been used to perform spear phishing attacks.

I reported this privacy flaw to the CSIRT Italia and the platform’s DPO. The issue was solved after 2 days, but I still haven’t heard back from them.

Wednesday, 19 May 2021

Sharing your amortisation schedule to anyone

Last month, my company allowed me to claim some benefits through a dedicated platform. This platform is specifically built for this purpose and allows you to recover these benefits not only in the form of coupons or discount codes, but also as reimbursements for medical visits or interest on mortgage payments.

I wanted to try the latter.

I logged on to the platform and filled in all the (many) loan details that the platform asks for, until I had to upload my amortisation schedule, which contains a lot of sensitive data. In fact, a strange thing happened at this step: my file was named document.pdf, but after uploading it was renamed to document_2.pdf.

How do I know? Well, let’s have a look at the UI:

Loan details

Loan details hover

It clearly shows the file name and that’s also a hyperlink. Let’s click then.

The PDF opens in my browser. This is expected, but what happens if we take the URL and try to open it in a private window?? Guess what?

You guessed it.

Let’s have a look at the URL again. It’s in the form: https://<host>/media/mutuo/file/d/o/document_2.pdf.

That’s tempting, isn’t it?

I wanted to have some fun and I tried the following:

Loan download

Both the curl output and the checksums are enough to understand that some documents were downloaded there (but discarded, since I didn’t save them to my disk…).

Thus, since the d and o parent folders match the two initial letters of my file, I successfully tried with stuff like:

  • /c/o/contratto.pdf, /c/o/contratto_2.pdf, …
  • /c/o/contract.pdf, …
  • /p/r/prospetto.pdf, …

and it also works with numbers (to find this out I had to upload a file named 1.pdf 😇), e.g. https://<host>/media/mutuo/file/1/_/1_10.pdf.
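To give an idea of what that probing looks like (a rough sketch, not the exact commands I ran; the host and file names are placeholders), checking the HTTP status code is enough to tell whether a guessed file exists, without saving anything to disk:

$ curl -s -o /dev/null -w "%{http_code}\n" https://<host>/media/mutuo/file/c/o/contratto.pdf
$ curl -s -o /dev/null -w "%{http_code}\n" https://<host>/media/mutuo/file/1/_/1_10.pdf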

Conclusion

If you have uploaded your amortisation schedule to this platform, which on its website claims more than 300k users from 3k different companies, well, someone may have downloaded it.

I reported this privacy flaw to the CSIRT Italia via a PGP encrypted email; the CSIRT is supposed to write to the company that owns the platform to alert them to the problem, but a week later I still hadn’t heard from either of them. So after a week I pinged the CSIRT again, and they replied with a plain text email telling me that they had opened an internal ticket and were nice enough to embed my initial PGP encrypted email.

Two weeks later (about 21 days since my first mail) the platform fixed the problem (the uploaded file path isn’t deterministic anymore and authentication is in place), but I still haven’t heard from them.

Addendum

Since <host> is a third-level domain in my case, I used tools like Sublist3r and Amass (you can also use the online version hosted on nmmapper.com) to perform DNS enumeration, and I found ~50 websites, 30 of which are aliases pointing to the same host. In fact, I could replace <host> with each of them and I would always download my document_2.pdf file.
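For reference, the enumeration itself boils down to commands roughly like these (example.com stands in for the second-level domain; the exact invocation may differ depending on how you installed the tools):

$ sublist3r -d example.com
$ amass enum -d example.com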

Sunday, 16 May 2021

Artificial Intelligence safety: embracing checklists

Unfortunately, human errors are bound to happen. Checklists allow one to verify that all the required actions are done correctly, and in the correct order. The military has them, the health care sector has them, professional diving has them, the aviation and space industries have them, software engineering has them. Why not artificial intelligence practitioners?

In October 1935, the two pilots of the new Boeing B-17 warplane were killed in the crash of the aircraft. The crash was caused by an oversight of the pilots, who forgot to release a lock during the takeoff procedure. Since then, following the checklist during flight operations has been mandatory and has reduced the number of accidents. During the Apollo 13 mission of 1970, carefully written checklists helped mitigate the oxygen tank explosion accident.

In healthcare, checklists are widespread too. For example, the World Health Organization released a checklist outlining the required steps before, during, and after surgery. A meta-analysis suggested that using the checklist was associated with a reduction in mortality and complication rates.

Because artificial intelligence is used for increasingly important matters, accidents can have serious consequences. During a test, a chatbot suggested harmful behaviors to fake patients. The data scientists explained that the AI had no scientific or medical expertise. AI in law enforcement can also cause serious trouble. For example, facial recognition software mistakenly identified a criminal, resulting in the arrest of an innocent person, and an algorithm used to determine the likelihood of crime recidivism was judged unfair towards black defendants. AI is also used in healthcare, where a simulation of the Covid-19 outbreak in the United Kingdom shaped policy and led to a nation-wide lockdown. However, the simulation was badly programmed, causing serious issues. Root cause analysis determined that the simulation was not deterministic and was badly tested. The lack of a checklist could have played a role.

Just like in the aforementioned complex and critical industries, checklists should be leveraged to make sure the building and reporting of AI models includes everything required to reproduce results and make an informed judgement, which fosters trust in the AI accuracy. However, checklists for building prediction models like TRIPOD are not often used by data scientists, even though they might help. Possible reasons might be ignorance about the existence of such checklists, perceived lack of usefulness or misunderstandings caused by the use of different vocabularies among AI developers.

Enforcing the use of standardized checklists would lead to better idioms and practices, thereby fostering fair and accurate AI with a robust evaluation, making its adoption easier for sensitive tasks. In particular, a checklist on AI should include points about how the training dataset was constructed and preprocessed, the model specification and architecture and how its efficiency, accuracy and fairness were assessed. A list of all intended purposes of the AI should also be disclosed, as well as known risks and limitations.

Since AI is a novel field, one can understand why checklists are not yet widely used in it. However, they are used in other fields for well-known reasons, and learning from past mistakes and ideas would be great this time.

Saturday, 01 May 2021

systemd in WSLv2

I have been using archlinux in my WSL for the last two (2) years and the whole experience is quite smooth. I wanted to test whether native docker will run within WSL, and not via the windows docker/container service, so I installed docker. My main purpose is building packages, so (for now) I do not need networking/routes or anything else.

WSL

ebal@myworklaptop:~$ uname -a
Linux myworklaptop 4.19.128-microsoft-standard #1 SMP Tue Jun 23 12:58:10 UTC 2020 x86_64 GNU/Linux

ebal@myworklaptop:~$ cat /etc/os-release
NAME="Arch Linux"
PRETTY_NAME="Arch Linux"
ID=arch
BUILD_ID=rolling
ANSI_COLOR="38;2;23;147;209"
HOME_URL="https://www.archlinux.org/"
DOCUMENTATION_URL="https://wiki.archlinux.org/"
SUPPORT_URL="https://bbs.archlinux.org/"
BUG_REPORT_URL="https://bugs.archlinux.org/"
LOGO=archlinux

Docker Install

$ sudo pacman -S docker

$ sudo pacman -Q docker
docker 1:20.10.6-1

$ sudo dockerd -v
Docker version 20.10.6, build 8728dd246c

Run docker

sudo dockerd -D

-D is for debug

and now pull an alpine image

ebal@myworklaptop:~$ docker images
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE

ebal@myworklaptop:~$ docker pull alpine:latest
latest: Pulling from library/alpine
540db60ca938: Pull complete
Digest: sha256:69e70a79f2d41ab5d637de98c1e0b055206ba40a8145e7bddb55ccc04e13cf8f
Status: Downloaded newer image for alpine:latest
docker.io/library/alpine:latest

ebal@myworklaptop:~$ docker images
REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
alpine       latest    6dbb9cc54074   2 weeks ago   5.61MB

Test alpine image

docker run -ti alpine:latest ash

perform a simple update

# apk update

fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
v3.13.5-71-gfcabe3349a [https://dl-cdn.alpinelinux.org/alpine/v3.13/main]
v3.13.5-65-g28e7396caa [https://dl-cdn.alpinelinux.org/alpine/v3.13/community]

OK: 13887 distinct packages available

okay, seems that it is working.

Genie Systemd

As many already know, we cannot run systemd inside WSL, at least not by default. So here comes genie!

A quick way into a systemd “bottle” for WSL

WSLv2 ONLY.

wsl.exe -l -v

  NAME            STATE           VERSION
* Archlinux       Running         2
  Ubuntu-20.04    Stopped         1

It will work on my arch.

Install Genie

genie provides an archlinux package artifact on github by default

curl -sLO https://github.com/arkane-systems/genie/releases/download/v1.40/genie-systemd-1.40-1-x86_64.pkg.tar.zst

sudo pacman -U genie-systemd-1.40-1-x86_64.pkg.tar.zst
$ pacman -Q genie-systemd
genie-systemd 1.40-1

daemonize

Genie has a dependency on daemonize.

In Archlinux, you can find the latest version of daemonize here:

https://gitlab.com/archlinux_build/daemonize

$ sudo pacman -U daemonize-1.7.8-1-x86_64.pkg.tar.zst

loading packages...
resolving dependencies...
looking for conflicting packages...

Package (1)  New Version  Net Change
daemonize    1.7.8-1        0.03 MiB
Total Installed Size:  0.03 MiB

:: Proceed with installation? [Y/n] y
$ pacman -Q daemonize
daemonize 1.7.8-1

Is genie running ?

We can start genie as root with a new shell:

# genie --version
1.40

# genie -s
Waiting for systemd....!

# genie -r
running

okay !

Windows Terminal

In order to use systemd-genie by default, we need to run our WSL Archlinux with an initial command.

I use aka.ms/terminal to start/open my WSLv2 Archlinux so I had to edit the “Command Line” Option to this:

wsl.exe -d Archlinux genie -s

You can also verify that your WSL distro is down, with this

wsl.exe --shutdown

then fire up a new WSL distro!

wslv2-systemd.png

Systemd Service

you can enable & start the docker service unit,

$ sudo systemctl enable docker

$ sudo systemctl start  docker

so next time it will auto-start:

$ docker images
REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
alpine       latest    6dbb9cc54074   2 weeks ago   5.61MB

systemd

$ ps -e fuwwww | grep -i systemd
root           1  0.1  0.2  21096 10748 ?        Ss   14:27   0:00 systemd
root          29  0.0  0.2  30472 11168 ?        Ss   14:27   0:00 /usr/lib/systemd/systemd-journald
root          38  0.0  0.1  25672  7684 ?        Ss   14:27   0:00 /usr/lib/systemd/systemd-udevd
dbus          63  0.0  0.1  12052  5736 ?        Ss   14:27   0:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
root          65  0.0  0.1  14600  7224 ?        Ss   14:27   0:00 /usr/lib/systemd/systemd-logind
root         211  0.0  0.1  14176  6872 ?        Ss   14:27   0:00 /usr/lib/systemd/systemd-machined
ebal         312  0.0  0.0   3164   808 pts/1    S+   14:30   0:00  _ grep -i systemd
ebal         215  0.0  0.2  16036  8956 ?        Ss   14:27   0:00 /usr/lib/systemd/systemd --user

$ systemctl status docker
* docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
     Active: active (running) since Sat 2021-05-01 14:27:12 EEST; 3min 22s ago
TriggeredBy: * docker.socket
       Docs: https://docs.docker.com
   Main PID: 64 (dockerd)
      Tasks: 17 (limit: 4715)
     Memory: 167.9M
     CGroup: /system.slice/docker.service
             |-64 /usr/bin/dockerd -H fd://
             `-80 containerd --config /var/run/docker/containerd/containerd.toml --log-level info

May 01 14:27:12 myworklaptop-wsl systemd[1]: Started Docker Application Container Engine.
May 01 14:27:12 myworklaptop-wsl dockerd[64]: time="2021-05-01T14:27:12.303580300+03:00" level=info msg="AP>

that’s it !

Sunday, 18 April 2021

KDE Gear 21.04 is coming this week! But what is KDE Gear?

Let's dig a bit in our history.


In the "good old days" (TM) there was KDE, life was simple, everything we did was KDE and everything we released was KDE [*]

 

Then at some point we realized we wanted to release some stuff with a different frequency, so KDE Extragear[**] was born.


Then we said "KDE is the community" so we couldn't release KDE anymore, thus we said "ok, the thing we release with all the stuff that releases at the same time will be KDE Software Compilation", which i think we all agree was not an awesome name, but "names are hard" (TM) (this whole blog post is about that :D)


We went on like that for a while, but then we realized we wanted different schedules for the things that were inside the KDE Software Compilation. 

 

We thought it made sense for the core libraries to be released monthly, and the Plasma team also wanted to have its own release schedule (which has been tweaked over the years).


That meant that the "KDE Frameworks" and "Plasma" (of KDE Plasma) names, as things we release, were born (Plasma was already a name used before, so that one was easy). The problem was that we had to find a name for "KDE Software Compilation" minus "KDE Frameworks" minus "Plasma".


One option would have been to keep calling it "KDE Software Compilation", but we thought it would be confusing to keep the name but make it contain far fewer things, so we used the un-imaginative name (which as far as i remember i proposed) "KDE Applications".


And we released "KDE Applications" for a long time, but you know what, "KDE Applications" is not a good name either. First reason: "KDE Applications" was not only applications, it also contained libraries, but that's ok, no one would have really cared if that was the only problem. The important issue was that if you call something "KDE Applications" you make it seem like these are all the applications KDE releases, but no, that's not the truth; remember our old friend KDE Extragear and its independently released applications?


So we sat down at Akademy 2019 in Milan and tried to find a better name. And we couldn't. So we all said let's go the "there's no spoon" route: you don't need a name if you don't have a thing. We basically de-branded the whole thing. The logic was that, after all, it's just a bunch of applications that are released at the same time because it makes things super easy from a release engineering point of view, but Okular doesn't "have anything to do" with Dolphin nor with krdc nor with kpat; they just happen to be released at the same time.


So we kept the release engineering side under the boring and non-capitalized name of "release service" and we patted ourselves on the back for having solved a decade long problem.


Narrator voice: "they didn't solve the problem"


After a few releases it became clear that our promotion people were having some trouble writing announcements, because "Dolphin, Okular, Krdc, kpat .. 100 app names.. is released" doesn't really sell very well.


Since promotion is important we sat down again and did some more thinking: ok, we need a name, but it can't be a name that is "too specific" about applications, because otherwise it will have the problem of "KDE Applications". So it had to be a bit generic, and at some point i jokingly suggested "KDE Gear", tied to our logo and to our old friend that we have almost killed by now, "KDE Extragear".


Narrator voice: "they did not realize it was a joke"

 

And people liked "KDE Gear", so yeah, this week we're releasing KDE Gear 21.04 whose heritage can be traced to "release service 21.04", "KDE Applications 21.04", "KDE Software Compilation 21.04" and "KDE 21.04" [***]


P.S: Lots of these decisions happened a long time ago, so my recollection, especially of my involvement in the suggestion of the names, may not be as accurate as i think it is.


[*] May not be an accurate depiction, I wasn't around in the "good old days"

[**] A term we've been killing over the last few years, because the term "extra" implied to some degree that these were not important things, and they are totally important; the only difference is that they are released on their own, so personally i try to use something like "independently released".

[***] it'd be great if you could stop calling the things we release "KDE"; we haven't used that name for releases of code for more than a decade now

Linux bluetooth HeadSet Audio HSP/HFP WH-1000XM3

I am an archlinux user with Sony WH-1000XM3 bluetooth noise-cancellation headphones. I am also using pulseaudio, and it took me a while to switch the bluetooth headphones to the HSP/HFP profile so the microphone can work too. Switching the bluetooth profile of your headphones to HeadSet Audio works, but it is only monophonic audio without noise-cancellation, and I had to switch to pipewire as well. But at least now the microphone works!
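For the record, that manual profile switching can be done with pactl, which talks to both pulseaudio and pipewire-pulse. This is only a rough sketch: the card name below is an example, and the exact profile name can differ between setups, so list your cards and their profiles first.

$ pactl list short cards
$ pactl set-card-profile bluez_card.XX_XX_XX_XX_XX_XX headset_head_unit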

I was wondering how distros that by default have already switched to pipewire deal with this situation. So I started a fedora 34 (beta) edition and attached both my bluetooth adapter TP-LINK UB400 v1 and my web camera Logitech HD Webcam C270.

The test should be to open a jitsi meet and a zoom test meeting and verify that my headphones can work without me doing any strange CLI magic.

tldr; works out of the box !

lsusb

[root@fedora ~]# lsusb 

Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 004: ID 046d:0825 Logitech, Inc. Webcam C270
Bus 001 Device 003: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)
Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd QEMU USB Tablet
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

as you can see both usb devices have properly attached to fedora34

kernel

we need a Linux kernel > 5.10.x to have proper support

[root@fedora ~]# uname -a
Linux fedora 5.11.10-300.fc34.x86_64 #1 SMP Thu Mar 25 14:03:32 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux 

pipewire

and of course pipewire installed


[root@fedora ~]# rpm -qa | grep -Ei 'blue|pipe|pulse' 

libpipeline-1.5.3-2.fc34.x86_64
pulseaudio-libs-14.2-3.fc34.x86_64
pulseaudio-libs-glib2-14.2-3.fc34.x86_64
pipewire0.2-libs-0.2.7-5.fc34.x86_64
bluez-libs-5.56-4.fc34.x86_64
pipewire-libs-0.3.24-4.fc34.x86_64
pipewire-0.3.24-4.fc34.x86_64
bluez-5.56-4.fc34.x86_64
bluez-obexd-5.56-4.fc34.x86_64
pipewire-gstreamer-0.3.24-4.fc34.x86_64
pipewire-pulseaudio-0.3.24-4.fc34.x86_64
gnome-bluetooth-libs-3.34.5-1.fc34.x86_64
gnome-bluetooth-3.34.5-1.fc34.x86_64
bluez-cups-5.56-4.fc34.x86_64
NetworkManager-bluetooth-1.30.2-1.fc34.x86_64
pipewire-alsa-0.3.24-4.fc34.x86_64
pipewire-jack-audio-connection-kit-0.3.24-4.fc34.x86_64
pipewire-utils-0.3.24-4.fc34.x86_64

screenshots

01f3420210418

02f3420210418

03f3420210418

Bluetooth Profiles

04f3420210418

Online Meetings

05f3420210418

06f3420210418

Monday, 12 April 2021

Submit your talks now for Akademy 2021!

As you can see in https://akademy.kde.org/2021/cfp the Call for Participation for Akademy 2021 (that will take place online from Friday the 18th to Friday the 25th of June) is already open.

 

You have until Sunday the 2nd of May 2021 23:59 UTC to submit your proposals, but you will make our (the talks committee's) life much easier if you start sending the proposals *now* and don't send them all at the last minute in 2 weeks ;)


I promise i'll buy you a $preferred_beverage$ at next year's Akademy (which we're hoping will happen live) if you send a talk before the end of this week (and you send me a mail about it)


Saturday, 03 April 2021

Why Signature Verification in OpenPGP is hard

An Enigma cipher machine which is less secure but also easier to understand than OpenPGP.
Photo by Mauro Sbicego on Unsplash.

When I first thought about signature verification in OpenPGP I thought “well, it cannot be that hard, right?”. After all, all you have to do is check if a signature was made by the given key and if that signature checks out (is cryptographically correct). Oh boy, was I wrong.

The first major realization that struck me was that there are more than just two factors involved in the signature verification process. While the first two are pretty obvious – the signature itself and the key that created it – another major role is played by the point in time at which a signature is being verified. OpenPGP keys change over the course of their lifespan. Subkeys may expire or be revoked for different reasons. A subkey might be eligible to create valid signatures until its binding signature expires. From that point in time on, all new signatures created with that key must be considered invalid. Keys can be rebound, so an expired key might become valid again at some point, which would also make (new) signatures created with it valid once again.

But what does it mean to rebind a key? How are expiration dates set on keys, and what role does the reason of a revocation play?

The answer to the first two questions is – Signatures!

OpenPGP keys consist of a set of keys and subkeys, user-id packets (which contain email addresses and names and so forth) and lastly a bunch of signatures which tie all this information together. The root of this bunch is the primary key – a key with the ability to create signatures, or rather certifications. Signatures and certifications are basically the same, they just have a different semantic meaning, which is quite an important detail. More on that later.

The main use of the primary key is to bind additional data to it. First and foremost user-id packets, which can (along with the primary key's key-ID) be used to identify the key. Alice might for example have a user-id packet on her key which contains her name and email address. Keys can have more than one user-id, so Alice might also have an additional user-id packet with her work email or her chat address added to the key.

But simply adding the packet to the key is not enough. An attacker might simply take her key, change the email address and hand the modified key to Bob, right? Wrong. Signatures – or rather, certifications – to the rescue!

   Primary-Key
      [Revocation Self Signature]
      [Direct Key Signature...]
      [User ID [Signature ...] ...]
      [User Attribute [Signature ...] ...]
      [[Subkey [Binding-Signature-Revocation]
              Subkey-Binding-Signature ...] ...]

Information is not just loosely appended to the key. Instead it is cryptographically bound to it with the help of a certification. Certifications are signatures which can only be created by a key which is allowed to create certifications. If you take a look at any OpenPGP (v4) key, you will see that most likely every single primary key will be able to create certifications. So basically the primary key is used to certify that a piece of information belongs to the key.

The same goes for subkeys. They are also bound to the primary key with the help of a certification. Here, the certification has a special type and is called “subkey binding signature”. The concept though is mostly the same. The primary key certifies that a subkey belongs to it with the help of a signature.

Now it slowly becomes complicated. As you can see, up to this point the binding relations have been uni-directional. The primary key claims to be the dominant part of the relation. This might however introduce the risk of an attacker using a primary key to claim ownership of a subkey which was used to make a signature over some data. It would then appear as if the attacker is also the owner of that signature. That’s the reason why a signing-capable subkey must somehow prove that it belongs to its primary key. Again, signatures to the rescue! A subkey binding signature that binds a signing capable subkey MUST contain a primary key binding signature made by the subkey over the primary key. Now the relationship is bidirectional and attacks such as the one mentioned above are mitigated.

So, about certifications – when is a key allowed to create certifications? How can we specify what capabilities a key has?

The answer is: Signature… Subpackets!
Those are some pieces of information that are added into a signature that give it more semantic meaning aside from the signature type. Examples for signature subpackets are key flags, signature/key creation/expiration times, preferred algorithms and many more. Those subpackets can reside in two areas of the signature. The unhashed area is not covered by the signature itself, so here packets can be added/removed without breaking the signature. The hashed area on the other hand gets its name from the fact that subpackets placed here are taken into consideration when the signature is being calculated. They cannot be modified without invalidating the signature.

So the unhashed area shall only contain advisory information or subpackets which are “self-authenticating” (meaning information which is validated as a side-effect of validating the signature). An example of a self-authenticating subpacket would be the issuer subpacket, which contains the key-id of the key that created the signature. This piece of information can be verified by checking if the denominated key really created the signature. There is no need to cover this information by the signature itself.

Another really important subpacket type is the key flags packet. It contains a bit-mask that declares what purpose a key can be used for, or rather what purpose the key is ALLOWED to be used for. Such purposes are encryption of data at rest, encryption of data in transit, signing data, certifying data, authentication. Additionally there are key flags indicating that a key has been split by a key-splitting mechanism or that a key is being shared by more than one entity.

Each signature MUST contain a signature creation time subpacket, which states at which date and time a signature was created. Optionally a signature might contain a signature expiration time subpacket which denotes at which point in time a signature expires and becomes invalid. So far so good.

Now, those subpackets can also be placed on certifications, e.g. subkey binding signatures. If a subkey binding signature contains a key expiration time subpacket, this indicates that the subkey expires at a certain point in time. An expired subkey must not be used anymore and signatures created by it after it has expired must be considered invalid. It gets even more complicated if you consider that a subkey binding signature might contain a key expiration time subpacket along with a signature expiration time subpacket. That could lead to funny situations. For example a subkey might have two subkey binding signatures. One simply binds the key indefinitely, while the second one has an expiration time. Here the latest binding signature takes precedence, meaning the subkey might expire at, let's say, t+3, while at t+5 the signature itself expires, meaning that the key regains validity, as now the former binding signature is “active” again.

Not yet complicated enough? Consider this: Whether or not a key is eligible to create signatures is denoted by the key flags subpacket which again is placed in a signature. So when verifying a signature, you have to consult self-signatures on the signing key to see if it carries the sign-data key flag. Furthermore you have to validate that self-signature and check if it was created by a key carrying the certify-other key flag. Now again you have to check if that signature was created by a key carrying the certify-other key flag (given it is not the same (primary) key). Whew.

Lastly there are key revocations. If a key gets lost or stolen or is simply retired, it can be revoked. It then depends on the revocation reason what impact the revocation has on past and/or future signatures. If the key was revoked using a “soft” revocation reason (key has not been compromised), the revocation is mostly handled as if it were an expiration. Past signatures are still good, but the key must not be used anymore. If it however has a “hard” revocation reason (or no reason at all), this could mean that the key has been lost or compromised. This means that any signature (future and past) that was made by this key now has to be considered invalid, since an attacker might have forged it.

Now, a revocation can only be created by a certification capable key, so in order to check if a revocation is valid, we have to check if the revoking key is allowed to revoke this specific subkey. Permitted revocation keys are either the primary key, or an external key denoted in a revocation key subpacket on a self-signature. Can you see why this introduces complexity?

Revocation signatures have to be handled differently from other signatures, since if the primary key is revoked, is it eligible to create revocation signatures in the first place? What if an external revocation key has been revoked and is now used to revoke another key?

I believe the correct way to tackle signature validity is to first evaluate the key (primary and subkeys) at signature creation time. Evaluating the key at a given point in time tn means we reject all signatures made after tn (except hard revocations) as those are not yet valid. Furthermore we reject all signatures that are expired at tn as those are no longer valid. Furthermore we remove all signatures that are superseded by another, more recent signature. We do this for all signatures on all keys in the “correct” order. What we are left with is a canonicalized key ring, which we can now use to verify the signature in question.

So let's try to summarize every step that we have to take in order to verify a signature's validity.

  • First we have to check if the signature contains a creation time subpacket. If it does not, we can already reject it.
  • Next we check if the signature is expired by now. If it is, we can again reject.
  • Now we have to evaluate the key ring that contains the signature's signing key at the time at which the signature was created.
    • Is the signing key properly bound to the key ring?
      • Was it created before the signature?
      • Was it bound to the key ring before the signature was made?
      • Is the binding signature not expired?
      • Is the binding signature not revoked?
      • Is the subkey binding signature carrying a valid primary key binding signature?
      • Are the binding signatures using acceptable algorithms?
    • Is the subkey itself not expired?
    • Is the primary key not expired?
    • Is the primary key not revoked?
    • Is the subkey not revoked?
  • Is the signing key capable of creating signatures?
  • Was the signature created with acceptable algorithms? Reject weak algorithms like SHA-1.
  • Is the signature correct?

Lastly of course, the user has to decide if the signing key is trustworthy or not, but luckily we can leave this decision up to the user.
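
None of this is specific to one implementation, but to get a feeling for how these distinctions surface in practice, GnuPG's machine-readable status output reports many of them explicitly (the file names below are just placeholders):

# verify a detached signature and print machine-readable status lines
gpg --status-fd 1 --verify message.txt.sig message.txt
# among others, GOODSIG / BADSIG, EXPKEYSIG (signing key expired) and
# REVKEYSIG (signing key revoked) correspond to cases from the checklist above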

As you can see, this is not at all trivial and I’m sure I missed some steps and/or odd edge cases. What makes implementing this even harder is that the specification is deliberately sparse in places. What subpackets are allowed to be placed in the unhashed area? What MUST be placed in the hashed area instead? Furthermore the specification contains errors which make it even harder to get a good picture of what is allowed and what isn’t. I believe what OpenPGP needs is a document that acts as a guide to implementors. That guide needs to specify where and where not certain subpackets are to be expected, how a certain piece of semantic meaning can be represented and how signature verification is to be conducted. It is not desirable that each and every implementor has to digest the whole specification multiple times in order to understand what steps are necessary to verify a signature or to select a valid key for signature creation.

Did you implement signature verification in OpenPGP? What are your thoughts on this? Did you go through the same struggles that I did?

Lastly I want to give a shout-out to the devs of Sequoia-PGP, who have a pretty awesome test suite that covers lots and lots of edge-cases and interoperability concerns of implementations. I definitely recommend everyone who needs to work with OpenPGP to throw their implementation against the suite to see if there are any shortcomings and problems with it.

Thursday, 01 April 2021

Microphone settings - how to deactivate webcam microphone

Like many people out there, in the last months I have participated in more remote video conferences than probably in my whole life before. In those months I improved the audio and video hardware and the lighting, learned new shortcuts for those tools, and in general tried to optimise a few parts of my setup.

One of the problems I encountered on this journey was the selection of the correct microphone. The webcam, which luckily I got already before the pandemic, has an integrated microphone. The sound quality is OK, but compared to the microphone in the new headset it is awful. The problem was that whenever I plugged in the webcam, its microphone would be selected as the new default. So I had to manually change the sound settings every time I plugged or unplugged the webcam.

I asked about that problem on Mastodon (automatically forwarded to this proprietary microblogging service). There were several suggestions on how to fix it. In the end I decided to use PulseAudio Volume Control, as I thought that is also a solution other people around me can easily implement. There, under the Configuration tab, you can switch the profile for the webcam to Off, as seen in the screenshot.

Screenshot of PulseAudio Volume Control's Configuration tab with the webcam profile set to Off

This way I can plug in and unplug the webcam without the need to always switch the microphone settings. That saves quite some time with such a high number of video calls every day.
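
For those who prefer the command line, the same switch can also be done with pactl; the card name below is only an example, so list your own cards first:

# find the name of the webcam's sound card
pactl list cards short
# turn its audio profile off (replace the card name with yours)
pactl set-card-profile alsa_card.usb-My_Webcam-02 off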

Thanks a lot to all who provided suggestions on how to fix this problem, and to Lennart Poettering for writing PulseAudio Volume Control and publishing it under a Free Software license. Hopefully, in the future, such settings for the default sound and video hardware will be included directly in the general settings of the desktop environment as well.

Wednesday, 31 March 2021

MotionPhoto / MicroVideo File Formats on Pixel Phones

  • Losca
  • 11:24, Wednesday, 31 March 2021

Google Pixel phones support what they call ”Motion Photo” which is essentially a photo with a short video clip attached to it. They are quite nice since they bring the moment alive, especially as the capturing of the video starts a small moment before the shutter button is pressed. For most viewing programs they simply show as static JPEG photos, but there is more to the files.

I’d really love proper Shotwell support for these file formats, so I posted a longish explanation with many of the details in this blog post to a ticket there too. Examples of the newer format are linked there too.

Info posted to Shotwell ticket

There are actually two different formats, an old one that is already obsolete, and a newer current format. The older ones are those that your Pixel phone recorded as ”MVIMG_[datetime].jpg", and they have the following meta-data:

Xmp.GCamera.MicroVideo                       XmpText     1  1
Xmp.GCamera.MicroVideoVersion XmpText 1 1
Xmp.GCamera.MicroVideoOffset XmpText 7 4022143
Xmp.GCamera.MicroVideoPresentationTimestampUs XmpText 7 1331607

The offset is actually from the end of the file, so one needs to calculate accordingly. But it is exact otherwise, so one can simply extract the video with that meta-data information:

#!/bin/bash
#
# Extracts the microvideo from a MVIMG_*.jpg file

# The offset is from the ending of the file, so calculate accordingly
offset=$(exiv2 -p X "$1" | grep MicroVideoOffset | sed 's/.*\"\(.*\)"/\1/')
filesize=$(du --apparent-size --block=1 "$1" | sed 's/^\([0-9]*\).*/\1/')
extractposition=$(expr $filesize - $offset)
echo offset: $offset
echo filesize: $filesize
echo extractposition=$extractposition
dd if="$1" skip=1 bs=$extractposition of="$(basename -s .jpg $1).mp4"

The newer format is recorded in filenames called ”PXL_[datetime].MP.jpg”, and they have a _lot_ of additional metadata:

Xmp.GCamera.MotionPhoto                      XmpText     1  1
Xmp.GCamera.MotionPhotoVersion XmpText 1 1
Xmp.GCamera.MotionPhotoPresentationTimestampUs XmpText 6 233320
Xmp.xmpNote.HasExtendedXMP XmpText 32 E1F7505D2DD64EA6948D2047449F0FFA
Xmp.Container.Directory XmpText 0 type="Seq"
Xmp.Container.Directory[1] XmpText 0 type="Struct"
Xmp.Container.Directory[1]/Container:Item XmpText 0 type="Struct"
Xmp.Container.Directory[1]/Container:Item/Item:Mime XmpText 10 image/jpeg
Xmp.Container.Directory[1]/Container:Item/Item:Semantic XmpText 7 Primary
Xmp.Container.Directory[1]/Container:Item/Item:Length XmpText 1 0
Xmp.Container.Directory[1]/Container:Item/Item:Padding XmpText 1 0
Xmp.Container.Directory[2] XmpText 0 type="Struct"
Xmp.Container.Directory[2]/Container:Item XmpText 0 type="Struct"
Xmp.Container.Directory[2]/Container:Item/Item:Mime XmpText 9 video/mp4
Xmp.Container.Directory[2]/Container:Item/Item:Semantic XmpText 11 MotionPhoto
Xmp.Container.Directory[2]/Container:Item/Item:Length XmpText 7 1679555
Xmp.Container.Directory[2]/Container:Item/Item:Padding XmpText 1 0

Sounds like fun and lots of information. However I didn't see why the “Length” in the first item is 0 and I didn't see how to use the latter Length info. But I can use the mp4 headers to extract it:

#!/bin/bash
#
# Extracts the motion part of a MotionPhoto file PXL_*.MP.jpg

extractposition=$(grep --binary --byte-offset --only-matching --text \
-P "\x00\x00\x00\x18\x66\x74\x79\x70\x6d\x70\x34\x32" $1 | sed 's/^\([0-9]*\).*/\1/')

dd if="$1" skip=1 bs=$extractposition of="$(basename -s .jpg $1).mp4"

UPDATE: I wrote most of this blog post earlier. When now actually getting to publishing it a week later, I see the obvious, i.e. the ”Length” is again simply the offset from the end of the file, so one could use the same, less brute-force approach as for MVIMG. I’ll leave the above as is however for the ❤️ of binary grepping.
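
For completeness, here is an untested sketch of that less brute-force variant, mirroring the MVIMG script above and assuming exiv2 prints the Item:Length values in the listing format shown earlier:

#!/bin/bash
#
# Sketch: extract the video from a PXL_*.MP.jpg via the Item:Length metadata
# (interpreted, as noted above, as an offset from the end of the file)

# take the largest Item:Length value (the video; the primary image has 0)
offset=$(exiv2 -p x "$1" | grep 'Item:Length' | awk '{print $NF}' | sort -n | tail -1)
filesize=$(du --apparent-size --block=1 "$1" | sed 's/^\([0-9]*\).*/\1/')
extractposition=$(expr $filesize - $offset)
dd if="$1" skip=1 bs=$extractposition of="$(basename -s .jpg $1).mp4"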

(cross-posted to my other blog)

What's in a name?

Often when people conceptualise transgender people, there is a misery inherent to our identity. There is the everyday discrimination, gender dysphoria, the arduous road of transition, and the sort of identity crisis that occurs when we let go of one name and choose another. And while being transgender certainly can be all of that, it’s a pity that the joyous aspects are often forgotten, or disappear behind all the negative clouds that more desperately require society’s attention. For this International Transgender Day of Visibility, I want to talk about those happy things. I want to talk about names, introspection, and the mutability of people.

Sometimes I mention to people that I have chosen my own name. In response, people often look at me as though I had just uttered an impossibility. What? Why would you? How would you do that? Can you even do that? I find this a little funny because it’s the most normal thing for me, but a never-even-thought-about-that for the vast majority of people. Such an unlikely thing, in fact, that some people consequently ask me for my real name. A self-selected name cannot be a real name, right?

But I want to make a case for changing one’s name, even if you’re not trans.

Scrooge

Imagine Scrooge from A Christmas Carol. In your mind’s eye, you will likely picture him as he was introduced to you: “a squeezing, wrenching, grasping, scraping, clutching, covetous, old sinner! Hard and sharp as flint, from which no steel had ever struck out generous fire; secret, and self-contained, and solitary as an oyster.”

But that Scrooge—the Scrooge which everyone knows—is the man from the beginning of the novel. By the end of the novel, Scrooge has changed into a compassionate man who wishes joy to humanity and donates his money to the poor. And even though everyone knows the story, almost no one thinks “good” when they hear the name Scrooge.

I don’t really know why people can’t let go of the image of Scrooge. Maybe his awfulness was so strong that one can’t just forget about it. Maybe we haven’t spent enough time with the new Scrooge yet. Maybe his name simply sounds essentially hideous, and whatever he did to deserve his reputation doesn’t even matter.

Or maybe his name has become akin to a meme for stingy people, similar to today’s “Chad” for handsome, popular, probably not-so-smart men, or “Karen” for vile, selfish, bourgeois women (although that meme quickly devolved into a sexist term meaning “any woman I don’t like”, but whatever).

All names are memes

Ultimately, all names are memes. They evoke diverse feelings from their related clichés. The clichés can relate to country, region, language, age, gender, class, or any other arbitrary thing, and any combination thereof.

The fact that names are memes likely isn’t a great thing—all the above assumptions can be harmful in their own ways—but it’s also probably unavoidable. If most people with the name Thijs are Dutch men, then it follows that people are going to take note of this. And sometimes, someone becomes famous enough that they become the prime instance of their name: it’s difficult to imagine a Napoléon who isn’t the 19th-century leader of France.

More interestingly, though, we ourselves memeify our names through existence. If you’re the only person with a certain name inside of a community, then all members of that community will subconsciously base their associations with that name on you. You effectively become the prime instance of that name within your community.

Memes don’t change; people do

A trait of memes—certainly popular ones—is that they are incredibly well-polished. Like a Platonic Form, a meme embodies the essence of something very specific at the intersection of all of its instances. Because of this, memes very rarely change in their meanings. So what to do when an instance of the meme changes?

Although the journeys of trans people all wildly vary, I’ve yet to meet a trans person whose journey did not include an almost unbearable amount of introspection. A deep investigation of the self, not just as who you are, but who you want to be. And inevitably, you end up asking yourself this question: “If you could be anybody at all, who would you want to be?”

Invariably the answer is something along the lines of “myself, but different”. For trans people, this involves a change in gender presentation, and society mandates that people who present a certain gender should have a name that reflects that. So we choose our own name. With any luck, we choose a cool name.

But what if we extended that line of thinking? Going through a period of introspection and coming out of it a different person is not something that is entirely unique to trans people. We ultimately only get to occupy a single body in this life, so we might as well make that person resemble the kind of person we would really fancy being. So why not change your name? Get rid of the old meme, and start a new one.

Now, understandably, we cannot change into anybody at all. There are limits to our mutability, although those limits are often broader than our imagination. Furthermore, we may never become the perfect person we envision when we close our eyes. And that’s okay, but we can get a little closer. And for me, the awareness of the mutability of names excites the awareness of the mutability—and consequently the potential for improvement—of people.

Happy International Transgender Day of Visibility.

Sunday, 28 March 2021

Saturday, 13 March 2021

KDE Gear 21.04 releases branches created

 Make sure you commit anything you want to end up in the 21.04 releases to them

We're already past the dependency freeze.

The Feature Freeze and Beta is this Thursday, the 18th of March.

More interesting dates
   April  8: 21.04 RC (21.03.90) Tagging and Release
   April 15: 21.04 Tagging
   April 22: 21.04 Release

https://community.kde.org/Schedules/KDE_Gear_21.04_Schedule

Okular: Should continuous view be an okular setting or a document setting?

In Okular:

 

Some settings are Okular-wide: if you change them, they will be changed in all future Okular instances. An easy example is changing the shortcut for saving from Ctrl+S to Ctrl+Shift+E.

 

Some other settings are document-specific, for example zoom: if you change the zoom of a document, it will only be restored when opening the same document again, but not if you open a different one. There's also a "default zoom value for documents you've never opened before" in the settings.


Some other settings, like "Continuous View", are a bit of a mess and are both. "Continuous View" wants to be a global setting (i.e. so that if you hate continuous view you always get a non-continuous view), but it is also restored to the state it had when you closed the document you're just opening.


That's understandably a bit confusing for users :D


My suggestion for continuous view would be to make it work like zoom: be purely document-specific but also have a default option in the settings dialog for people that hate continuous view.

 

I'm guessing this should cover all our bases?

 

Opinions? Anything I may have missed?

Thursday, 04 March 2021

Is okular-devel mailing list the correct way to reach the Okular developers? If not what do we use?

After my recent failure to gain traction getting people to join a potential Okular Virtual Sprint, I wondered: is the okular-devel mailing list representative of the current Okular contributors?

 

Looking at the sheer number of subscribers, one would think it probably is. There are currently 128 people subscribed to the okular-devel mailing list, and we definitely don't have that many contributors, so it would seem the mailing list is a good place to reach all the contributors. But let's look at the actual numbers.

 

Okular git repo has had 46 people contributing code[*] in the last year.


Only 17% of those are subscribed to the okular-devel mailing list.


If we count commits instead of committers, the number rises to 65%, but that's just because I account for more than 50% of the commits; if you remove me from the equation the number drops to 28%.


If we don't count people that only committed once (thinking that they may not be really interested in the project), the number is still only 25% of committers and 30% of commits (ignoring me again) subscribed to the mailing list.


So it would seem that the answer is leaning towards "no, I can't use okular-devel to contact the Okular developers".


But if not the mailing list, what am I supposed to use? I don't see any other method that would be better.


Suggestions welcome!



[*] Yes, I'm limiting contributors to git committers at this point; it's the only thing I can easily count. I understand there are more contributions than code contributions.
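
For reference, a rough way to get numbers like the ones above out of the repository (an approximation of the counting described in this post, not necessarily the exact commands used):

# committers in the last year, with their commit counts
git shortlog -sn --since="1 year ago"
# just the number of distinct committers
git shortlog -sn --since="1 year ago" | wc -l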

Saturday, 20 February 2021

How to build your own dyndns with PowerDNS

I upgraded my home internet connection and as a result I had to give up my ~15y static IP. Having an ephemeral dynamic IP means I need to use a dynamic DNS service to access my home PC. Although the ISP's CPE (router) supports a few public dynamic DNS services, I chose to create a simple solution on my own self-hosted DNS infra.

There are a couple of ways to do that. PowerDNS supports Dynamic Updates, but I do not want to open PowerDNS to the internet for this kind of operation. I just want to use cron with a simple curl over HTTPS.

PowerDNS WebAPI

To enable and use the built-in webserver and HTTP API we need to update our configuration:

/etc/pdns/pdns.conf

api-key=0123456789ABCDEF
api=yes

and restart the PowerDNS authoritative server.

verify it

ss -tnl 'sport = :8081'
State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
LISTEN      0       10      127.0.0.1:8081             *:*
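
We can also query the HTTP API directly to check that the key works (using the example API key from above):

curl -s -H 'X-API-Key: 0123456789ABCDEF' http://127.0.0.1:8081/api/v1/servers/localhost | jq .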

WebServer API in PHP

Next, we build our API in PHP.

Basic Auth

Using HTTPS means that the transport layer is encrypted, so we only need to create a basic auth mechanism.

<?php
  if ( !isset($_SERVER["PHP_AUTH_USER"]) ) {
      header("WWW-Authenticate: Basic realm='My Realm'");
      header("HTTP/1.0 401 Unauthorized");
      echo "Restricted area: Only Authorized Personnel Are Allowed to Enter This Area";
      exit;
  } else {
    // code goes here
  }
?>

By sending Basic Auth headers, the $_SERVER PHP array will contain two extra variables:

$_SERVER["PHP_AUTH_USER"]
$_SERVER["PHP_AUTH_PW"]

We do not need to set up an external IDM/LDAP or any other user management system just for this use case (single user access).

and we can use something like:

<?php
  if (($_SERVER["PHP_AUTH_USER"] == "username") && ($_SERVER["PHP_AUTH_PW"] == "very_secret_password")){
    // code goes here
  }
?>

RRSet Object

We need to create the RRSet Object

here is a simple example

<?php
  $comments = array(
  );

  $record = array(
      array(
          "disabled"  => False,
          "content"   => $_SERVER["REMOTE_ADDR"]
      )
  );

  $rrsets = array(
      array(
          "name"          => "dyndns.example.org.",
          "type"          => "A",
          "ttl"           => 60,
          "changetype"    => "REPLACE",
          "records"       => $record,
          "comments"      => $comments
      )
  );

  $data = array (
      "rrsets" => $rrsets
  );

?>

Running this data set through json_encode should return something like this:

{
  "rrsets": [
    {
      "changetype": "REPLACE",
      "comments": [],
      "name": "dyndns.example.org.",
      "records": [
        {
          "content": "1.2.3.4",
          "disabled": false
        }
      ],
      "ttl": 60,
      "type": "A"
    }
  ]
}

be sure to verify that records, comments and rrsets are also arrays !
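
If you want to test the payload before wiring up the PHP part, roughly the same update can be sent with curl against the API (same example key and zone as above):

curl -s -X PATCH \
     -H 'X-API-Key: 0123456789ABCDEF' \
     -H 'Content-Type: application/json' \
     --data '{"rrsets":[{"name":"dyndns.example.org.","type":"A","ttl":60,"changetype":"REPLACE","records":[{"content":"1.2.3.4","disabled":false}]}]}' \
     http://127.0.0.1:8081/api/v1/servers/localhost/zones/example.org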

Stream Context

The next thing is to create our stream context:

$API_TOKEN = "0123456789ABCDEF";
$URL = "http://127.0.0.1:8081/api/v1/servers/localhost/zones/example.org";

$stream_options = array(
    "http" => array(
        "method"    => "PATCH",
        "header"    => "Content-type: application/json \r\n" .
                        "X-API-Key: $API_TOKEN",
        "content"   => json_encode($data),
        "timeout"   => 3
    )
);

$context = stream_context_create($stream_options);

Be aware of the " \r\n" . in the header field; this took me more time than it should have! To have multiple header fields in the HTTP stream, you need (I don't know why) to carriage-return them.

Get Zone details

Before continuing, let's make a small script to verify that we can successfully talk to the PowerDNS HTTP API with PHP.

<?php
  $API_TOKEN = "0123456789ABCDEF";
  $URL = "http://127.0.0.1:8081/api/v1/servers/localhost/zones/example.org";

  $stream_options = array(
      "http" => array(
          "method"    => "GET",
          "header"    => "Content-type: application/jsonrn".
                          "X-API-Key: $API_TOKEN"
      )
  );

  $context = stream_context_create($stream_options);

  echo file_get_contents($URL, false, $context);
?>

by running this:

php get.php | jq .

we should get the records of our zone in json format.

Cron Entry

You should be able to put the entire codebase together by now, so let's work on the last component of our self-hosted dynamic DNS server: how to update our record via curl.

curl -sL https://username:very_secret_password@example.org/dyndns.php

Running it every minute should do the trick:

# dyndns
* * * * * curl -sL https://username:very_secret_password@example.org/dyndns.php
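
To verify that the update actually landed, you can query the authoritative server directly (the nameserver name below is just an example):

dig +short A dyndns.example.org @ns1.example.org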

That’s it !

Tag(s): php, curl, dyndns, PowerDNS

Panel - Free Software development for the public administration

On 17 February I participated in a panel discussion about opportunities, hurdles, and incentives for Free Software in the public administration. The panel was part of the event "digital state online", focusing on topics like digital administration, digital society and digital sovereignty. Patron of the event is the German Federal Chancellor's Office. Here is a quick summary of the points I made and some quick thoughts about the discussion.

The "Behördenspiegel" meanwhile published the recordings of the discussion moderated by Benjamin Stiebel (Editor, Behörden Spiegel) with Dr. Hartmut Schubert (State Secretary, Thuringian Ministry of Finance), Reiner Bamberger (developer at Higher Administrative Court of Rhineland-Palatinate), Dr. Matthias Stürmer (University of Bern), and myself (as always you can use youtube-dl).

We were asked to make a short 2 minutes statement at the beginning in which I focused on three theses:

  • Due to digitalization, the public administration's actions are more intransparent than they used to be. We need to balance this.
  • The sharing and re-use of software in public administrations together with small and medium-sized enterprises must be better promoted.
  • Free Software (also called Open Source) in administration is important in order to maintain control over state action and Free Software is an important building block of a technical distribution of powers in a democracy in the 21st century.

Furthermore, the "Public Money? Public Code!" video was played:

In the discussion, we also talked about why there is not yet more Free Software in public administrations. State Secretary Dr. Schubert's point was that he is not aware of legal barriers and that the main problem is in the implementation phase (as is the case for policies most of the time). I still mentioned a few hurdles here:

  • Buying licences is still sometimes easier than buying services and some public administrations have budgets for licences which cannot be converted to buy services. This should be more flexible; and it was good to hear from State Secretary Dr. Schubert that they changed this in Thuringia.
  • Some proprietary vendors can simply be procured from framework contracts. For Free Software that option is often missing. It could be helpful if other governments follow the example of France and provide a framework contract which makes it easy to procure Free Software solutions - including from smaller and medium-sized companies.

One aspect I noticed in the discussion and the questions we received in the chat: sometimes Free Software is presented as if, in order to use it, public administrations would have to look at code repositories and click through online forums in order to find out the specifics of the software. Of course, they could do that, but they could also - as they do for proprietary software, and as is more common if you do not have in-house contributors - simply write in an invitation to tender that a solution must for example be data-protection compliant, and that you want the rights to use, study, share, and improve the software for every purpose. So as a public administration you do not have to do such research yourself, as you would have to for your private hobby project; you can - and depending on the level of in-house expertise often really should - involve external professional support to implement a Free Software solution. This can be support from other public administrations or from companies providing Free Software solutions (on purpose I am not writing "Free Software companies" here, for further details see the previous article "There is no Free Software company - But!").

We also still need more statements from government officials, politicians, and other decision makers on why Free Software is important - like, in recent months in Germany, the conservative CDU's party convention resolution on the use of Free Software or the statement by the German Chancellor Merkel about Free Software below. This is important so that people in the public administration who want to move to more Free Software can better justify and defend their actions. In order to increase the speed towards more digital sovereignty, decision makers need to reverse the situation from "nobody gets fired for buying Microsoft" to "nobody gets fired for procuring Free Software".

I also pleaded for a different error culture in the public administration. Experimentation clauses would allow testing innovative approaches without every piece of bad feedback immediately suggesting that a project has to be stopped. We should think about how to incentivize the sharing and reuse of Free Software. For example, if public administrations document good solutions and support others in benefiting from those solutions as well, could they get a budget bonus for future projects? Could we provide smaller budgets which can be used more flexibly to experiment with Free Software, e.g. by providing small payments to Free Software offerings even if they do not yet meet all the criteria to be used productively for the tasks envisioned?

One point we also briefly talked about was centralization vs decentralization. We have to be careful that "IT consolidation" efforts do not lead to a situation of more monopolies and more centralization of powers. For Germany, I argued that the bundling of IT services and expertise in some authorities should not go so far that federal states like Thuringia or other levels and parts of government lose their sovereignty and are dependent on a service centre controlled by the federal government or another part of the state. Free Software provides the advantage that for example the federal state of Bavaria can offer a software solution for other federal states. But if they abuse their power over this technology, other federal states like Thuringia could decide to host the Free Software solution themselves, and contract a company to make modifications, so they can have it their way. The same applies to other mechanisms for distribution of power like the separation between a legislature, an executive, and a judiciary. All of them have to make sure their sovereignty is not impacted by technology - neither by companies (as more often discussed) nor by other branches of government. For a democracy in the 21st century such a technological distribution of power is crucial.

PS: In case you read German, Heise published an article titled "The public administration's dependency on Microsoft & Co is 'gigantic'" (in German) about several of the points from the discussion. And if you do not know it yet, have a look at the expert brochure to modernise public digital infrastructure with public code, currently available in English, German, Czech, and Brazilian Portuguese.

Sunday, 14 February 2021

My history with free software – a story told on #ilovefs day

In October 2019, I went to Linuxhotel in Essen, as I had been invited to attend that year’s General Assembly in the FSFE as a prospective member. I had a very enjoyable weekend, where I met new people and renewed older acquaintances, and it was confirmed to me what good and idealistic people are behind that important part of the European free software movement.

On the photo you see Momo, the character from Michael Ende’s eponymous novel – a statue which I was delighted to see, given that “Momo”  has been one of my favorite children’s novels for decades.

I first met the concept of free software at the university, as a student of physics and computer science in the early nineties. As students, we had to work on  the old proprietary SunOS and HP-UX systems; we had to use the Emacs editor and GNU C compiler (thanks, Richard Stallman and GNU team!) as well as the LaTeX text processing system (thanks, Donald Knuth and Leslie Lamport!)

Some of my fellow students were pretty interested in the concepts of free software and the struggle against software patents, but not me – to be honest, at the time I was not interested in software or computing at all. Computer science to me was mainly algorithmics and fundamental concepts, invariants and termination functions (thanks, Grete Hermann!) as well as Turing machines, formal languages and the halting theorem (thanks, Alan Turing and Noam Chomsky!). The fact that computer programs could be written and run was, I thought, mainly a not very interesting byproduct of these intellectual pursuits. In my spare time I was interested in physics as well as in topics more to the "humanities" side – I spent a lot of afternoons studying Sanskrit and Latin and, at one time, even biblical Hebrew, and read Goethe's Faust and Rabelais' Gargantua and Pantagruel in the original languages. My main, overarching interests in those years were art in the widest sense, epistemology (specifically, the epistemology of physics) and the history of religion. I also read a lot of comic books and science fiction novels.

After leaving the university, however, I got a job as a developer and worked mainly with internal software at a huge corporation and with proprietary software at a major vendor to the newspaper industry. It was at that time, in 2005, that I once again stumbled over the concept of free software – as explained by Richard Stallman – and started using GNU/Linux at home (Ubuntu, thanks, Mark Shuttleworth, Ian Murdock, Linus Torvalds, Ingo Molnar and everybody else involved in creating Ubuntu and its building blocks Linux and Debian!)

I suddenly realized, as someone who had become interested in software and its potential impact on society, that Stallman's analysis of the situation is correct: If we want to build society's infrastructure on software – and that seems to be happening – and we still want a free society, software must be free in the FSF sense – users, be they individuals, public authorities or corporations, must have the four freedoms. If this is not the case, all of these users will be at the mercy of the software vendors, and this lack of transparency in the public infrastructure may, in turn, be used by governments and corporations to oppress users – which has happened time and time again.

Free software enables users (once again – be they governments, companies or actual people) to protect themselves against this kind of abuse and gives them the freedom to understand and participate in the public infrastructure. By allowing changing and redistributing software products it also reverses the power relations, giving the users the power they should have and ensures that vendors can no longer exploit monopolies as their private money printing machines (no thanks, Microsoft, for that lesson !)

After discovering the concept of free software and the fact that I could practically use free software only in my daily existence, I started  blogging and communicating about it – a lot.

In 2009, a friend of mine started the “Ubuntu Community Day”, an initiative to further the use of Ubuntu in Aarhus, Denmark, and give back to the community that created this OS.

As such, in 2010 we helped co-found Open Space Aarhus, along with a group of hardware hackers. After some years, this group petered out, and I had discovered the FSFE and become a Fellow (that which now is called Supporter). As such, I was more interested in addressing real political work for free software than in Ubuntu advocacy (as good a path as this is for getting people acquainted with free software), and in 2012 I started an FSFE Local Group in Aarhus, with regular meetings in the hacker space. This group existed until 2015, where we co-organized that year’s LibreOffice conference (thanks to Florian Effenberger and Leif Lodahl and everyone else involved!) but ended up stopping the meetings, as I had become busy with other, non-software related things and none of the other members were ready to assume the responsibility of running it.

As mentioned, when I discovered free software, I was in a job writing proprietary software. While I could live with that as long as the salary was good and I was treated well, making a living from free software had become not only a dream; it now seemed to be the best and possibly the only ethically defensible way of working with software.

It also became clear to me that while we may admire volunteer-driven and community-driven free software projects, these are not enough if software freedom is to become the accepted industry standard for all software, as is after all the goal. Some software may be fun to write, but writing and maintaining domain-specific software for governments and large corporations to use in mission-critical scenarios is not fun – it is work, and people will not and cannot do it in their spare time. We need actual companies supplying free software only, and we need many of them.

After some turbulence in my employment, including a period of unemployment in the wake of the financial crisis in 2009, in 2011 I joined my current employer, Magenta ApS. This is the largest Scandinavian company producing only free software – a software vendor  that has never delivered any product to any customer under a proprietary license and has no plans to do this either, ever. With 40 employees, we are currently selling products to local governments and similar organizations that used to be the sole province of huge corporations with shady ethical practices – and I’m proud to say that this means that in our daily jobs, we’re actually changing things to the benefit of these organizations, and of Danish society at large. (Thanks to Morten Kjærsgaard for founding the company and being its motor and main driving force for all these years!)

And in the FSFE General Assembly  of 2020, I was confirmed as a permanent member of the GA. I’d like to thank the founders of the FSFE and the community around it (too many to list all the names, so y’all are included!) for this confidence and hope to be able to continue contributing to free software, a cause I discovered 15 years ago,  for at least the next 15 years as well.

Shelter - take a break from work

On today's "I love Free Software Day" I would like to thank "PeterCXY" and others who contributed to shelter.

Until recently I have used two separate phones: one for the FSFE and one privately. The reason is that I prefer to have the ability to switch off the work phone when I do not want to be available for work but focus on my private life. After the FSFE phone did not get further security updates for a long time I was facing the decision: should I get a new phone for work -- but waste resources for the hardware -- or should I continue to use the old one with the known security issues?

Thanks to Torsten Grote, the FSFE volunteer who started the FSFE's Free Your Android campaign, I was made aware of another option: use shelter, which leverages the work profile feature in Android. With this solution I have the latest security updates from my private phone and the ability to easily switch off the work applications.

Screenshot of apps in work profile

You just clone installed apps into the work profile. If you do not also use them privately, remove them from your personal profile afterwards. Once that is set up, you can disable notifications from all those apps by pausing the work profile.

Screenshot of paused work profile

This is just one of the use cases of shelter, you can also use it to

  1. Isolate apps, which you do not trust (e.g. if you are forced to use some proprietary apps on your phone) so they cannot access your data / files outside the profile

  2. Disable "background-heavy, tracking-heavy or seldom-used apps when you don't need them."

  3. Clone apps to use two accounts on one device. Something many people have asked about for messenger apps which do not allow setting up more than one account (like most Matrix or XMPP clients) in one instance of the app.

If you want to read more about it and speak German, I can recommend the shelter tutorial by Moritz Tremmel; unfortunately I have not yet found something comparable in English.

So a big thank you to PeterCXY and others contributing to the Free Software app shelter! Please keep up your work for software freedom!

Shelter logo

Beside that thanks to countless other Free Software contributors who work on other components of that setup:

  • CalyxOS: for providing an operating system which you can also recommend to non-tech-savvy people, without being their point of support afterwards.
  • LineageOS: for providing builds to liberate your phones on many devices.
  • Replicant: for working hard to remove and replace proprietary components from Android phones.
  • F-Droid: for making it easy to install shelter as well as many other apps on liberated phones.
  • OpenMoko: for doing the pioneer work for Free Software on mobile phones
  • Librem phone, Pine Phone, Ubuntu Phone, and others who are working on non-android Free Software solutions for phones.
  • Finally: Torsten Grote and many other FSFE volunteers who helped people to liberate their phones with the FSFE's Free Your Android project.

Friday, 12 February 2021

Destination status quo

I recently happened upon an article1 that argued against the four freedoms as defined by the Free Software Foundation. I don’t actually want to link to the article—its tone is rather rude and unsavoury, and I do not want to end up in a kerfuffle—but I’ll include an obfuscated link at the end of the article for the sake of integrity.

The article—in spite of how much I disagree with its conclusions—inspired me to reflect on idealism and the inadequacy of things. Those are the things I want to write about in this article.

So instead of refuting all the points with arguments and counter-arguments, my article is going to work a little differently. I’m going to concede a lot of points and truths to the author. I’m also going to assume that they are ultimately wrong, even though I won’t make any arguments to the contrary. That’s simply not what I want to do in this article, and smarter people than I have already made a great case for the four freedoms. Rather, I want to follow the author’s arguments to where they lead, or to where they do not.

The four freedoms

The four freedoms of free software are four conditions that a program must meet before it can be considered free. They are—roughly—the freedoms to (1.) use, (2.) study, (3.) share, and (4.) improve the program. The assertion is that if any of these conditions is not met, the user is meaningfully and helplessly restricted in how they can exercise their personal liberties.

The aforementioned article views this a little differently, however. Specifically, I found its retorts on the first and second freedoms interesting.

The first freedom

The first freedom—in full—is “the freedom to run the program as you wish, for any purpose”. The retort goes a little like this, using generous paraphrasing:

The freedom to use the program for any purpose is a meaningless freedom. It is the programmer—and by extension, the program—that determines the parameters of the program’s purpose. If it is the program’s purpose to display image files, then try as you might, but it’s not going to send any e-mails. Furthermore, the “free” program might even contain purposes that you find undesirable whereas a “non-free” program might not.

There’s one very interesting thing about this retort. You see, the author doesn’t actually make any factual errors on the face of it. An image viewer cannot send e-mails, and some non-free programs exhibit less undesirable behaviours than their free counterparts. These things are true, but truths is all they are. Something is missing…

The second freedom

In the article, the author points towards one free program that they consider harmful or malicious, accusing it of containing a lot of anti-features. Furthermore, they emphasise the uselessness of the second freedom to study and change the program. Paraphrased:

Even supposing that you have the uncommon ability to read and write source code, this program is so thoroughly complex that you can’t meaningfully study or alter it. And supposing that you do manage to adjust the program, you would have to keep pace with upstream’s updates, which is such a labour-intensive, nigh-insurmountable task that you might end up wondering what use this freedom is to you if it’s practically impossible to make any use of it.

The author goes on to add that there exist other, better programs that achieve approximately the same thing as this malicious free program, but without the anti-features. To the author, the fact that the malicious program is free is a useless distinction—it’s malicious, after all—and they could not care one way or another whether the better program is free or not. They care that it’s better. What use is freedom if you have to suffer worse programs?

And, you know, the author is right—again. All the logic follows. But this is all very matter-of-fact. It’s stating the obvious. It’s repeating the way that things are, and concluding that that’s how it is. Which it is. And there may well be a lot to this state of affairs. But, well, that’s all it is. There’s nothing more to it than that which is, in fact, the case.

Am I losing you yet?

An intermission: the future is behind us

In some languages, when people use gestures to aid in speech, they gesture ahead of them to signal the past, and behind them to signal the future. If you’re like me, this may seem absurd. Surely the past is behind us and the future ahead of us. We move forward through time, don’t we? Ever onward.

But then you look at the English language, and it starts to make a modicum of sense. We use the words “before” and “behind/after” both in spatial and temporal contexts. If we imagine a straight narrow hallway containing me and a cat, and I am looking towards the cat, then we would say that the cat is standing before me. If we also imagine a straight line of time, and the cat jumps first, and I jump second, then we would also say that the cat jumped before I did.

/img/catspacetime.png

The above graphic should make this idea a little clearer. In both the perspectives, I am looking at the before. As a matter of fact, in the temporal perspective, it’s the only perceivable direction. If the cat turns around in the hallway, it can see me. If the cat turns around in the timeline, it can see nothing—just the great uncertainty of the future that has not yet happened. It needs to wait until I jump in order to perceive it, by which time it’s looking towards the past—the before—again.

The future is necessarily behind us, after us.

Staring at one’s feet

Let’s create an analogy and stretch it beyond its limits. If we place the author on the aforementioned timeline, then I’m going to assert that the author is neither looking ahead towards the past to learn from history, nor turns their head to look behind to the future to imagine what it could look like. Rather, they are looking down, staring at their feet on the exact spot in time which they occupy.

There’s a Machiavellian immediacy to the author’s arguments—faced with a certain set of circumstances, it’s the author’s responsibility to choose what’s best at this moment in time. But the immediacy is also immensely short-sighted. The article contains no evaluation of the past—no lessons drawn from the past abuses of non-free software—and the article neither contains a coherent vision of the future. If not free software, then what?

Better software, the author responds.

A stiff neck

“If I had asked people what they wanted, they would have said faster horses” is a quote misattributed to Henry Ford of automobile fame. It’s often used to emphasise that people don’t actually know what they want, but I like it more as a metaphor for the blindness towards the future and an inability to imagine. People don’t know what could be, so when prompted about a better future, their response is effectively “like the status quo, but better”.

This sentiment is echoed in Mark Fisher’s Capitalist Realism: Is There No Alternative? (2009). The book talks about “the widespread sense that not only is capitalism the only viable political and economic system, but also that it is now impossible even to imagine a coherent alternative to it”. “It is easier to imagine an end to the world than an end to capitalism”, Fisher attributes to Fredric Jameson and Slavoj Žižek.

In this sense, I think that horses and capitalism are analogous to non-free software. There exists a time before all these things, and there exists a time after them. But because our backs are turned to the future, we can’t see it—we must imagine it.

This strikes at the heart of why the author inspired me to write this article. To me, the author demonstrates a rigid, stubborn incapability of turning their head and imagining a future that isn’t just the present, but better. The author tackles an ideology—a way of imagining a different, better future—without ever lifting their head from staring at their feet.

And that’s fascinating.

Painting in the mind’s eye

Now, suppose I could visit the author and (gently!) turn their head to face the future. Without intending to come across as insulting, I’m not entirely certain that the author would see anything other than a void. Of course, seeing a void is entirely reasonable—when we turn our eyes to the future, we’re trying to see something that does not exist.

Looking at the future, therefore, is an exercise in creativity. The void is a potentially blank canvas on which one can paint. And like painting, it’s easiest to paint something you’re currently looking at, harder to paint something from memory, and hardest to paint something you’ve never before seen. And frankly, it’s really uncomfortable to twist your neck to look behind you.

But sometimes, even simply lifting one’s head feels like a struggle. Because the past isn’t any different from the future in one important aspect—it is not immediately perceptible. We can look at the present by opening our eyes, but we need either artefacts or imagination to paint the past. The fewer artefacts or memories we have, the harder it becomes to perceive the past.

La langue anglaise

If you’re reading this, chances are you’re invested in software freedom, and you don’t exactly struggle to see the common vision of a future with free software. But I want to nonetheless try to demonstrate how difficult it is to see something that does not yet exist, and how difficult it is to remember that the past exists when considering a change to the status quo. Furthermore, I want to demonstrate why someone might be hostile to our painting of the future even if they were able to see it.

This article is written in English. English, of course, is the common international language. Everybody—or at least everybody with an interest in interacting with the globe—speaks it. Surely. And what a boon this language is to the world. We can all talk to each other and understand each other, and the language is rich and flexible and– why are you looking at the Wikipedia page on English orthography? Why are you looking at the Wikipedia page on the British Empire?

You see, the English language isn’t exactly great, and comes with a lot of disadvantages. Its spelling is atrocious, its vocabulary is bigger than it has any right to be, it unfairly gives native speakers an advantage, it unfairly gives countries that use English as an official language extra cultural influence, and some people might deservedly have some opinions on using the language of their oppressors.

Now, suppose we could imagine any future at all. Would we like to keep English as the common tongue? Well, there surely are some disadvantages to giving it up. All of modern engineering—modern life—is built on top of English, so we’d have to convert all of that. It would also inevitably make art and resources from this time period less accessible. And, you know, people would have to learn a different language. Who has time for that? We’ve already settled on English, so we might as well ride it out.

These are all arguments from the status quo, however. If we equate English to non-free software, and a better auxiliary language to free software, then these arguments are the equivalent of saying that you really just want to do X, and the non-free program is simply better at doing X. Besides, you’re already using this non-free program, and it would be a hassle to switch. This line of thought is incapable of imagining a better future, and dismissive of morality. The morality of the matter was never even addressed (although I realise that I am writing my own strawman here!).

Furthermore, the arguments were entirely dismissive of the past. I take great joy in the knowledge that English is today’s lingua franca, meaning “French language” in Latin. Latin and French were, of course, the common tongues in their respective time periods. The time of the French language wasn’t even that long ago—barely over a century ago! So we’ve obviously switched languages at least twice before, and the world hasn’t at all ended, but it still seems so unthinkable to imagine a future without English.

It is easier to imagine an end to the world than an end to the English language.

Solidarity

Here’s the frustrating thing—you don’t even need to be able to see the future or participate in its creation to be sympathetic to the idea that change might be desirable, or to acknowledge that a problem exists. It is entirely feasible to say that non-free software isn’t a great status quo even if you still depend on it. The least that can be asked of you is not to stand in the way of progress, and the most that can be asked is that you participate in bringing about change, but neither is necessary for a simple act of solidarity.

So too it goes with many other things. There are a great many undesirable status quos in the world, and I don’t have the energy or capacity to look over my shoulder to imagine potential better futures for all of them, and I most certainly don’t have the capacity to participate in effecting change for all of them, but I do my bit where I can. And more importantly, I try not to get in the way.

If the wheels of time err ever on the side of progress, then one day we’ll live in a post-proprietary world. And if the wheels keep churning as they do, the people of the future will see the free software advocates of today as regressive thinkers who were at least moderately better than what came before, but worse than what came after.

In the meantime, I’m still not sure what to do about people who are staring at their feet, but at least I slightly understand where they’re coming from and where they’re headed—destination status quo.


  1. <https colon slash slash digdeeper dot neocities dot org slash ghost slash freetardism dot html> ↩︎

Friday, 05 February 2021

Using Fedora Silverblue for development

I recently switched to Fedora Silverblue for my development machine. I want to document approximately how I do this (and why it’s awesome!).

Fedora Silverblue

This article is not an introduction to Fedora Silverblue, but a short summary is in order: Fedora Silverblue is an immutable operating system that upgrades atomically. Effectively, the root filesystem is mounted read-only, with the exception of /var, /home, and /etc. The system is upgraded by mounting a new read-only snapshot as the root filesystem.
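
Because an upgrade is a whole-image swap, inspecting and rolling back deployments is a first-class operation. As a quick illustration, something like the following works with stock rpm-ostree commands (a sketch; output omitted):

$ rpm-ostree status     # show the booted and pending deployments
$ rpm-ostree upgrade    # stage a new base image, applied on the next boot
$ rpm-ostree rollback   # boot back into the previous deployment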

There are three methods of installing software on Fedora Silverblue:

  • Expanding the immutable base image using rpm-ostree. The base image is intended to be reserved for system components, but you can technically put anything in this image.

  • Installing graphical applications using Flatpak. This is sometimes-sandboxed, sometimes-not-so-sandboxed, but generally quite stable. If a layman were to install Fedora Silverblue, this would be the only installation method they would care about, aside from updating the base image when prompted.

  • Installing CLI tools using toolbox. This is a Podman (read: Docker) container of Fedora that mounts the user’s home directory. Because it’s a Podman image, you can install any RPM using Fedora’s package manager, DNF.

This article by Fedora Magazine goes into slightly more detail.
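
In practice, one command corresponds to each of the three methods above. A rough sketch, where the package and application names are only examples:

$ rpm-ostree install fish                  # layer an RPM onto the base image
$ flatpak install flathub org.gnome.Maps   # install a graphical application
$ toolbox create my-project                # create a container for CLI tools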

The basic development workflow

Instead of littering the base operating system with all means of development tools, development takes place within a toolbox. This looks a little like:

carmenbianca@thinkpad-x395 ~ $ ls
Elŝutoj  Labortablo  Nextcloud  Projektoj  Publike  Ŝablonoj
carmenbianca@thinkpad-x395 ~ $ toolbox create my-project
Created container: my-project
Enter with: toolbox enter my-project
carmenbianca@thinkpad-x395 ~ $ toolbox enter my-project
⬢[carmenbianca@toolbox ~]$
⬢[carmenbianca@toolbox ~]$ ls
Elŝutoj  Labortablo  Nextcloud  Projektoj  Publike  Ŝablonoj
⬢[carmenbianca@toolbox ~]$ # We're still in the same directory, which means
⬢[carmenbianca@toolbox ~]$ # that we still have our GPG and SSH keys, and
⬢[carmenbianca@toolbox ~]$ # other configuration files!
⬢[carmenbianca@toolbox ~]$
⬢[carmenbianca@toolbox ~]$ sudo dnf groupinstall "Development Tools"
[...]

The nice thing is that you now have full freedom to mess with absolutely anything, carefree. If your program touches some files in /etc, you can mess with those files without affecting your operating system. If you want to test your program against a custom-built glibc, you can simply do that without fear of breaking your computer. At worst you’ll break the toolbox, from which you can easily recover by recreating it. And if you need more isolation from e.g. your home directory, you can do that inside of a non-toolbox Podman container, as sketched below.
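
For example, a throwaway Fedora container without the home directory mounted could look like this (a minimal sketch; the image reference is only an example):

$ podman run --rm -it registry.fedoraproject.org/fedora:34 bash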

The editor problem

There is a slight problem with the above workflow, however. Unless you install your code editor inside of the toolbox, the editor has no access to the development tools you’ve installed there. Instead, it only sees the tools available in the base system image or, if it is a Flatpak, the tools in its Flatpak runtime.

There are several ways to get around this, but they’re dependent on the editor you use. I’ll document how I circumvented this problem.

VSCodium in a Flatpak

I use the Flatpak version of VSCodium. VSCodium has an integrated terminal emulator. Unfortunately, the Flatpak shell is extremely barebones—it doesn’t even have vi! Fortunately, we can tell VSCodium to run a different program as our shell. Change settings.json to include:

  [...]
  "terminal.integrated.shell.linux": "/usr/bin/env",
  "terminal.integrated.shellArgs.linux": [
    "--",
    "flatpak-spawn",
    "--host",
    "toolbox",
    "enter",
    "main-toolbox"
  ],
  [...]

main-toolbox here is an all-purpose toolbox that has heaps of tools installed. You can adjust these settings on a per-workspace or per-project level, so a given project might use a different toolbox than main-toolbox.

The way the above configuration works is a little roundabout. It breaks out of the Flatpak and into the base installation using flatpak-spawn --host, and then enters a toolbox using toolbox enter. The end result is that you have an integrated terminal with a functional and feature-rich shell.

This doesn’t solve everything, however. The editor itself also has integrations of its own, such as running tests. Because I mainly do Python development, it is fairly easy to bootstrap this functionality. The Flatpak runtime ships with Python 3.8. This means that I can create a Python 3.8 virtualenv and tell VSCodium to use this virtualenv for all of its Python stuff, which ends up working out just fine. The virtualenv is shared between the Flatpak and the toolbox, because both environments have access to the same file system.
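A minimal sketch of that setup, assuming the runtime exposes its Python as python3 and using a hypothetical project directory:

$ flatpak run --command=sh com.vscodium.codium     # open a shell inside the VSCodium Flatpak
$ python3 -m venv ~/Projektoj/my-project/.venv     # create the virtualenv in the shared home directory

VSCodium is then pointed at ~/Projektoj/my-project/.venv/bin/python as its interpreter, and the same virtualenv can be activated from inside the toolbox with source .venv/bin/activate.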


For non-Python endeavours, Flatpak SDK extensions can be installed. This looks a little like:

$ flatpak install flathub org.freedesktop.Sdk.Extension.dotnet
$ flatpak install flathub org.freedesktop.Sdk.Extension.golang
$ FLATPAK_ENABLE_SDK_EXT=dotnet,golang flatpak run com.vscodium.codium

The final problem I ran into was using VSCodium to edit my Git commit messages. VSCodium has a tiny box in the UI where you can write commit messages, but it’s rather fiddly and not great. So instead I do git config --global --add core.editor 'codium --wait'. This should launch codium --wait path/to/git/commit/message when git commit is run, allow VSCodium to edit the message, and wait until the file is saved and closed before evaluating the message.

The problem is that codium only exists as an executable inside of the Flatpak. It does not exist in the toolbox or the base operating system.

I circumvented this problem by creating a custom script in .local/bin/codium. It tries to detect its current environment by checking whether a certain file exists, and tries to use that environment’s method of accessing the VSCodium Flatpak. This looks like:

#!/bin/bash

# Will still need to pass "--wait" to make it behave nicely.
# In order to get --wait to work in a Flatpak, run
# `flatpak override --user --env=TMPDIR=/var/tmp com.vscodium.codium`.

if [ -f /app/bin/codium ]
then
    exec /app/bin/codium "$@"
elif [ -f /usr/bin/codium ]
then
    exec /usr/bin/codium "$@"
elif [ -f /usr/bin/flatpak-spawn ]
then
    exec /usr/bin/flatpak-spawn --host flatpak run com.vscodium.codium "$@"
elif [ -f /usr/bin/flatpak ]
then
    exec /usr/bin/flatpak run com.vscodium.codium "$@"
else
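    # vi does not understand --wait, so strip it from the arguments before falling back.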
    for arg do
        shift
        [ "$arg" = "--wait" ] && continue
        set -- "$@" "$arg"
    done
    exec vi "$@"
fi

This allows you to run codium [--wait] from anywhere and expect a functional editor to pop up. There’s a fallback to vi, which I’ve not yet hit.

I hope this helps anybody looking for solutions to these problems :) It’s more than a little bit of bother, but it’s incredibly reassuring to know that your operating system won’t ever break on you.

Thursday, 04 February 2021

21.04 releases schedule finalized

It is available at the usual place https://community.kde.org/Schedules/release_service/21.04_Release_Schedule

Dependency freeze is in five weeks (March 11) and Feature Freeze a week after that, make sure you start finishing your stuff!

Tuesday, 26 January 2021

ROC and Precision-Recall curves - How do they compare?

The accuracy of a model is often criticized for not being informative enough to understand its performance trade-offs, so one has to turn to more powerful tools instead. Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves are standard tools used to measure the performance of binary classification models and to find an appropriate decision threshold. But how do they relate to each other?

What are they for?

Often, the output of a binary classification model (with a positive and a negative class) is a real number ranging from 0 to 1, which can be interpreted as a probability. Above a given threshold, the model is considered to have predicted the positive class. This threshold often defaults to 0.5. While sound, this default may not be the optimal value: fine-tuning it changes the balance between false positives and false negatives, which is especially useful when they don’t have the same importance. ROC and PR curves help with this fine-tuning and also serve as performance indicators.
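
As a small illustration (with made-up probabilities), moving the threshold simply changes which scores count as positive predictions:

import numpy as np

# Hypothetical predicted probabilities for six samples.
probabilities = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.2])

# Default threshold of 0.5:
print((probabilities >= 0.5).astype(int))  # [0 0 0 1 1 0]

# Lowering the threshold to 0.3 yields more positive predictions,
# trading false negatives for false positives:
print((probabilities >= 0.3).astype(int))  # [0 1 1 1 1 0]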

How to make a ROC and PR curve?

Both curves are based on the same idea: measuring the performance of the model at different threshold values. They differ in the performance measures they use. The ROC curve measures both the ability of the model to correctly classify positive examples and its ability to minimize false positive errors. The PR curve, on the other hand, focuses exclusively on the positive class and ignores correct predictions of the negative class, making it a compelling measure for imbalanced datasets. While the two curves are different, it has been shown that they carry equivalent information: although the true negatives (correct predictions of the negative class) are not taken into account by the PR curve, they can be deduced from the other measures.

Receiver Operating Characteristic (ROC) curve

ROC curves measure the True Positive Rate (among the positive samples, how many were correctly identified as positive) and the False Positive Rate (among the negative samples, how many were falsely identified as positive):

$$TPR = \frac {TP} {TP + FN}$$

$$FPR = \frac {FP} {FP + TN}$$

A perfect predictor would be able to maximize the TPR while minimizing the FPR.

Precision-Recall (PR) curve

The Precision-Recall curve uses the Positive Predictive Value, or precision (among the samples which the model predicted as positive, how many were correctly classified), and the True Positive Rate (also called recall):

$$PPV = \frac {TP} {TP + FP}$$

A perfect predictor would maximize both the TPR and the PPV at the same time.
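
A quick worked example, with made-up confusion-matrix counts, shows how these measures fit together:

# Hypothetical counts for one threshold value.
tp, fp, fn, tn = 80, 10, 20, 90

tpr = tp / (tp + fn)  # true positive rate (recall): 0.8
fpr = fp / (fp + tn)  # false positive rate: 0.1
ppv = tp / (tp + fp)  # positive predictive value (precision): ~0.89

print(f"TPR={tpr:.2f} FPR={fpr:.2f} PPV={ppv:.2f}")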

ROC and Precision-Recall curves in Python

With scikit-learn and matplotlib (both are Free Software), creating these curves is easy.

from matplotlib import pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import plot_roc_curve, plot_precision_recall_curve

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=42, test_size=0.2
)
lr = LogisticRegression().fit(X_train, y_train)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5))
plot_roc_curve(lr, X_test, y_test, ax=ax1)
plot_precision_recall_curve(lr, X_test, y_test, ax=ax2)
ax1.set_title("ROC curve")
ax2.set_title("Precision-Recall curve")
fig.suptitle("Comparaison of ROC and P-R curves")
plt.show()

Plot of the ROC and Precision-Recall curves

Lines 7-11 (the make_classification, train_test_split, and LogisticRegression calls) create a sample dataset with a binary target, split it into a training set and a testing set, and train a logistic regression model. The important lines are 14 and 15 (the plot_roc_curve and plot_precision_recall_curve calls), which automatically compute the performance measures at different threshold values and plot both curves.

How to read the curves?

Both curves offer two useful pieces of information: how to choose the positive-class prediction threshold, and how well the classification model performs overall. The former is determined by selecting the threshold that yields the best trade-off for the prediction task and operational needs. The latter is measured by the area under the curve, which indicates how good the model is: for the ROC curve, this area is the probability that a randomly chosen sample from the positive class receives a higher score than a randomly chosen sample from the negative class.

With scikit-learn, these values can be obtained from the roc_auc attribute of the object returned by plot_roc_curve(), or by calling roc_auc_score() directly, for ROC curves; and from the average_precision attribute of the object returned by plot_precision_recall_curve(), or by calling average_precision_score() directly, for PR curves.
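
Continuing from the earlier listing (reusing lr, X_test and y_test), a minimal sketch of computing both values directly looks like this:

from sklearn.metrics import roc_auc_score, average_precision_score

# Scores for the positive class, one per test sample.
y_scores = lr.predict_proba(X_test)[:, 1]

print("ROC AUC:", roc_auc_score(y_test, y_scores))
print("Average precision:", average_precision_score(y_test, y_scores))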

Tuesday, 15 December 2020

AVIF support for KImageFormats just landed

Thanks to Daniel Novomeský we will have support for AVIF images in KImageFormats starting in the next release.


We have (and by we I mean him) also added the avif code to be fuzzed under oss-fuzz so we'll be helping the upstream libavif/libaom to find potential memory issues in their code.


https://invent.kde.org/frameworks/kimageformats/-/merge_requests/8

https://github.com/google/oss-fuzz/pull/4850
