Free Software, Free Society!
Thoughts of the FSFE Community (English)

Saturday, 17 September 2022

Come to Barcelona for Akademy-es 2022!

As previously announced, Akademy 2022 will be happening in Barcelona at the beginning of October.

On top of that, Akademy-es [the Spanish spin-off of Akademy] is also happening in Barcelona in the days before (29 and 30 September). So if you're interested in KDE and understand a bit of Spanish, please drop by and register yourself at


There's also a quite pretty t-shirt you can order, but if you want it, you have to register BEFORE the end of the weekend!


Camiseta Akademy-es

Thursday, 15 September 2022

5 Years of Freedom with OpenPOWER

5 years ago I preordered my Talos II from Raptor Computing Systems. The Talos II is a POWERful system built from the ground up with freedom in mind. In one of its PCIe 4.0 slots, I plugged an AMD Radeon RX 5700 (Navi 10), which I mainly use for playing VR games, but also for multi-monitor setups, faster video decoding, and more. Unfortunately, all modern graphics cards require non-free firmware, but the libre-soc project is currently developing an OpenPOWER hybrid CPU/VPU/GPU that comes with its own Vulkan drivers.

Currently the next candidate for Respects Your Freedom certification is the Arctic Tern, a BMC development kit for OpenPOWER systems. A prototype libre GPU can be implemented using two FPGAs, one per screen, with a resolution of up to 1920×1200. Since I currently use an OrangeCrab for my work on libre-soc, I have no need for an Arctic Tern. I also have a BeagleWire, an FPGA development cape for the BeagleBone Black, using an iCE40 FPGA, which is also found in the Valve Index and the Talos II.

Unlike a modern x86-64 system such as the Steam Deck, the Talos II can’t run Steam, so there is no way to play VR games such as Beat Saber, Blade & Sorcery or VRChat. Currently I can only play the godot4_openxr_demo using Monado and libsurvive, but I have begun a VR port of Minetest, a libre clone of Minecraft, and I am also trying to get Godot Beep Saber VR working with my Valve Index using Monado. Currently Beep Saber only works with SteamVR and the Oculus Quest, both non-free platforms incompatible with OpenPOWER systems.

Since I want a mobile VR headset that works without any non-free software, I propose building one using libre-soc and the already existing Monado OpenXR stack. For both projects there is still much work to do. Hopefully the number of libre VR games will grow in the next few years, if more and more people switch to OpenPOWER and ethical distros. Since I avoid both Android and SteamOS, I will buy neither the Oculus Quest nor the Steam Deck. Once a libre VR headset exists, it could get Respects Your Freedom certification. I guess that will take another 5 years.

Sniffing Android apps network traffic

Back in the day, it was really easy to sniff the network traffic made by apps on Android. You could do it in a few minutes by adding mitmproxy’s certificate and setting the HTTP proxy in your Wi-Fi network settings. That was it. But things have changed (for good) and that’s no longer the case. However, I still want to sniff the network traffic made by apps on Android.

How? Well, I can no longer use my smartphone to do it, but I can set up the Android emulator, install the application via the Google Play Store and sniff the network traffic it generates on my PC \o/

Let’s get started. First, install the Android SDK and create an Android virtual device using Android API 30 and the x86 architecture (any API and architecture will do). However, we need an image without the Google Play Store preinstalled, as we need a writable /system folder to inject mitmproxy’s certificate later. That’s okay, because we’ll install the Play Store manually.

echo no | ./Android/Sdk/tools/bin/avdmanager create avd -n Pixel_5_API_30 --abi google_apis/x86 --package 'system-images;android-30;google_apis;x86'

Start the virtual device with the additional -writable-system flag, which permits us to make /system writable. I also have to clear QT_QPA_PLATFORM (set it to an empty value) because I’m on Wayland and the emulator doesn’t support it.

QT_QPA_PLATFORM= ./Android/Sdk/emulator/emulator @Pixel_5_API_30 -writable-system

Now let’s download the OpenGApps package that matches our API and architecture. Select the pico variant, because we don’t need anything else, just the Play Store.

curl -OL ''

We have to decompress it in order to get Phonesky.apk and push it to the virtual device. We also need to whitelist its permissions (thank you to the MinMicroG guys).

lzip -d Core/vending-x86.tar.lz
tar xf vending-x86.tar
adb root
adb shell avbctl disable-verification # adb disable-verity makes the emulator crash
adb reboot
adb wait-for-device
adb root
adb remount
adb push vending-x86/nodpi/priv-app/Phonesky/Phonesky.apk /system/priv-app/
curl -O
adb push /system/etc/permissions/

Now, create a dedicated user to run mitmproxy as it’s written in the documentation:

sudo useradd --create-home mitmproxyuser
sudo iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner mitmproxyuser --dport 80 -j REDIRECT --to-port 8080
sudo iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner mitmproxyuser --dport 443 -j REDIRECT --to-port 8080
sudo -u mitmproxyuser -H bash -c 'mitmproxy --mode transparent --showhost --set block_global=false'

Mandatory copy’n’paste from the mitmproxy documentation page: > Note, as soon as you add the iptables rules, you won’t be able to perform successful network calls until you start mitmproxy.

At this point we are almost there, we just need another step to add the mitmproxy certificate as it’s written in the documentation page:

hashed_name=`sudo openssl x509 -inform PEM -subject_hash_old -in ~mitmproxyuser/.mitmproxy/mitmproxy-ca-cert.cer | head -1`
sudo adb push ~mitmproxyuser/.mitmproxy/mitmproxy-ca-cert.cer /system/etc/security/cacerts/$hashed_name.0
adb shell chmod 664 /system/etc/security/cacerts/$hashed_name.0
adb reboot

You should now have the Play Store; log in with your Google account and install the app you need.

That’s it! Happy sniffing!

Wednesday, 14 September 2022

Using Pushdown Automata to verify Packet Sequences

As a software developer, most of my work day is spent on practical work: coding and hacking away. Recently though, I stumbled across an interesting problem which required another, more theoretical approach:

An OpenPGP message consists of a sequence of packets. There are signatures, encrypted data packets and their accompanying encrypted session keys, compressed data and literal data, the latter being the packet that in the end contains the plaintext body of the message.

Those packets can be sequential, e.g. a one-pass-signature followed by a literal data packet and then a signature, or nested, where for example an encrypted data packet contains a compressed data packet, in turn containing a literal data packet. A typical OpenPGP message can be visualized as follows:

A typical encrypted, signed OpenPGP message

This particular message consists of a sequence of Public-Key Encrypted Session Keys (PKESKs), followed by a Symmetrically Encrypted Integrity-Protected Data packet (SEIPD) and a Modification Detection Code packet (MDC). Decrypting the SEIPD using the session key obtained from any of the PKESKs by providing an OpenPGP secret key yields a new data stream consisting of a One-Pass Signature (OPS) followed by a Compressed Data packet and a Signature. Decompressing the Compressed Data packet yields a Literal Data packet which in turn contains the plaintext of the message.

I am pretty confident that PGPainless can currently handle all possible combinations of packets just fine. Basically, it simply reads the next packet, processes it in whatever way that packet needs to be processed, and then reads the next one. That makes it very powerful, but there is a catch: not all possible combinations are valid!

The RFC contains a section describing the syntax of OpenPGP messages using a set of expressions which form a context-free grammar:

11.3.  OpenPGP Messages

An OpenPGP message is a packet or sequence of packets that corresponds to the following grammatical rules (comma  represents sequential composition, and vertical bar separates alternatives):

   OpenPGP Message :- Encrypted Message | Signed Message |
                      Compressed Message | Literal Message.

   Compressed Message :- Compressed Data Packet.

   Literal Message :- Literal Data Packet.

   ESK :- Public-Key Encrypted Session Key Packet |
          Symmetric-Key Encrypted Session Key Packet.

   ESK Sequence :- ESK | ESK Sequence, ESK.

   Encrypted Data :- Symmetrically Encrypted Data Packet |
         Symmetrically Encrypted Integrity Protected Data Packet

   Encrypted Message :- Encrypted Data | ESK Sequence, Encrypted Data.

   One-Pass Signed Message :- One-Pass Signature Packet,
               OpenPGP Message, Corresponding Signature Packet.

   Signed Message :- Signature Packet, OpenPGP Message |
               One-Pass Signed Message.

In addition, decrypting a Symmetrically Encrypted Data packet or a Symmetrically Encrypted Integrity Protected Data packet as well as decompressing a Compressed Data packet must yield a valid OpenPGP Message.

Using this grammar, we can construct OpenPGP messages by starting with the term “OpenPGP Message” and iteratively replacing parts of it according to the rules until only final “Packet” terms are left.

Let’s create the message from the diagram above to illustrate the process:

OpenPGP Message
-> Encrypted Message
-> ESK Sequence, Encrypted Data
-> ESK, ESK Sequence, Encrypted Data
-> ESK, ESK, Encrypted Data
-> SKESK, ESK, Encrypted Data
-> SKESK, SKESK, Encrypted Data
-> SKESK, SKESK, SEIPD(Signed Message)
-> SKESK, SKESK, SEIPD(One-Pass Signed Message)
-> SKESK, SKESK, SEIPD(OPS, OpenPGP Message, Sig)
-> SKESK, SKESK, SEIPD(OPS, Compressed Message, Sig)
-> SKESK, SKESK, SEIPD(OPS, Compressed Packet(OpenPGP Message), Sig)
-> SKESK, SKESK, SEIPD(OPS, Compressed Packet(Literal Message), Sig)
-> SKESK, SKESK, SEIPD(OPS, Compressed Packet(Literal Packet("Hello, World!")), Sig)

Here, italic text marks the term that gets replaced in the next step. Bold text marks final OpenPGP packets which are not replaced any further. Text inside of braces symbolizes nested data, i.e. the content of the packet before the braces. Some packet terms were abbreviated to make the text fit into individual lines.

Now, applying the rules in the “forwards direction” as we just did is rather simple, and if no non-final terms are left, we end up with a valid OpenPGP message. There are infinitely many valid OpenPGP messages. A brief selection:

Literal Packet("Hello, World")
OPS, Literal Packet("Hey!"), Sig
OPS, OPS, Literal Packet("Hoh!"), Sig, Sig
SEIPD(Literal Packet("Wow"))
Sig, Compressed Packet(Literal Packet("Yay"))

On the other hand, some examples for invalid OpenPGP packet streams:

Literal Packet(_), Literal Packet(_)
OPS, Compressed Packet(<empty>), Sig
Literal Packet(_), Sig
OPS, Literal Packet(_)
SKESK, SEIPD(Literal Packet(_)), Literal Packet(_)

Give it a try; I can guarantee that you cannot create these messages using the OpenPGP grammar when starting from the term OpenPGP Message.

So now the problem becomes: How can we check, whether a given OpenPGP packet stream forms a valid OpenPGP message according to the grammar? Surely just trying to reverse engineer the message structure manually by brute force would be a daunting task… Luckily, theoretical computer science has a solution for us: Pushdown Automata!

Note: The following description of a PDA is kept brief and simplistic. I may well have made imprecise simplifications on the formal definition. If you want to learn more, but care about correctness, or if you are reading this post in preparation for an exam, you really should check out the Wikipedia page linked above instead.
Me, perhaps not totally accurate on the internet

A Pushdown Automaton (PDA) consists of a set of states with transition rules between those states, as well as a stack. It further has an initial state and a set of accepting states. Each time the automaton reads an input symbol from a word (in our case, a packet from a packet stream), it pops the topmost item from the stack and then checks if there is a transition from its current state, using the given input and stack item, to another state. If there is, it transitions into that state, possibly pushing a new item onto the stack, according to the transition rule. Once all input symbols have been read, the word (packet stream) is valid if and only if the current state is an accepting state and the stack of the automaton is empty. If the current state is not accepting, or if the stack is not empty, the word is invalid.

Formally defined, a transition rule is a tuple (state, input-symbol, stack-symbol, state, stack-symbol), where the first state is the origin, the first stack-symbol is what needs to be popped from the stack, the second state is the destination and the second stack-symbol what we push back onto the stack.

There is a special symbol ‘ε’ which means “nothing”. If the input symbol is ε, it means we can apply the rule without reading any input. If a stack symbol is nothing it means we can apply the rule without popping or pushing the stack.

I translated the OpenPGP grammar into a PDA. There is an initial state “start”, a single accepting state “Valid” and a set of transition rules which I annotated with arrows in the following diagram. Let’s take a closer look.

Let’s say we want to validate an OpenPGP message of the format OPS, Literal Packet(_), Sig.

We start with the “start” state to the left. As you can see, the only rule we can apply now is labelled ε,ε/m#, which means we do not read any input and do not pop the stack, but we push ‘#’ and ‘m’ onto the stack. After we applied the rule, we are in the state labelled “OpenPGP Message” and the top of our stack contains the symbol ‘m’.

To advance from here, we can peek at our stack and at the input to check our options. Since our message begins with a One-Pass-Signature (OPS), the only rule we can now apply is OPS,m/o, which requires that the top stack symbol is ‘m’. We read “OPS”, pop ‘m’ from the stack, push ‘o’ onto it and transition into the state “One-Pass-Signed Message”. From here there is only one rule ε,ε/m, which means we simply push ‘m’ onto the stack without popping an item and without reading input. That leaves us back in the state “OpenPGP Message”. Our stack now contains (top to bottom) ‘mo#’.

Now we read the next input symbol “Literal Packet” and apply the rule Literal Packet,m/ε, which means we pop ‘m’ from the stack and transition into state “Literal Message”. The stack now contains ‘o#’.

Next, we read “Sig” from the input, pop ‘o’ from the stack without pushing back an item, transitioning to state “Corresponding Signature”.

Last but not least, we apply rule ε,#/ε, do not read from the input, but pop the top symbol ‘#’ from the stack, transitioning to state “Valid”.

In the end, our stack is empty, we read all of the input data and ended up in an accepting state; The packet sequence “OPS, Literal Packet, Sig” forms a valid OpenPGP message.

If we started over with an invalid input like “Literal Packet, Sig”, it would play out like this:

First, we transition from “start” to “OpenPGP Message” by pushing ‘#’ and ‘m’ to the stack. Then we apply rule Literal Packet,m/ε, read “Literal Packet” from the input, pop ‘m’ from the stack without pushing anything back onto it. This brings us into state “Literal Message” with ‘#’ being our new stack symbol. From here, we only have two rules that we could apply: Sig,o/ε would require us to read “Sig” from the input and have ‘o’ on the top of our stack. Both of these requirements we cannot fulfill. The other option ε,#/ε requires us to pop ‘#’ from the stack without reading input. It even brings us into a state called “Valid”! Okay, let’s do that then!

So far we have read “Literal Packet” from the packet stream. If the data stream ended here, we would have a valid OpenPGP message. Unfortunately, there is still some input left. However, there are no valid rules which allow us to transition any further with input “Sig”. Therefore, the input “Literal Packet, Sig” is not a valid OpenPGP message.

You can try any of the invalid messages listed above and you will see that you will always end up in a situation where you either have not fully read all the input symbols, your stack is not empty, or you end up in a non-accepting state.
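To make the automaton concrete, here is a minimal Java sketch of such a PDA. This is a hypothetical, stripped-down rule set covering only the one-pass-signed and literal branches of the grammar (no ESKs, no compression), using a simple backtracking search over the rules; it is not PGPainless’ actual implementation:

```java
import java.util.List;

public class PacketPda {
    // Each rule: { fromState, inputSymbol, popSymbol, toState, pushSymbols }.
    // "" stands for the empty symbol ε; pushSymbols are pushed left-to-right,
    // so the rightmost character ends up on top of the stack.
    static final String[][] RULES = {
        {"start", "", "", "OpenPGP Message", "#m"},
        {"OpenPGP Message", "OPS", "m", "One-Pass Signed Message", "o"},
        {"One-Pass Signed Message", "", "", "OpenPGP Message", "m"},
        {"OpenPGP Message", "Lit", "m", "Literal Message", ""},
        {"Literal Message", "Sig", "o", "Corresponding Signature", ""},
        {"Corresponding Signature", "Sig", "o", "Corresponding Signature", ""},
        {"Literal Message", "", "#", "Valid", ""},
        {"Corresponding Signature", "", "#", "Valid", ""},
    };

    // Depth-first search with backtracking over the (nondeterministic) rules.
    // The stack is modelled as a String whose last character is the top.
    static boolean accepts(String state, List<String> input, String stack) {
        if (state.equals("Valid") && input.isEmpty() && stack.isEmpty()) {
            return true; // accepting state, input consumed, stack empty
        }
        for (String[] r : RULES) {
            if (!r[0].equals(state)) continue;
            List<String> rest = input;
            if (!r[1].isEmpty()) { // rule consumes one input symbol
                if (input.isEmpty() || !input.get(0).equals(r[1])) continue;
                rest = input.subList(1, input.size());
            }
            String newStack = stack;
            if (!r[2].isEmpty()) { // rule pops one stack symbol
                if (stack.isEmpty()
                        || stack.charAt(stack.length() - 1) != r[2].charAt(0)) continue;
                newStack = stack.substring(0, stack.length() - 1);
            }
            if (accepts(r[3], rest, newStack + r[4])) return true;
        }
        return false; // no applicable rule leads to acceptance
    }

    static boolean valid(String... packets) {
        return accepts("start", List.of(packets), "");
    }

    public static void main(String[] args) {
        System.out.println(valid("OPS", "Lit", "Sig"));               // true
        System.out.println(valid("OPS", "OPS", "Lit", "Sig", "Sig")); // true
        System.out.println(valid("Lit", "Sig"));                      // false
        System.out.println(valid("OPS", "Lit"));                      // false
    }
}
```

Running main mirrors the walkthroughs above: the two valid sequences are accepted, while “Lit, Sig” and “OPS, Lit” strand the automaton with leftover input or a non-empty stack.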

You might notice some transitions are drawn using dashed lines. Those illustrate transitions for validating the content of nested packets. For example the Compressed Packet MUST contain another valid OpenPGP message. Depending on how you implement the PDA in software, you can either replace the input stream and use the nested transitions which require you to jump back from a nested “Valid” to the state where the nested transition originated from, or use child PDAs as described below.

The packet sequence Compressed Packet(Literal Packet(_)) for example would be resolved like follows:

From “start” we transition to “OpenPGP Message” by pushing ‘#’ and ‘m’ onto the stack. Then we read “Compressed Packet” from the input, pop ‘m’ from the stack and transition to state “Compressed Message”. Since the “Literal Packet” is part of the Compressed Packet’s contents, we now create a new child PDA with input stream “Literal Packet”. After initializing this PDA by pushing ‘#’ and ‘m’ onto the stack, we transition from “OpenPGP Message” to “Literal Message” by reading “Literal Packet” and popping ‘m’, after which we transition to “Valid” by popping ‘#’. Now that this PDA has ended up in a valid state, our parent PDA can transition from “Compressed Message” by reading nothing from the input (remember, the “Compressed Packet” was the only packet in this PDA’s stream) and popping ‘#’, leaving us with an empty stack and empty input in the valid state.

In PGPainless’ code I am planning to implement OpenPGP message validation by using InputStreams with individual PDAs. If a packet contains nested data (such as the Compressed or Encrypted Packet), a new InputStream will be opened on the decompressed/decrypted data. This new InputStream will in turn have its own PDA to ensure that the content of the packet forms a valid OpenPGP message on its own. The parent stream, on the other hand, must check whether the PDA of its child stream ended up in a valid state before accepting its own packet stream.
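The child-PDA idea can be sketched roughly as follows. This uses a hypothetical packet model (not PGPainless’ actual API), and the per-level check is stripped down to just the Literal and Compressed productions for brevity; the point is that each nesting level gets its own validator, invoked recursively on the nested stream:

```java
import java.util.List;

public class NestedValidation {
    // Hypothetical packet model: container packets such as "Compressed"
    // carry a nested packet sequence, leaf packets carry none.
    record Packet(String type, List<Packet> nested) {
        static Packet leaf(String type) { return new Packet(type, List.of()); }
    }

    // Each nesting level is checked by its own automaton. For brevity this
    // sketch only knows two productions of the grammar:
    //   Literal Message    :- Literal Data Packet.
    //   Compressed Message :- Compressed Data Packet (with valid contents).
    static boolean validMessage(List<Packet> packets) {
        if (packets.size() != 1) return false; // simplification: no ESKs/signatures
        Packet p = packets.get(0);
        return switch (p.type()) {
            // a child "PDA" runs on the nested (decompressed) stream:
            case "Compressed" -> validMessage(p.nested());
            case "Literal" -> true;
            default -> false;
        };
    }

    public static void main(String[] args) {
        Packet ok = new Packet("Compressed", List.of(Packet.leaf("Literal")));
        System.out.println(validMessage(List.of(ok)));    // true
        Packet bad = new Packet("Compressed", List.of()); // empty contents
        System.out.println(validMessage(List.of(bad)));   // false
    }
}
```

The parent automaton only accepts the Compressed packet once the recursive call on its contents succeeds, matching the behaviour described above.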

Initial tests already show promising results, so stay tuned for another blog post where I might go into more details on the implementation 🙂

I really enjoyed this journey into the realm of theoretical computer science. It was a welcome change to my normal day-to-day programming.

Happy Hacking!

Thursday, 08 September 2022

Ada & Zangemann ready to pre-order in English

My English book Ada & Zangemann - A Tale of Software, Skateboards, and Raspberry Ice Cream can be pre-ordered at No Starch Press with the coupon code "Hacking4Freedom".

This still feels a bit like a wonderful dream to me and I am so thankful it all worked out. I hope you will enjoy reading the book yourself and reading it to others, that you share it with children around you to inspire their interest in tinkering with software and hardware and encourage them to shape technology themselves. I also hope that it serves as a useful gift for your family, friends, and co-workers because it explores topics you care about, and that you appreciate that it is published under a Creative Commons Attribution-ShareAlike license.

16:9 version of the book cover

“A rousing tale of self-reliance, community, and standing up to bullies…software freedom is human freedom!” —Cory Doctorow, Sci-Fi Author

“Introduces readers young and old to the power and peril of software. Behind it all is a backdrop of ethics of knowledge sharing upon which the arc of human history rides.” —Vint Cerf, Computer Scientist and One of the Inventors of the Internet

“In this hopeful story Ada and her friends join a movement that started back in 1983. Their courageous adventure of software freedom and learning how technology works is a wonderful way to introduce young people everywhere to the joys of tinkering!” —Zoë Kooyman, Executive Director, Free Software Foundation

“Even as a non-child, I was captivated by the story from the first page to the last. Kudos to the author for packaging difficult topics such as monopolies, lobbyism, digital divide, software freedom, digital autonomy, IoT, consumer control, e-waste and much more in a child-friendly form in an easily understandable and exciting storyline.” —Jörg Luther, chief editor of the German Linux-Magazin, LinuxUser, and Raspberry Pi Geek

If you are from the US, you can pre-order the hardcover from No Starch Press, get 25% off with the coupon code, receive the DRM-free ebook now, and have the book shipped from the US starting in December.

If you live outside the US there are other options. According to the current plan, by the end of 2022, you should be able to pre-order the book from local bookstores worldwide, so the book does not have to be shipped from the US. Those bookstores should then be able to ship the book from ~May 2023 onwards.

Such a book would not be possible without the help of many people, and without space restrictions I can be a bit more verbose in thanking them (although I am pretty sure I have unfortunately still forgotten some; if so, please let me know):

Many thanks to Reinhard Wiesemann from the Linuxhotel for the financial support and the motivation to no longer just plan but to make. Thanks to Sandra Brandstätter, who brought me joy with every new design for the illustrations. Thanks to my editor Wiebke Helmchen, who made developing and sharpening the story fun, and to No Starch Press and my German publisher d.punkt / O'Reilly for guiding me through the whole process of publishing a book and for agreeing to publish it under a free culture license.

Thanks to Bea and Benni, Christine and Marc, Bernhard Reiter, Isabel Drost-Fromm and Amelia, Katta, Kristina, Martin, Mona and Arne, Nina, Oliver Diedrich, Reinhard Müller, Sabine, and Torsten Grote for great ideas, inspiration, and practical tips.

Thanks to my colleagues and volunteers from the FSFE, and the team at d.punkt / O'Reilly, who helped a lot to promote the book to a larger audience.

For the English version a special thanks goes to Cory Doctorow, Brian Osborn, and Vint Cerf for helping me find the right publisher, to John Sullivan for his valuable feedback for language improvements, to Luca Bonessi for his great help with maintaining the git repository for the book, Catharina Maracke and Till Jaeger for their help with Creative Commons licensing with the German version and the many hours in which Pamela Chestek helped No Starch Press and myself to optimise the contract for CC-BY-SA.

Thanks to Bill Pollock and the team at No Starch Press for working on the many details involved in book publishing, from editing and production to marketing, sales, and all the other nitty-gritty details that often go unnoticed by readers but make a huge difference.

Thanks to the many people of the Free Software movement in general and all the volunteers and staffers of the FSFE, from whom I was allowed to learn and whose engagement motivated me so that this book can inspire children’s interest in tinkering and encourage them to shape technology.

Finally, thanks to my family, who made me write early in the morning, late in the evening, and on holiday, and who always supports my work for software freedom. Especially, I would like to thank my children, who inspired me with new ideas at each reading. Without them, this book would not exist.

For additional information about the book, please visit the FSFE's website about it. All royalties from the book go to the charitable FSFE and support the FSFE's work for software freedom.

Tuesday, 06 September 2022

FSFE information desk on Veganmania Danube Island 2022

It was the usual information stall, as described several times before in this blog. Unfortunately I haven’t had time yet to write more about it. I created an updated information leaflet, and I really should get a tent, because this time we had heavy rain twice and it was very hard to protect the paper materials with only an umbrella as cover.

I will write more as soon as I find time to do so.

Thursday, 01 September 2022

Creating a Web-of-Trust Implementation: Accessing Certificate Stores

Currently, I am working on a Web-of-Trust implementation for the OpenPGP library PGPainless. This work is being funded by the awesome NLnet foundation through NGI Assure. Check them out! NGI Assure is made possible with financial support from the European Commission’s Next Generation Internet programme.

NGI Assure

In this post, I will outline some progress I made towards a full WoT implementation. The current milestone entails integrating certificate stores more closely with the core API.

On most systems, OpenPGP certificates (public keys) are typically stored and managed by GnuPG’s internal key store. The downside of this approach is that applications that want to make use of OpenPGP either need to depend on GnuPG, or are required to manage their very own exclusive certificate store. The latter approach, which e.g. Thunderbird is taking, leads to a situation where there are multiple certificate stores with different contents. Your GnuPG certificate store might contain Bob’s certificate, while the Thunderbird store does not. This is confusing for users, as they now have to manage two places with OpenPGP certificates.

There is a proposal for a Shared PGP Certificate Directory nicknamed “pgp.cert.d” which aims to solve this issue by specifying a shared, maildir-like directory for OpenPGP certificates. This directory serves as a single source for all OpenPGP certificates a user might have to deal with. Being well-defined through the standards draft means different applications can access the certificate store without being locked into a single OpenPGP backend.

Since the Web-of-Trust also requires a certificate store of some kind to work on, I thought that pgp.cert.d might be the ideal candidate to implement. During the past months I reworked my existing implementation to allow for different storage backends and defined an abstraction layer for generalized certificate stores (not only pgp.cert.d). This abstraction layer was integrated with PGPainless to allow encryption and verification operations to request certificates from a store. Let me introduce the different components in more detail:

The library pgp-cert-d-java contains an implementation of the pgp.cert.d specification. It provides an API for applications to store and fetch certificates to and from the pgp.cert.d directory. The task of parsing the certificate material was delegated to the consumer application, so the library is independent from OpenPGP backends.

The library pgp-certificate-store defines an abstraction layer above pgp-cert-d-java. It contains interfaces for a general OpenPGP certificate store. Implementations of this interface could, for example, access GnuPG’s certificate store, since the interface makes no assumptions about how the certificates are stored. Inside pgp-cert-d-java, there is an adapter class that adapts the PGPCertificateDirectory class to the PGPCertificateStore interface.

The pgpainless-cert-d module provides certificate parsing functionality using pgpainless-core. It further provides a factory class to instantiate PGPainless-backed instances of the PGPCertificateDirectory interface (both file-based, as well as in-memory directories).

Lastly, the pgpainless-cert-d-cli application is a command line tool to demonstrate the pgp.cert.d functionality. It can be used to manage the certificate directory by inserting and fetching certificates:

$ pgpainless-cert-d-cli help
Store and manage public OpenPGP certificates
Usage: certificate-store [-s=DIRECTORY] [COMMAND]

  -s, --store=DIRECTORY   Overwrite the default certificate directory path

  help    Display the help text for a subcommand
  export  Export all certificates in the store to Standard Output
  insert  Insert or update a certificate
  import  Import certificates into the store from Standard Input
  get     Retrieve certificates from the store
  setup   Setup a new certificate directory
  list    List all certificates in the directory
  find    Lookup primary certificate fingerprints by subkey ids or fingerprints
Powered by picocli

Now let’s see how the certificate store can integrate with PGPainless:

Firstly, let’s set up a pgp.cert.d using pgpainless-cert-d-cli:

$ pgpainless-cert-d-cli setup

This command initializes the certificate directory in .local/share/pgp.cert.d/ and creates a trust-root key with the displayed fingerprint. This trust-root currently is not of use, but eventually we will use it as the root of trust in the Web-of-Trust.

Just for fun, let’s import our OpenPGP certificates from GnuPG into the pgp.cert.d:

$ gpg --export --armor | pgpainless-cert-d-cli import

The first part of the command exports all public keys from GnuPG, while the second part imports them into the pgp.cert.d directory.

We can now access those certificates like this:

$ pgpainless-cert-d-cli get -a 7F9116FEA90A5983936C7CFAA027DB2F3E1E118A
Version: PGPainless
Comment: 7F91 16FE A90A 5983 936C  7CFA A027 DB2F 3E1E 118A
Comment: Paul Schaub <>
Comment: 2 further identities


Should this certificate change over time, e.g. because someone signs it and sends me an updated copy, I can merge the new signatures into the store by simply inserting the updated certificate again:

pgpainless-cert-d-cli insert < update.asc

Now, I said earlier that the benefit of the pgp.cert.d draft was that applications could access the certificate store without the need to rely on a certain backend. Let me demonstrate this by showing how to access my certificate within a Java application without the need to use pgpainless-cert-d-cli.

First, let’s write a small piece of code which encrypts a message to my certificate:

// Setup the store
SubkeyLookupFactory lookupFactory = new DatabaseSubkeyLookupFactory();
PGPCertificateDirectory pgpCertD = PGPainlessCertD.fileBased(lookupFactory);
PGPCertificateStoreAdapter store = new PGPCertificateStoreAdapter(pgpCertD);

OpenPgpFingerprint myCert = OpenPgpFingerprint.parse("7F9116FEA90A5983936C7CFAA027DB2F3E1E118A");

ByteArrayInputStream plaintext = new ByteArrayInputStream("Hello, World! This message is encrypted using a cert from a store!".getBytes());
ByteArrayOutputStream ciphertextOut = new ByteArrayOutputStream();

// Encrypt
EncryptionStream encryptionStream = PGPainless.encryptAndOrSign()
        .onOutputStream(ciphertextOut)
        .withOptions(ProducerOptions.encrypt(EncryptionOptions.encryptCommunications()
                .addRecipient(store, myCert)));
Streams.pipeAll(plaintext, encryptionStream);
encryptionStream.close();


In this example, we first set up access to the shared certificate directory. For that we need a method to look up certificates by subkey-ids. In this case this is done through an SQLite database. Next, we instantiate a PGPCertificateDirectory object, which we then wrap in a PGPCertificateStoreAdapter to make it usable within PGPainless.

Next, we only need to know our certificate’s fingerprint in order to instruct PGPainless to encrypt a message to it. Lastly, we print out the encrypted message.

In the future, once the Web-of-Trust is implemented, it should be possible to pass in the recipient’s email address instead of the fingerprint. The WoT would then find trustworthy keys with that email address and select those for encryption. Right now though, the user still has to identify trustworthy recipient keys themselves.

Similarly, we can use a certificate store when verifying a signed message:

ByteArrayInputStream ciphertextIn = ...; // signed message
ByteArrayOutputStream plaintextOut = new ByteArrayOutputStream();

// Verify
DecryptionStream verificationStream = PGPainless.decryptAndOrVerify()
        .onInputStream(ciphertextIn)
        .withOptions(new ConsumerOptions()
                .addVerificationCerts(store));
Streams.pipeAll(verificationStream, plaintextOut);
verificationStream.close();

OpenPgpMetadata result = verificationStream.getResult();
assertTrue(result.isVerified()); // signature is correct and valid

Here, PGPainless will process the signed message, identify the key that was used for signing and fetch its certificate from the certificate store.

Note that if you implement verification like this, it is up to you to verify the trustworthiness of the certificate yourself.

In the future, this task will be done automatically by a WoT library in the PGPainless ecosystem 🙂

The current state of the certificate store integration into PGPainless can be found on the storeIntegration branch.

Wednesday, 31 August 2022

Creating a kubernetes cluster with kubeadm on Ubuntu 22.04 LTS

In this blog post, I’ll try to share my personal notes on how to set up a kubernetes cluster with kubeadm on Ubuntu 22.04 LTS Virtual Machines.

I am going to use three (3) Virtual Machines in my local lab. My home lab is based on libvirt Qemu/KVM (Kernel-based Virtual Machine) and I run Terraform as the infrastructure provision tool.

There is a copy of this blog post on GitHub.

If you notice something wrong, you can either contact me via the contact page, or open a PR in the GitHub project.

You can also follow me on twitter:

Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.


  • at least 3 Virtual Machines of Ubuntu 22.04 (one for control-plane, two for worker nodes)
  • 2GB (or more) of RAM on each Virtual Machine
  • 2 CPUs (or more) on each Virtual Machine
  • 20GB of hard disk on each Virtual Machine
  • No SWAP partition/image/file on each Virtual Machine

Git Terraform Code for the kubernetes cluster

I prefer to have a reproducible infrastructure, so I can create and destroy my test lab very fast. My preferred way of doing things is testing at each step, so I pretty much destroy everything, copy and paste the commands, and keep going. I use terraform to create the infrastructure. You can find the code for the entire kubernetes cluster here: k8s cluster - Terraform code.

If you do not use terraform, skip this step!

You can git clone the repo to review and edit it according to your needs.

git clone
cd tf_libvirt

You will need to make appropriate changes. The most important option to change is the User option: change it to your GitHub username and the setup will download and install your public key on the VMs, instead of mine!

But pretty much everything else should work out of the box. Change the vmem and vcpu settings to your needs.

Init terraform before running the below shell script.

terraform init

and then run


output should be something like:

Apply complete! Resources: 16 added, 0 changed, 0 destroyed.


VMs = [
  "  k8scpnode",
  "   k8wrknode1",
  "    k8wrknode2",
]

Verify that you have ssh access to the VMs


ssh  -l ubuntu

replace the IP with what the output gave you.

Ubuntu 22.04 Image

If you noticed in the terraform code, I have the below declaration as the cloud image:


that means I’ve already downloaded it into the parent directory to speed things up!

cd ../
curl -sLO
cd -

Control-Plane Node

Let us now start configuring the k8s control-plane node.

Ports on the control-plane node

Kubernetes runs a few services that need to be accessible from the worker nodes.

Protocol Direction Port Range Purpose Used By
TCP Inbound 6443 Kubernetes API server All
TCP Inbound 2379-2380 etcd server client API kube-apiserver, etcd
TCP Inbound 10250 Kubelet API Self, Control plane
TCP Inbound 10259 kube-scheduler Self
TCP Inbound 10257 kube-controller-manager Self

Although etcd ports are included in control plane section, you can also host your
own etcd cluster externally or on custom ports.

Firewall on the control-plane node

We need to open the necessary ports on the CP’s (control-plane node) firewall.

sudo ufw allow 6443/tcp
sudo ufw allow 2379:2380/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10259/tcp
sudo ufw allow 10257/tcp

#sudo ufw disable
sudo ufw status

the output should be

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
6443/tcp                   ALLOW       Anywhere
2379:2380/tcp              ALLOW       Anywhere
10250/tcp                  ALLOW       Anywhere
10259/tcp                  ALLOW       Anywhere
10257/tcp                  ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
6443/tcp (v6)              ALLOW       Anywhere (v6)
2379:2380/tcp (v6)         ALLOW       Anywhere (v6)
10250/tcp (v6)             ALLOW       Anywhere (v6)
10259/tcp (v6)             ALLOW       Anywhere (v6)
10257/tcp (v6)             ALLOW       Anywhere (v6)

Hosts file in the control-plane node

We need to update the /etc/hosts with the internal IP and hostname.
This will help when it is time to join the worker nodes.

echo $(hostname -I) $(hostname) | sudo tee -a /etc/hosts

Just a reminder: we need to update the hosts file on all the VMs,
to include all the VMs’ IPs and hostnames.

If you already know them, then your /etc/hosts file should look like this:  k8scpnode   k8wrknode1    k8wrknode2

replace the IPs to yours.
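If you prefer to script this, the entries can be appended in one loop. A sketch using placeholder addresses (the 198.51.100.0/24 documentation range — substitute your real IPs) and a scratch file standing in for the real /etc/hosts:

```shell
# Sketch: build the hosts entries for all three VMs in one go.
# The IPs below are placeholders; a scratch file stands in for /etc/hosts.
hosts_file=$(mktemp)

while read -r ip name; do
  printf '%s %s\n' "$ip" "$name" >> "$hosts_file"
done <<EOF
198.51.100.11 k8scpnode
198.51.100.12 k8wrknode1
198.51.100.13 k8wrknode2
EOF

cat "$hosts_file"
```

With the real IPs filled in, swap the scratch file for `| sudo tee -a /etc/hosts`.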

No Swap on the control-plane node

Be sure that SWAP is disabled in all virtual machines!

sudo swapoff -a

and the fstab file should not have any swap entry.

The below command should return nothing.

sudo grep -i swap /etc/fstab

If not, edit the /etc/fstab and remove the swap entry.
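The removal itself can also be done non-interactively with sed. A sketch, practised on a scratch copy first — point it at the real /etc/fstab (with sudo and a backup) only once the result looks right:

```shell
# Sketch: drop any swap line from an fstab-style file,
# exercised on a scratch copy rather than the real /etc/fstab.
fstab=$(mktemp)
cat > "$fstab" <<EOF
UUID=abcd-1234 /    ext4 defaults 0 1
/swap.img      none swap sw       0 0
EOF

sed -i '/ swap /d' "$fstab"

grep -i swap "$fstab" || echo "no swap entries left"
```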

If you follow my terraform k8s code example from the above github repo,
you will notice that there isn’t any swap entry in the cloud init (user-data) file.

Nevertheless, it is always a good thing to double-check.

Kernel modules on the control-plane node

We need to load the below kernel modules on all k8s nodes, so k8s can create some network magic!

  • overlay
  • br_netfilter

Run the below bash snippet that will do that, and also will enable the forwarding features of the network.

sudo tee /etc/modules-load.d/kubernetes.conf <<EOF
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

sudo lsmod | grep netfilter

sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system
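The tee <<EOF pattern used above is easy to sanity-check before writing anything under /etc. A sketch that writes the same three settings to a scratch file and counts them:

```shell
# Sketch: the same heredoc-to-tee pattern, pointed at a scratch file
# so the result can be inspected before touching /etc/sysctl.d.
conf=$(mktemp)

tee "$conf" > /dev/null <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

grep -c '= 1' "$conf"   # → 3
```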

NeedRestart on the control-plane node

Before installing any software, we need to make a tiny change to the needrestart program. This helps with automating package installation by stopping it from asking, via a dialog, whether we would like to restart the services!

echo "\$nrconf{restart} = 'a';" | sudo tee -a /etc/needrestart/needrestart.conf
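Note the backslash before nrconf: without it, the shell would expand $nrconf as an (empty) shell variable instead of writing the literal Perl setting. A sketch against a scratch file shows what actually ends up on disk:

```shell
# Sketch: the backslash stops shell expansion, so the literal Perl
# line reaches the file. A scratch file stands in for
# /etc/needrestart/needrestart.conf.
conf=$(mktemp)

echo "\$nrconf{restart} = 'a';" | tee -a "$conf" > /dev/null

cat "$conf"
# → $nrconf{restart} = 'a';
```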

Installing a Container Runtime on the control-plane node

It is time to choose which container runtime we are going to use on our k8s cluster. There are a few container runtimes for k8s, and in the past docker was the usual choice. Nowadays the most common runtime is containerd, which can also use the cgroup v2 kernel features. There is also a docker-engine runtime via CRI. Read here for more details on the subject.

curl -sL | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker-keyring.gpg

sudo apt-add-repository -y "deb jammy stable"

sleep 5

sudo apt -y install

containerd config default                              \
 | sed 's/SystemdCgroup = false/SystemdCgroup = true/' \
 | sudo tee /etc/containerd/config.toml

sudo systemctl restart containerd.service

We have also enabled the systemd cgroup driver, so the control-plane node can use the cgroup v2 features.
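The sed substitution above is easy to verify in isolation. A sketch, using a two-line stand-in for the output of containerd config default:

```shell
# Sketch: the same SystemdCgroup substitution, applied to a minimal
# stand-in for the generated containerd config.
snippet='[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false'

printf '%s\n' "$snippet" \
 | sed 's/SystemdCgroup = false/SystemdCgroup = true/'
```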

Installing kubeadm, kubelet and kubectl on the control-plane node

Install the kubernetes packages (kubeadm, kubelet and kubectl) by first adding the k8s repository to our virtual machine. To speed up the next step, we will also pull the necessary container images in advance.

sudo curl -sLo /etc/apt/trusted.gpg.d/kubernetes-keyring.gpg

sudo apt-add-repository -y "deb kubernetes-xenial main"

sleep 5

sudo apt install -y kubelet kubeadm kubectl

sudo kubeadm config images pull

Initializing the control-plane node

We can now initialize our control-plane node for our kubernetes cluster.

There are a few things we need to be careful about:

  • We can specify the control-plane-endpoint if we are planning to have a highly available k8s cluster (we will skip this for now),
  • Choose a Pod network add-on (next section), but be aware that CoreDNS (DNS and Service Discovery) will not run until then,
  • define where our container runtime socket is (we will skip it)
  • advertise the API server address (we will skip it)

But we will define our Pod Network CIDR to the default value of the Pod network add-on so everything will go smoothly later on.

sudo kubeadm init --pod-network-cidr=

Keep the output in a notepad.

Create user access config to the k8s control-plane node

Our k8s control-plane node is running, so we need to have credentials to access it.

kubectl reads a configuration file (that has the token), so we copy this from the k8s admin config.

rm -rf $HOME/.kube

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

ls -la $HOME/.kube/config

alias k="kubectl"

Verify the control-plane node

Verify that kubernetes is running.

That means we have a k8s cluster - but only the control-plane node is running.

kubectl cluster-info
#kubectl cluster-info dump

k get nodes -o wide; k get pods  -A -o wide

Install an overlay network provider on the control-plane node

As I mentioned above, in order to use the DNS and Service Discovery services in kubernetes (CoreDNS), we need to install a Container Network Interface (CNI) based Pod network add-on so that our Pods can communicate with each other.

We will use flannel as the simplest of them.

k apply -f

Verify CoreDNS is running on the control-plane node

Verify that the control-plane node is Up & Running and that the control-plane pods (such as the coredns pods) are also running:

$ k get nodes -o wide

k8scpnode   Ready    control-plane   54s   v1.25.0   <none>        Ubuntu 22.04.1 LTS   5.15.0-46-generic   containerd://1.6.8
$ k get pods -A -o wide

kube-flannel kube-flannel-ds-zqv2b             1/1   Running 0        36s k8scpnode <none>         <none>
kube-system  coredns-565d847f94-lg54q          1/1   Running 0        38s      k8scpnode <none>         <none>
kube-system  coredns-565d847f94-ms8zk          1/1   Running 0        38s      k8scpnode <none>         <none>
kube-system  etcd-k8scpnode                    1/1   Running 0        50s k8scpnode <none>         <none>
kube-system  kube-apiserver-k8scpnode          1/1   Running 0        50s k8scpnode <none>         <none>
kube-system  kube-controller-manager-k8scpnode 1/1   Running 0        50s k8scpnode <none>         <none>
kube-system  kube-proxy-pv7tj                  1/1   Running 0        39s k8scpnode <none>         <none>
kube-system  kube-scheduler-k8scpnode          1/1   Running 0        50s k8scpnode <none>         <none>

That’s it with the control-plane node !

Worker Nodes

The below instructions work pretty much the same on both worker nodes.

I will document the steps for the worker1 node; do the same for the worker2 node.

Ports on the worker nodes

As we learned above in the control-plane section, kubernetes runs a few services,

Protocol Direction Port Range Purpose Used By
TCP Inbound 10250 Kubelet API Self, Control plane
TCP Inbound 30000-32767 NodePort Services All

Firewall on the worker nodes

so we need to open the necessary ports on the worker nodes too.

sudo ufw allow 10250/tcp
sudo ufw allow 30000:32767/tcp

sudo ufw status

output should look like

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
10250/tcp                  ALLOW       Anywhere
30000:32767/tcp            ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
10250/tcp (v6)             ALLOW       Anywhere (v6)
30000:32767/tcp (v6)       ALLOW       Anywhere (v6)

The next few steps are pretty much exactly the same as in the control-plane node.
In order to keep this documentation short, I’ll just copy/paste the commands.

Hosts file in the worker node

Update the /etc/hosts file to include the IPs and hostname of all VMs.  k8scpnode   k8wrknode1    k8wrknode2

No Swap on the worker node

sudo swapoff -a

Kernel modules on the worker node

sudo tee /etc/modules-load.d/kubernetes.conf <<EOF
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

sudo lsmod | grep netfilter

sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system

NeedRestart on the worker node

echo "\$nrconf{restart} = 'a';" | sudo tee -a /etc/needrestart/needrestart.conf

Installing a Container Runtime on the worker node

curl -sL | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker-keyring.gpg

sudo apt-add-repository -y "deb jammy stable"

sleep 5

sudo apt -y install

containerd config default                              \
 | sed 's/SystemdCgroup = false/SystemdCgroup = true/' \
 | sudo tee /etc/containerd/config.toml

sudo systemctl restart containerd.service

Installing kubeadm, kubelet and kubectl on the worker node

sudo curl -sLo /etc/apt/trusted.gpg.d/kubernetes-keyring.gpg

sudo apt-add-repository -y "deb kubernetes-xenial main"

sleep 5

sudo apt install -y kubelet kubeadm kubectl

sudo kubeadm config images pull

Get Token from the control-plane node

To join nodes to the kubernetes cluster, we need to have a couple of things.

  1. a token from control-plane node
  2. the CA certificate hash from the control-plane node.

If you didn’t keep the output of the initialization of the control-plane node, that’s okay.

Run the below command in the control-plane node.

sudo kubeadm  token list

and we will get the initial token, which expires after 24 hours.

TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
zt36bp.uht4cziweef1jo1h   23h         2022-08-31T18:38:16Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

In this case the token is zt36bp.uht4cziweef1jo1h.

Get Certificate Hash from the control-plane node

To get the CA certificate hash from the control-plane-node, we need to run a complicated command:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

and in my k8s cluster it is:

a4833f8c82953370610efaa5ed93b791337232c3a948b710b2435d747889c085
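The command is less scary when read stage by stage: openssl extracts the public key from the CA certificate, converts it to DER, hashes it, and sed strips the digest label (e.g. "SHA2-256(stdin)= "), leaving bare hex. The last two stages can be tried on any fixed input — a sketch; the real command feeds in the CA public key:

```shell
# Sketch: only the final two stages of the hash pipeline, run on a
# fixed input instead of the CA public key.
printf '%s' 'hello' \
 | openssl dgst -sha256 -hex \
 | sed 's/^.* //'
# → 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```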
Join Workers to the kubernetes cluster

So now, we can Join our worker nodes to the kubernetes cluster.
Run the below command on both worker nodes:

sudo kubeadm join \
       --token zt36bp.uht4cziweef1jo1h \
       --discovery-token-ca-cert-hash sha256:a4833f8c82953370610efaa5ed93b791337232c3a948b710b2435d747889c085

we get this message

Run ‘kubectl get nodes’ on the control-plane to see this node join the cluster.

Is the kubernetes cluster running?

We can verify that:

kubectl get nodes   -o wide
kubectl get pods -A -o wide
k8scpnode    Ready    control-plane   64m     v1.25.0   <none>        Ubuntu 22.04.1 LTS   5.15.0-46-generic   containerd://1.6.8
k8wrknode1   Ready    <none>          2m32s   v1.25.0    <none>        Ubuntu 22.04.1 LTS   5.15.0-46-generic   containerd://1.6.8
k8wrknode2   Ready    <none>          2m28s   v1.25.0     <none>        Ubuntu 22.04.1 LTS   5.15.0-46-generic   containerd://1.6.8
NAMESPACE      NAME                                READY   STATUS    RESTARTS      AGE     IP                NODE         NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-52g92               1/1     Running   0             2m32s    k8wrknode1   <none>           <none>
kube-flannel   kube-flannel-ds-7qlm7               1/1     Running   0             2m28s     k8wrknode2   <none>           <none>
kube-flannel   kube-flannel-ds-zqv2b               1/1     Running   0             64m   k8scpnode    <none>           <none>
kube-system    coredns-565d847f94-lg54q            1/1     Running   0             64m        k8scpnode    <none>           <none>
kube-system    coredns-565d847f94-ms8zk            1/1     Running   0             64m        k8scpnode    <none>           <none>
kube-system    etcd-k8scpnode                      1/1     Running   0             64m   k8scpnode    <none>           <none>
kube-system    kube-apiserver-k8scpnode            1/1     Running   0             64m   k8scpnode    <none>           <none>
kube-system    kube-controller-manager-k8scpnode   1/1     Running   1 (12m ago)   64m   k8scpnode    <none>           <none>
kube-system    kube-proxy-4khw6                    1/1     Running   0             2m32s    k8wrknode1   <none>           <none>
kube-system    kube-proxy-gm27l                    1/1     Running   0             2m28s     k8wrknode2   <none>           <none>
kube-system    kube-proxy-pv7tj                    1/1     Running   0             64m   k8scpnode    <none>           <none>
kube-system    kube-scheduler-k8scpnode            1/1     Running   1 (12m ago)   64m   k8scpnode    <none>           <none>

That’s it !

Our k8s cluster is running.

Kubernetes Dashboard

Kubernetes Dashboard is a general-purpose, web-based UI for Kubernetes clusters. It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself.

We can proceed by installing a k8s dashboard to our k8s cluster.

Install kubernetes dashboard

One simple way to install the kubernetes-dashboard is by applying the latest (as of this writing) yaml configuration file.

kubectl apply -f

the output of the above command should be something like

namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Verify the installation

kubectl get all -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-64bcc67c9c-kvll7   1/1     Running   0          2m16s
pod/kubernetes-dashboard-66c887f759-rr4gn        1/1     Running   0          2m16s

NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/dashboard-metrics-scraper   ClusterIP    <none>        8000/TCP   2m16s
service/kubernetes-dashboard        ClusterIP   <none>        443/TCP    2m16s

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dashboard-metrics-scraper   1/1     1            1           2m16s
deployment.apps/kubernetes-dashboard        1/1     1            1           2m16s

NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/dashboard-metrics-scraper-64bcc67c9c   1         1         1       2m16s
replicaset.apps/kubernetes-dashboard-66c887f759        1         1         1       2m16s

Add a Node Port to kubernetes dashboard

Kubernetes Dashboard by default runs on an internal 10.x.x.x IP.

To access the dashboard we need to have a NodePort in the kubernetes-dashboard service.

We can either Patch the service or edit the yaml file.

Patch kubernetes-dashboard

kubectl --namespace kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'


service/kubernetes-dashboard patched

verify the service

kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP    <none>        8000/TCP        11m
kubernetes-dashboard        NodePort   <none>        443:32709/TCP   11m

we can see the NodePort 32709 on the kubernetes-dashboard service.

Edit kubernetes-dashboard Service

kubectl edit svc -n kubernetes-dashboard kubernetes-dashboard

and change the service type from

type: ClusterIP

to
type: NodePort

Accessing Kubernetes Dashboard

The kubernetes-dashboard has two (2) pods, one (1) for metrics and one (1) for the dashboard.

To access the dashboard, first we need to identify on which Node it is running.

kubectl get pods -n kubernetes-dashboard -o wide
NAME                                         READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-64bcc67c9c-fs7pt   1/1     Running   0          2m43s   k8wrknode1   <none>           <none>
kubernetes-dashboard-66c887f759-pzt4z        1/1     Running   0          2m44s   k8wrknode2   <none>           <none>

In my setup the dashboard pod is running on the worker node 2, and from /etc/hosts we can find its IP.

The NodePort is 32709

k get svc -n kubernetes-dashboard -o wide

So, we can open a new tab on our browser and type:

and accept the self-signed certificate!


Create An Authentication Token (RBAC)

Last step for the kubernetes-dashboard is to create an authentication token.

Creating a Service Account

Create a new yaml file with kind: ServiceAccount, named admin-user, in the kubernetes-dashboard namespace.

cat > kubernetes-dashboard.ServiceAccount.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF


add this service account to the k8s cluster

kubectl apply -f kubernetes-dashboard.ServiceAccount.yaml


serviceaccount/admin-user created

Creating a ClusterRoleBinding

We need to bind the Service Account with the kubernetes-dashboard via Role-based access control.

cat > kubernetes-dashboard.ClusterRoleBinding.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF


apply this yaml file

kubectl apply -f kubernetes-dashboard.ClusterRoleBinding.yaml

clusterrolebinding.rbac.authorization.k8s.io/admin-user created

That means our Service Account User has all the necessary roles to access the kubernetes-dashboard.

Getting a Bearer Token

The final step is to create/get a token for our user.

kubectl -n kubernetes-dashboard create token admin-user

Add this token to the previous login page


Browsing Kubernetes Dashboard

eg. Cluster –> Nodes


Nginx App

Before finishing this blog post, I would also like to share how to install a simple nginx-app, as it is customary to do such a thing in every new k8s cluster.

But please excuse me, I will not go into much detail.
You should be able to understand the below k8s commands.

Install nginx-app

kubectl create deployment nginx-app --image=nginx --replicas=2
deployment.apps/nginx-app created

Get Deployment

kubectl get deployment nginx-app -o wide
nginx-app   2/2     2            2           64s   nginx        nginx    app=nginx-app

Expose Nginx-App

kubectl expose deployment nginx-app --type=NodePort --port=80
service/nginx-app exposed

Verify Service nginx-app

kubectl get svc nginx-app -o wide
nginx-app   NodePort   <none>        80:31761/TCP   27s   app=nginx-app

Describe Service nginx-app

kubectl describe svc nginx-app
Name:                     nginx-app
Namespace:                default
Labels:                   app=nginx-app
Annotations:              <none>
Selector:                 app=nginx-app
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31761/TCP
Endpoints:      ,
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Curl Nginx-App

<!DOCTYPE html>
<title>Welcome to nginx!</title>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href=""></a>.<br/>
Commercial support is available at
<a href=""></a>.</p>

<p><em>Thank you for using nginx.</em></p>

Nginx-App from Browser


That’s it !

I hope you enjoyed this blog post.



libvirt_domain.domain-ubuntu["k8wrknode1"]: Destroying... [id=446cae2a-ce14-488f-b8e9-f44839091bce]
libvirt_domain.domain-ubuntu["k8scpnode"]: Destroying... [id=51e12abb-b14b-4ab8-b098-c1ce0b4073e3]
time_sleep.wait_for_cloud_init: Destroying... [id=2022-08-30T18:02:06Z]
libvirt_domain.domain-ubuntu["k8wrknode2"]: Destroying... [id=0767fb62-4600-4bc8-a94a-8e10c222b92e]
time_sleep.wait_for_cloud_init: Destruction complete after 0s
libvirt_domain.domain-ubuntu["k8wrknode1"]: Destruction complete after 1s
libvirt_domain.domain-ubuntu["k8scpnode"]: Destruction complete after 1s
libvirt_domain.domain-ubuntu["k8wrknode2"]: Destruction complete after 1s["k8wrknode1"]: Destroying... [id=/var/lib/libvirt/images/Jpw2Sg_cloud-init.iso;b8ddfa73-a770-46de-ad16-b0a5a08c8550]["k8wrknode2"]: Destroying... [id=/var/lib/libvirt/images/VdUklQ_cloud-init.iso;5511ed7f-a864-4d3f-985a-c4ac07eac233]
libvirt_volume.ubuntu-base["k8scpnode"]: Destroying... [id=/var/lib/libvirt/images/l5Rr1w_ubuntu-base]
libvirt_volume.ubuntu-base["k8wrknode2"]: Destroying... [id=/var/lib/libvirt/images/VdUklQ_ubuntu-base]["k8scpnode"]: Destroying... [id=/var/lib/libvirt/images/l5Rr1w_cloud-init.iso;11ef6bb7-a688-4c15-ae33-10690500705f]
libvirt_volume.ubuntu-base["k8wrknode1"]: Destroying... [id=/var/lib/libvirt/images/Jpw2Sg_ubuntu-base]["k8wrknode1"]: Destruction complete after 1s
libvirt_volume.ubuntu-base["k8wrknode2"]: Destruction complete after 1s["k8scpnode"]: Destruction complete after 1s["k8wrknode2"]: Destruction complete after 1s
libvirt_volume.ubuntu-base["k8wrknode1"]: Destruction complete after 1s
libvirt_volume.ubuntu-base["k8scpnode"]: Destruction complete after 2s
libvirt_volume.ubuntu-vol["k8wrknode1"]: Destroying... [id=/var/lib/libvirt/images/Jpw2Sg_ubuntu-vol]
libvirt_volume.ubuntu-vol["k8scpnode"]: Destroying... [id=/var/lib/libvirt/images/l5Rr1w_ubuntu-vol]
libvirt_volume.ubuntu-vol["k8wrknode2"]: Destroying... [id=/var/lib/libvirt/images/VdUklQ_ubuntu-vol]
libvirt_volume.ubuntu-vol["k8scpnode"]: Destruction complete after 0s
libvirt_volume.ubuntu-vol["k8wrknode2"]: Destruction complete after 0s
libvirt_volume.ubuntu-vol["k8wrknode1"]: Destruction complete after 0s["k8scpnode"]: Destroying... [id=l5Rr1w]["k8wrknode2"]: Destroying... [id=VdUklQ]["k8wrknode1"]: Destroying... [id=Jpw2Sg]["k8wrknode2"]: Destruction complete after 0s["k8scpnode"]: Destruction complete after 0s["k8wrknode1"]: Destruction complete after 0s

Destroy complete! Resources: 16 destroyed.

Wednesday, 24 August 2022

Remove Previous GitLab Pipelines from a project

So you built a GitLab project, you created a pipeline, and then a scheduler to run your pipeline every week.

And then you realize that you are polluting the internet with deprecated (garbage) things; at some point you had a debug option on, bla bla bla… etc etc.

It is time to clean up your mess!

Create a GitLab API Token

aka Personal Access Tokens

Select scopes: api.

Verify API token

run something like this

export GITLAB_API="glpat-HldkXzyurwBmroAdQCMo"

curl -sL --header "PRIVATE-TOKEN: ${GITLAB_API}" "" | jq .[].path_with_namespace

you should see your projects.

Get your Project ID

create a new bash variable:

export PROJECT="terraform-provider/terraform-provider-hcloud-ci"

and then use the GET REST API call:

curl -sL --header "PRIVATE-TOKEN: ${GITLAB_API}" "${PROJECT}&search_namespaces=true" | jq -r .[].id

or you can also put the id into a new bash variable:

export ID=$(curl -sL --header "PRIVATE-TOKEN: ${GITLAB_API}" "${PROJECT}&search_namespaces=true" | jq -r .[].id)

View the previous pipelines

curl -sL \
  --header "PRIVATE-TOKEN: ${GITLAB_API}" \${ID}/pipelines | jq .

Remove deprecated pipelines

just delete them via the API

curl -sL --header "PRIVATE-TOKEN: ${GITLAB_API}"   "${ID}/pipelines?per_page=150"  \
 | jq  -r .[].id    \
 | awk '{print "curl -sL --header \"PRIVATE-TOKEN: ${GITLAB_API}\" --request DELETE${ID}/pipelines/"$1}'   \
 | sh -x
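Before appending | sh -x to anything, it helps to see what the awk stage actually generates: one DELETE command per pipeline id. A sketch with fixed ids in place of the API response — ${API_URL} is a placeholder for the elided project endpoint:

```shell
# Sketch: the awk stage turns a list of pipeline ids (fixed values here
# instead of the jq output) into one DELETE command per line.
# ${API_URL} is a placeholder; review the output before piping to sh.
printf '%s\n' 101 102 103 \
 | awk '{print "curl -sL --header \"PRIVATE-TOKEN: ${GITLAB_API}\" --request DELETE ${API_URL}/pipelines/"$1}'
```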

that’s it !

Tag(s): gitlab, api, pipeline

Monday, 15 August 2022

Pessimistic perspectives on technological sustainability

I was recently perusing the Retro Computing Forum when I stumbled across a mention of Collapse OS. If your anxiety levels have not already been maxed out during the last few years of climate breakdown, psychological warfare, pandemic, and actual warmongering, accompanied by supply chain breakdowns, initially in technology and exacerbated by overconsumption and spivcoin, now also in commodities and exacerbated by many of those other factors (particularly the warmongering), then perhaps focusing on societal and civilisational collapse isn’t going to improve your mood or your outlook. Unusually, then, after my last, rather negative post on such topics, may I be the one to introduce some constructive input and perhaps even some slight optimism?

If I understand the motivations behind Collapse OS correctly, it is meant to provide a modest computing environment that can work on well-understood, commonplace, easily repaired and readily sourced hardware, with the software providing the environment itself being maintainable on the target hardware, as opposed to being cross-built on more powerful hardware and then deployed to simpler, less capable hardware. The envisaged scenario for its adoption is a world where powerful new hardware is no longer produced or readily available and where people must scavenge and “make do” with the hardware already produced. Although civilisation may have brought about its own collapse, the consolation is that so much hardware will have been strewn across the planet for a variety of purposes that even after semiconductor fabrication and sophisticated manufacturing have ceased, there will remain a bounty of hardware usable for people’s computational needs (whatever they may be).

I am not one to try and predict the future, and I don’t really want to imagine it as being along the same lines as the plot for one of Kevin Costner’s less successful movies, either, but I feel that Collapse OS and its peers, in considering various dystopian scenarios and strategies to mitigate their impacts, may actually offer more than just a hopefully sufficient kind of preparedness for a depressing future. In that future, without super-fast Internet, dopamine-fired social media, lifelike gaming, and streaming video services with huge catalogues of content available on demand, everyone has to accept that far less technology will be available to them: they get no choice in the matter. Investigating how they might manage is at the very least an interesting thought experiment. But we would be foolish to consider such matters as purely part of a possible future and not instructive in other ways.

An Overlap of Interests

As readers of my previous articles will be aware, I have something of an interest in older computers, open source hardware, and sustainable computing. Older computers lend themselves to analysis and enhancement even by individuals with modest capabilities and tools because they employ technologies that may have been regarded as “miniaturised” when they were new, but they were still amenable to manual assembly and repair. Similarly, open source hardware has grown to a broad phenomenon because the means to make computing systems and accessories has now become more accessible to individuals, as opposed to being the preserve of large and well-resourced businesses. Where these activities experience challenges, it is typically in the areas that have not yet become quite as democratised, such as semiconductor fabrication at the large-scale integration level, along with the development and manufacture of more advanced technology, such as components and devices that would be competitive with off-the-shelf commercial products.

Some of the angst around open source hardware concerns the lack of investment it receives from those who would benefit from it, but much of that investment would largely be concerned with establishing an ability to maintain some kind of parity with modern, proprietary hardware. Ignoring such performance-led requirements and focusing on simpler hardware projects, as many people already do, brings us a lot closer to retrocomputing and a lot closer to the constrained hardware scenario envisaged by Collapse OS. My own experiments with PIC32-based microcontrollers are not too far removed from this, and it would not be inconceivable to run a simple environment in the 64K of RAM and 256K of flash memory of the PIC32MX270, this being much more generous than many microcomputers and games consoles of the 1980s.

Although I relied on cross-compilation to build the programs that would run on the minimal hardware of the PIC32 microcontroller, Collapse OS emphasises self-hosting: that it is possible to build the software within the running software itself. After all, how sustainable would a frugal computing environment be if it needed a much more powerful development system to fix and improve it? For Collapse OS, such self-hosting is enabled by the use of the Forth programming language, as explained by the rationale for switching to Forth from a system implemented in assembly language. Such use of Forth is not particularly unusual: its frugal demands were prized in the microcomputer era and earlier, with its creator Charles Moore describing the characteristics of a computer designed to run Forth as needing around 8K of RAM and 8K of ROM, this providing a complete interactive system.

(If you are interested in self-hosting and bootstrapping, one place to start might be the bootstrapping wiki.)

For a short while, Forth was perhaps even thought to be the hot new thing in some circles within computing. One fairly famous example was the Jupiter Ace microcomputer, developed by former Sinclair Research designers, offering a machine that followed on fairly closely from Sinclair's rudimentary ZX81. But in a high-minded way one might have expected from the Sinclair stable and the Cambridge scene, it offered Forth as its built-in language in response to all the other microcomputers offering "unstructured" BASIC dialects. Worthy as such goals might have been, the introduction of a machine with outdated hardware specifications condemned it in its target market as a home computer: it offered primitive black-and-white display output against competitors with multi-colour graphics, and limited amounts of memory as competitors launched with far more fitted as standard. Interestingly, the Z80 processor at the heart of the Ace was the primary target of Collapse OS, and one might wonder if the latter might actually be portable to the former, which would be an interesting project if any hardware collector wants to give it a try!

Other Forth-based computers were delivered, such as the Canon Cat: an unusual "information appliance" that might have formed the basis of Apple's Macintosh had that project not been diverted towards following up on the Apple Lisa. Dedicated Forth processors were even delivered, as anticipated already by Moore back in 1980, reminiscent of the Lisp machine era. However, one hardware-related legacy of Forth is that of the Open Firmware standard, where a Forth environment provides an interactive command-line interface to a system's bootloader. Collapse OS fits in pretty well with that kind of application of Forth. Curiously, someone did contact me when I first wrote about my PIC32 experiments, this person maintaining their own microcontroller Forth implementation, and in the context of this article I have re-established contact because I never managed to properly follow up on the matter.

Changing the Context

According to a broad interpretation of the Collapse OS hardware criteria, the PIC32MX270 would actually not be a bad choice. Like the AVR microcontrollers and the microprocessors of the 1980s, PIC32MX microcontrollers are available in convenient dual in-line packages, but unlike those older microprocessors they also offer the 32-bit MIPS architecture that is nicer to program than the awkward instruction sets of the likes of the Z80 and 6502, no matter how much nostalgia colours people’s preferences. However, instead of focusing on hardware suitability in a resource-constrained future, I want to consider the messages of simplicity and sustainability that underpin the Collapse OS initiative and might be relevant to the way we practise computing today.

When getting a PIC32 microcontroller to produce a video signal, part of the motivation was just to see how straightforward it might be to make a simple "single chip" microcomputer. Like many microcomputers back in the 1980s, it became tempting to consider how it might be used to deliver graphical demonstrations and games, but I also wondered what kind of role such a system might have in today's world. Similar projects, including the first versions of the Maximite, have emphasised such things as well, along with interfacing and educational applications (such as learning BASIC). Indeed, many low-end microcontroller-based computers attempt to recreate and to emphasise the sparse interfaces of 1980s microcomputers as a distraction-free experience for learning and teaching.

Eliminating distractions is a worthy goal, whether those distractions are things that we can conveniently seek out when our attention wanders, such as all our favourite, readily accessible Internet content, or whether they come in the form of the notifications that plague “modern” user interfaces. Another is simply reducing the level of consumption involved in our computational activities: civilisational collapse would certainly impose severe limits on that kind of consumption, but it would seem foolish to acknowledge that and then continue on the same path of ever-increasing consumption that also increasingly fails to deliver significant improvements in the user experience. When desktop applications, mobile “apps”, and Web sites frequently offer sluggish and yet overly-simplistic interfaces that are more infuriating than anything else, it might be wise to audit our progress and reconsider how we do certain things.

Human nature has us constantly exploring the boundaries of what is possible with technology, but some things which captivate people at any given point on the journey of technological progress may turn out to be distracting diversions from the route ultimately taken. In my trawl of microcomputing history over the last couple of years, I was reminded of an absurd but illustrative example of how certain technological exercises seem to become the all-consuming focus of several developers, almost being developed for the sake of it, before the fad in question flames out and everybody moves on. That example concerned “morphing” software, inspired by visual effects from movies such as Terminator 2, but operating on a simpler, less convincing level.

Suddenly, such effects were all over the television and for a few months in late 1993, everyone was supposedly interested in making the likeness of one famous person slowly change into the likeness of another, never mind that it really required a good library of images, this being somewhat before widespread digital imaging and widespread image availability. Fast-forward a few years, and it all seemed like a crazy mass delusion best never spoken of again. We might want to review our own time’s obsessions with performative animations and effects, along with the peculiarities of touch-based interfaces, the assumption of pervasive and fast connectivity, and how these drive hardware consumption and obsolescence.

Once again, some of this comes back to asking how people managed to do things in earlier times and why things sometimes seem so complicated now. Thinking back to the 1980s era of microcomputing, my favourite 8-bit computer of those times was the Acorn Electron, this being the one I had back then, and it was certainly possible to equip it to do word processing to a certain level. Acorn even pitched an expanded version as a messaging terminal for British Telecom, although I personally think that they could have made more of such opportunities, especially given the machine’s 80-column text capabilities being made available at such a low price. The user experience would not exactly be appealing by today’s standards, but then nor would that of Collapse OS, either.

When I got my PIC32 experiment working reasonably, I asked myself if it would be sufficient for tasks like simple messaging and writing articles like this. The answer, assuming that I would enhance that effort to use a USB keyboard and external storage, is probably the same as whether anyone might use a Maximite for such applications: it might not be as comfortable as on a modern system but it would be possible in some way. Given the tricks I used, certain things would actually be regressions from the Electron, such as the display resolution. Conversely, the performance of a 48MHz MIPS-based processor is obviously going to be superior to a 2MHz 6502, even when having to generate the video signal, thus allowing for some potential in other areas.

Reversing Technological Escalation

Using low-specification hardware for various applications today, considering even the PIC32 as low-spec and ignoring the microcomputers of the past, would also need us to pare back the demands that such applications have managed to accumulate over the years. As more powerful, higher-performance hardware has become available, software, specifications and standards have opportunistically grown to take advantage of that extra power, leaving many people bewildered by the result: their new computer being just as slow as their old one, for example.

Standards can be particularly vulnerable where entrenched interests drive hardware consumption whilst seeking to minimise the level of adaptation their own organisations will need to undertake in order to deliver solutions based on such standards. A severely constrained computing device may not have the capacity or performance to handle all the quirks of a “full fat” standard, but it might handle an essential core of that standard, ignoring all the edge cases and special treatment for certain companies’ products. Just as important, the developers of an implementation handling a standard also may not have the capacity or tenacity for a “full fat” standard, but they may do a reasonable job handling one that cuts out all the corporate cruft.

And beyond the technology needed to perform some kind of transaction as part of an activity, we might reconsider what is necessary to actually perform that activity. Here, we may consider the more blatant case of the average "modern" Web site or endpoint, where an activity may end up escalating and involving the performance of a number of transactions, many of which are superfluous and, in the case of the pervasive cult of analytics, exploitative. What once may have been a simple Web form is often now an "experience" where the browser connects to dozens of sites, where all the scripts poll the client computer into oblivion, and where the functionality somehow doesn't manage to work, anyway (as I recently experienced on one airline's Web site).

Technologists and their employers may drive consumption, but so do their customers. Public institutions, utilities and other companies may lazily rely on easily procured products and services, these insisting (for “security” or “the best experience”) that only the latest devices or devices from named vendors may be used to gain access. Here, the opposite of standardisation occurs, where adherence to brand names dictates the provision of service, compounded by the upgrade treadmill familiar from desktop computing, bringing back memories of Microsoft and Intel ostensibly colluding to get people to replace their computer as often as possible.

A Broader Brush

We don’t need to go back to retrocomputing levels of technology to benefit from re-evaluating the prevalent technological habits of our era. I have previously discussed single-board computers like the MIPS Creator CI20 which, in comparison to contemporary boards from the Raspberry Pi series, was fairly competitive in terms of specification and performance (having twice the RAM of the Raspberry Pi Models A+, B and B+). Although hardly offering conventional desktop performance upon its introduction, the CI20 would have made a reasonable workstation in certain respects in earlier times: its 1GHz CPU and 1GB of RAM should certainly be plenty for many applications even now.

Sadly, starting up and using the main two desktop environments on the CI20 is an exercise in patience, and I recommend trying something like the MATE desktop environment just for something responsive. Using a Web browser like Firefox is a trial, and extensive site blocking is needed just to prevent the browser wanting to download things from all over the place, as it tries to do its bit in shoring up Google’s business model. My father was asking me the other day why a ten-year-old computer might be slow on a “modern” Web site but still perfectly adequate for watching video. I would love to hear the Firefox and Chrome developers, along with the “architects of the modern Web”, give any explanation for this that doesn’t sound like they are members of some kind of self-realisation cult.

If we can envisage a microcomputer, either a vintage one or a modern microcontroller-based one, performing useful computing activities, then we can most certainly envisage machines of ten or so years ago, even ones behind the performance curve, doing so as well. And by realising that, we might understand that we have the power to slow down the engineered obsolescence of computing hardware and to bring usable hardware back into use. Since not everyone on the planet can afford the latest and greatest, we might even put usable hardware into the hands of more people who might benefit from it.

Naturally, this perspective is rather broader than one that only considers a future of hardship and scarcity, but hardship and scarcity are part of the present, just as they have always been part of the past. Applying many of the same concerns and countermeasures to today’s situation, albeit in less extreme forms, means that we have the power to mitigate today’s situation and, if we are optimistic, perhaps steer it away from becoming the extreme situation that the Collapse OS initiative seeks to prepare for.

Concrete Steps

I have, in the past, been accused of complaining about injustices too generally for my complaints to be taken seriously, never mind such injustices being blatant and increasingly obvious in our modern societies and expressed through the many crises of our times. So how might we seek to mitigate widespread hardware obsolescence and technology-driven overconsumption? Some suggestions in a concise list for those looking for actionable things:

  • Develop, popularise and mandate lightweight formats, protocols and standards
  • Encourage interoperability and tolerance for multiple user interfaces, clients and devices
  • Insist on an unlimited “right to repair” for computing devices including the software
  • Encourage long-term thinking in software and systems development

And now for some elucidation…

Mandatory Accessible Standards

This suggestion has already been described above, but where it would gain its power is in the idea of mandating that public institutions and businesses would be obliged to support lightweight formats, protocols and standards, and not simply as an implementation detail for their chosen “app”, like a REST endpoint might be, but actually as a formal mechanism providing service to those who would interact with those institutions. This would make the use of a broad range of different devices viable, and in the case of special-purpose devices for certain kinds of users, particularly those who would otherwise be handed a smartphone and told to “get with it”, it would offer a humane way of accessing services that is currently denied to them.

For simple dialogue-based interactions, existing formats such as JSON might even be sufficient as they are. I am reminded of a paper I read when putting together my degree thesis back in the 1990s, where the idea was that people would be able to run programs safely in their mail reader, with one example being that of submitting forms.

T-shirt ordering dialogues shown by Safe-Tcl

T-shirt ordering dialogues shown by Safe-Tcl running in a mail program, offering the recipient the chance to order some merchandise that might not be as popular now.

In that paper, most of the emphasis was on the safety of the execution environment as opposed to the way in which the transaction was to be encoded, but it is not implausible that one might have encoded the details of the transaction – the T-shirt size (with the recipient's physical address presumably already being known to the sender) – in a serialised form of the programming language concerned (Safe-Tcl) as opposed to just dumping some unstructured text in the body of a mail. I would need to dig out my own thesis to see what ideas I had for serialised information. Certainly, such transactions, even embellished with other details and choices and with explanatory information, prompts and questions, do not require megabytes of HTML, CSS, JavaScript, images, videos and so on.
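As a rough illustration of how little such a transaction actually needs, here is a hypothetical JSON encoding of the T-shirt order described above, sketched in Python. The field names are made up for illustration and do not correspond to any existing standard:

```python
import json

# A hypothetical, minimal encoding of the T-shirt transaction: the whole
# exchange fits in a few hundred bytes, with no HTML, CSS or scripts.
# Field names are illustrative only, not part of any real standard.
order_form = {
    "service": "merchandise-order",
    "item": "t-shirt",
    "fields": [
        {"name": "size", "type": "choice",
         "options": ["S", "M", "L", "XL"], "prompt": "T-shirt size"}
    ],
}

# The client renders the form using its own interface conventions
# and returns an equally small response.
response = {"service": "merchandise-order", "size": "L"}

encoded = json.dumps(response)
decoded = json.loads(encoded)
assert decoded == response
print(len(encoded), "bytes for the whole response")
```

A constrained client could parse and render such a form with its own native widgets, which is precisely the kind of device choice discussed in the next section.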

Interoperability and Device Choice

One thing that the Web was supposed to liberate us from was the insistence that to perform a particular task, we needed a particular application, and that particular application was only available on a particular platform. In the early days, HTML was deliberately simplistic in its display capabilities, and people had to put up with Web pages that looked very plain until things like font tags allowed people to go wild. With different factions stretching HTML in all sorts of directions, CSS was introduced to let people apply presentation attributes to documents, supposedly without polluting or corrupting the original HTML that would remain semantically pure. We all know how this turned out, particularly once the Web 2.0 crowd got going.

Back in the 1990s, I worked on an in-house application at my employer that used a document model inspired by SGML (as HTML had been), and the graphical user interface to the documents being exchanged initially offered a particular user interface paradigm when dealing with collections of data items, this being the one embraced by the Macintosh’s Finder when showing directory hierarchies in what we would now call a tree view. Unfortunately, users seemed to find expanding and hiding things by clicking on small triangles to be annoying, and so alternative presentation approaches were explored. Interestingly, the original paradigm would be familiar even now to those using generic XML editor software, but many people would accept that while such low-level editing capabilities are nice to have, higher-level representations of the data are usually much more preferable.

Such user preferences could quite easily be catered to through the availability of client software that works in the way they expect, rather than the providers of functionality or the operators of services trying to gauge what the latest fashions in user interfaces might be, as we have all seen when familiar Web sites change to mimic something one would expect to see on a smartphone, even with a large monitor on a desk with plenty of pixels to spare. With well-defined standards, if a client device or program were to see that it needed to allow a user to peruse a large collection of items or to choose a calendar date, it would defer to the conventions of that device or platform, giving the user the familiarity they expect.

This would also allow clients and devices with a wide range of capabilities to be used. The Web tried to deliver a reasonable text-only experience for a while, but most sites can hardly be considered usable in a textual browser these days. And although there is an “accessibility story” for the Web, it largely appears to involve retrofitting sites with semantic annotations to help users muddle through the verbose morass encoded in each page. Certainly, the Web of today does do one thing reasonably by mixing up structure and presentation: it can provide a means of specifying and navigating new kinds of data that might be unknown to the client, showing them something more than a text box. A decent way of extending the range of supported data types would be needed in any alternative, but it would need to spare everyone suddenly having scripts running all over the place.

Rights to Repair

The right to repair movement has traditionally been focused on physical repairs to commercial products, making sure that even if the manufacturer has abandoned a product and really wants you to buy something new from them, you can still choose to have the product repaired so that it can keep serving you well for some time to come. But if hardware remains capable enough to keep doing its job, and if we are able to slow down or stop the forces of enforced obsolescence, we also need to make sure that the software running on the hardware may also be repaired, maintained and updated. A right to repair very much applies to software.

Devotees of the cult of the smartphone, those who think that there is an “app” for everything, should really fall silent with shame. Not just for shoehorning every activity they can think of onto a device that is far from suitable for everyone, and not just for mandating commercial relationships with large multinational corporations for everyone, but also for the way that happily functioning smartphones have to be discarded because they run software that is too old and cannot be fixed or upgraded. Demanding the right to exercise the four freedoms of Free Software for our devices means that we get to decide when those devices are “too old” for what we want to use them for. If a device happens to be no longer usable for its original activity even after some Free Software repairs, we can repurpose it for something else, instead of having the vendor use those familiar security scare stories and pretending that they are acting in our best interests.

Long-Term Perspectives

If we are looking to preserve the viability of our computing devices by demanding interoperability to give them a chance to participate in the modern world and by demanding that they may be repaired, we also need to think about how the software we develop may itself remain viable, both in terms of the ability to run the software on available devices as well as the ability to keep maintaining, improving and repairing it. That potentially entails embracing unfashionable practices because “modern” practices do not exactly seem conducive to the kind of sustainable activities we have in mind.

I recently had the opportunity to contemplate the deployment of software in "virtual environments" containing entire software stacks, each running its own Web server program, that would receive their traffic from another Web server program running in the same virtual machine, all of this running in some cloud infrastructure. It was either that or using containers containing whole software distributions, these being deployed inside virtual machines containing their own software distributions. All because people like to use the latest and greatest stuff for everything, this stuff being constantly churned by fashionable development methodologies and downloaded needlessly over and over again from centralised Internet services run by monopolists.

Naturally, managing gigabytes of largely duplicated software is worlds, if not galaxies, away from the modest computing demands of things like Collapse OS, but it would be distasteful to anyone even a decade ago and shocking to anyone even a couple of decades ago. Unfashionable as it may seem now, software engineering courses once emphasised things like modularity and the need for formal interfaces between modules in systems. And a crucial benefit of separating out functionality into modules is to allow those modules to mature, do the job they were designed for, and to recede into the background and become something that can be relied upon and not need continual, intensive maintenance. There is almost nothing better than writing a library that one may use constantly but never need to touch again.

Thus, the idea that a precarious stack of precisely versioned software is required to deliver a solution is absurd, but it drives the attitude that established software distributions only deliver “old” software, and it drives the demand for wasteful container or virtual environment solutions whose advocates readily criticise traditional distributions whilst pilfering packages from them. Or as Docker users might all too easily say, “FROM debian:sid”. Part of the problem is that it is easy to rely on methods of mass consumption to solve problems with software – if something is broken, just update and see if it fixes it – but such attitudes permeate the entire development process, leading to continual instability and a software stack constantly in flux.

Dealing with a multitude of software requirements is certainly a challenging problem that established operating systems struggle to resolve convincingly, despite all the shoehorning of features into the Linux technology stack. Nevertheless, the topic of operating system design is rather outside the scope of this article. Closer to relevance is the matter of how people seem reluctant to pick a technology and stick with it, particularly in the realm of programming languages. Then again, I covered much of this before and fairly recently, too. Ultimately, we want to be establishing software stacks that people can readily acquaint themselves with decades down the line, without the modern-day caveats that “feature X changed in version Y” and that if you were not there at the time, you have quite the job to do to catch up with that and everything else that went on, including migrations to a variety of source management tools and venues, maybe even completely new programming languages.

A Different Mindset

If anything, Collapse OS makes us consider a future beyond tomorrow, next week, next year, or a few years’ time. Even if the wheels do not start falling off the vehicle of human civilisation, there are still plenty of other things that can go away without much notice. Corporations like Apple and Google might stick around, whether that is good news or not, but it does not stop them from pulling the plug on products and services. Projects and organisations do not always keep going forever, not least because they are often led by people who cannot keep going forever, either.

There are ways we can mitigate these threats to sustainability and longevity, however. We can share our software through accessible channels without insisting that others use those monopolist-run centralised hosting services. We can document our software so that others have a chance of understanding what we were thinking when we wrote it. We can try and keep the requirements for our software modest and give people a chance to deploy it on modest hardware. And we might think about what kind of world we are leaving behind and whether it is better than the world we were born into.

Come to Barcelona for Akademy 2022!

Akademy is KDE's yearly community event; this year it happens between October 1st and October 7th in my city's comarca [one of the reasons why it's happening here is that our Mr. President tricked me into helping organize it [*]]


You don't need to be a "KDE expert" to join, if you're remotely involved or interested in KDE you should really attend if you can (in person if possible, for me online really doesn't work for conferences), and not only the weekend of talks, but the whole week!

I still remember 2007 when our back-then-newbie Mr. President asked me "should I really go to Akademy? All the week? Is it really worth it?" and I said "as many days as you can", and I guess we made a good enough impression to convince him to stay around and even want to do all the paperwork that involves being on the KDE e.V. board :D

Anyhow, as I said, come to Akademy 2022! It's free, you'll learn a lot, meet nice people, and it will be both fun and productive.

And you should too! Register today!




[*] I should really work on my "no" skills, I'm still working on Okular because decades ago I said "how hard can it be" when someone asked to update KPDF to use a newer version of a dependency.

Friday, 12 August 2022

On a road to Prizren with a Free Software Phone

Since people are sometimes slightly surprised that you can go on a multi-week trip with a smartphone running free software only, I wanted to share some impressions from my recent trip to Prizren/Kosovo to attend Debconf 22 using a Librem 5. It's a mix of things that happened and bits that got improved to hopefully make things more fun to use. And, yes, there won't be any big surprises in this read, like being stranded without the ability to make phone calls, because there weren't any and there shouldn't be.

After two online versions Debconf 22 (the annual Debian Conference) took place in Prizren / Kosovo this year and I sure wanted to go. Looking for options I settled for a train trip to Vienna, to meet there with friends and continue the trip via bus to Zagreb, then switching to a final 11h direct bus to Prizren.

When preparing for the trip and making sure my Librem 5 phone had all the needed documents, I noticed that there would be quite some PDFs to show until I arrived in Kosovo: train ticket, bus ticket, hotel reservation, and so on. While that works by unlocking the phone, opening the file browser, navigating to the folder with the PDFs and showing them via evince, this looked like a lot of steps to repeat. Can't we have that information on the Phone Shell's lockscreen?

This was a good opportunity to see if the upcoming plugin infrastructure for the lock screen (initially meant to allow for a plugin to show upcoming events) was flexible enough, so I used some leisure time on the train to poke at this, and just before I reached Vienna I was able to use it for the first time. That was the very last check of that ticket, and it was also a bit of cheating since I didn't present the ticket on the phone itself but from phosh (the phone's graphical shell) running on my laptop - but still.

PDF barcode on phosh's lockscreen List of tickets on phosh's lockscreen

This was possible since phosh is written in GTK, and so I could just leverage evince's EvView. Unfortunately the hotel check-in didn't want to see any documents ☹.

For the next day I moved the code over to the Librem 5 and (being a bit nervous as the queue to get on the bus was quite long) could happily check into the Flixbus by presenting the barcode to the barcode reader via the Librem 5's lockscreen.

When switching to the bus to Prizren I didn't get to use that feature again, as we bought the tickets at a counter, but we got a nice krem banana after entering the bus - they're not filled with jelly, but krem - a real Kosovo must-eat!

Although it was a rather long trip we had frequent breaks and I'd certainly take the same route again. Here's a photo of Prizren taken on the Librem 5 without any additional postprocessing:


What about seeing the conference schedule on the phone? Confy (a conference schedule viewer using GTK and libhandy) to the rescue:

Confy with Debconf's schedule

Since Debian's Confy maintainer was around too, Confy saw a bunch of improvements during the conference.

For getting around, Puremaps (an application to display maps and show routing instructions) was very helpful, here geolocating me in Prizren via GPS:


Puremaps currently isn't packaged in Debian but there's work ongoing to fix that (I used the flatpak for the moment).

We got ourselves sim cards for the local phone network. For some reason mine wouldn't work (other sim cards from the same operator worked in my phone but this one just wouldn't). So we went to the sim card shop and the guy there was perfectly able to operate the Librem 5 without further explanation (including making calls, sending USSD codes to query balance, …). The sim card problem turned out to be a problem on the operator side and after a couple of days they got it working.

We had nice, sunny weather about all the time. That made me switch between high contrast mode (to read things in bright sunlight) and normal mode (e.g. in conference rooms) on the phone quite often. Thankfully we have an ambient light sensor in the phone so we can make that automatic.

Phosh in HighContrast

See here for a video.

Jathan kicked off a DebianOnMobile sprint during the conference where we were able to improve several aspects of mobile support in Debian and on Friday I had the chance to give a talk about the state of Debian on smartphones. pdf-presenter-console is a great tool for this as it can display the current slide together with additional notes. I needed some hacks to make it fit the phone screen but hopefully we figure out a way to have this by default.

Debconf talk Pdf presenter console on a phone

I had two great weeks in Prizren. Many thanks to the organizers of Debconf 22 - I really enjoyed the conference.

Tuesday, 19 July 2022

Creating a Web-of-Trust Implementation: Certify Keys with PGPainless

Currently I am working on a Web-of-Trust implementation for the OpenPGP library PGPainless. This work will be funded by the awesome NLnet foundation through NGI Assure. Check them out! NGI Assure is made possible with financial support from the European Commission’s Next Generation Internet programme.

NGI Assure

Technically, the WoT consists of a graph where the nodes are OpenPGP keys (certificates) with User-IDs and the edges are signatures. I recommend watching this talk by Justus Winter (22:51) to get an overview of what the benefits of the WoT are. In order to be able to create a WoT, users need to be able to sign other users' certificates to create those edges.

Therefore, support for signing other certificates and User-IDs was added to PGPainless as a milestone of the project. Since release 1.3.2, users have access to a straightforward API to create signatures over other users' certificates. Let’s explore the new API together!

There are two main categories of signatures which are important for the WoT:

  • Certifications are signatures over User-IDs on certificates. A certification is a statement “I believe that the person holding this certificate goes by the name ‘Alice‘”.
  • Delegations are signatures over certificates which can be used to delegate trust decisions to the holder of the signed certificate.
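The graph structure behind these two signature kinds can be sketched in a few lines of Python. This is an illustrative toy model, not the PGPainless API: node names, the edge dictionary and the `introduced` helper are all made up for this example.

```python
# Toy Web-of-Trust model: certificates are nodes, signatures are edges.
from collections import deque

# edges: signer -> list of (signed certificate, kind); kind is either a
# "certification" (binds a User-ID) or a "delegation" (trusted introducer)
wot = {
    "alice": [("ca", "delegation")],
    "ca": [("bob", "certification"), ("carol", "certification")],
}

def introduced(root, target, edges):
    """True if `target` is reachable from `root` via a chain of
    delegations that ends in a certification."""
    queue, seen = deque([root]), set()
    while queue:
        node = queue.popleft()
        for nxt, kind in edges.get(node, []):
            if nxt == target and kind == "certification":
                return True
            if kind == "delegation" and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(introduced("alice", "bob", wot))  # True: alice delegated to the CA
```

Alice never signed Bob's certificate herself; trust flows through her delegation to the CA, which is exactly what the delegation edge type models.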

This is an example for how a user can certify a User-ID:

PGPSecretKeyRing aliceKey = ...;
PGPPublicKeyRing bobsCertificate = ...;

CertificationResult result = PGPainless.certify()
        .userIdOnCertificate("Bob Babbage <>", bobsCertificate)
        .withKey(aliceKey, protector)
        .build();

PGPSignature certification = result.getCertification();
// or...
PGPPublicKeyRing certifiedCertificate = result.getCertifiedCertificate();

It is possible to choose between different certification types, depending on the “quality” of the certification. By default, the certification is created as GENERIC_CERTIFICATION, but other levels can be chosen as well.

Furthermore, it is possible to modify the signature with additional signature subpackets, e.g. custom annotations etc.

In order to create a delegation, e.g. in order to delegate trust to an OpenPGP CA, the user can do as follows:

PGPSecretKeyRing aliceKey = ...;
PGPPublicKeyRing caCertificate = ...;

CertificationResult result = PGPainless.certify()
        .certificate(caCertificate,
                Trustworthiness.fullyTrusted().introducer())
        .withKey(aliceKey, protector)
        .buildWithSubpackets(new CertificationSubpackets.Callback() {
            @Override
            public void modifyHashedSubpackets(
                    CertificationSubpackets hashedSubpackets) {
                // Scope the delegation to a domain (example regex)
                hashedSubpackets.setRegularExpression(
                        "<[^>]+[@.]example[.]org>$");
            }
        });

PGPSignature delegation = result.getCertification();
// or...
PGPPublicKeyRing delegatedCertificate = result.getCertifiedCertificate();

Here, Alice decided to delegate to the CA as a fully trusted introducer, meaning Alice will trust certificates that were certified by the CA.

Depending on the use-case, it is advisable to scope the delegation, e.g. to a specific domain by adding a Regex packet to the signature, as seen above. As a result, Alice will only trust those certificates introduced by the CA, which have a user-id matching the regex. This is optional however.

Check out PGPainless and don’t forget to check out the awesome NLnet foundation!

Thursday, 14 July 2022

Nonbinary Grammatical Gender and Nonboolean Logic

For many years I have been a hobby linguist and have also liked doing math. When learning French and Spanish a long time ago, I discovered that grammatical gender is binary in these languages. Nouns are classified as female or male; a third neuter gender, as it exists in German, does not exist. Adjectives and articles are gendered too. In Spanish and French the World is male (el mundo/le monde), while in German we say die Welt. German also has neuter, as in Das U-Boot (a well known boot loader). Old English had grammatical gender too, but in many cases this has been dropped. Other languages such as Finnish and Esperanto do not have a grammatical gender, or more precisely it is unary in these languages: only one form exists. In Finnish the Moon is called kuu and in Esperanto she is called Luno. Luno is derived from the Latin Luna; Luna is the divine embodiment of the Moon. In many languages including Spanish and Russian, Luna/луна is female. Not so in German, where we say der Mond. In Esperanto Luno sounds male, but remember there is no gender in that language. The o at the end just indicates that Luno is a noun.

When I studied computer science I heard of “Aussagenlogik” (propositional logic), which has two truth values: True (Die Wahrheit) and False (Der Widerspruch), often represented as bits (binary digits). At that time I had never heard the term Nonbinary, but I had heard of nonboolean fuzzy logic and quantum computing. In my head I added a third truth value Unknown (Das Unbekannte), which uses the third, neuter gender. When one operand of a binary operator is unknown, the whole result becomes unknown. With quantum computing we do not have bits but qubits, which are superpositions of one and zero. My gender feels the same; it is a superposition of both male and female, so I prefer to call myself genderqueer.
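The three-valued logic described above can be sketched in Python. Note that the propagation rule here is the author's own (an unknown operand always makes the result unknown), which is stricter than Kleene's three-valued logic, where for instance False AND Unknown is False:

```python
# A sentinel for the third truth value "Unknown" (Das Unbekannte)
UNKNOWN = object()

def and3(a, b):
    # Any unknown operand makes the whole result unknown
    if a is UNKNOWN or b is UNKNOWN:
        return UNKNOWN
    return a and b

def or3(a, b):
    if a is UNKNOWN or b is UNKNOWN:
        return UNKNOWN
    return a or b

print(and3(True, UNKNOWN) is UNKNOWN)  # True: unknown propagates
print(and3(True, False))               # False: ordinary boolean logic
```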

Tuesday, 12 July 2022

KDE Gear 22.08 branches created

Make sure you commit anything you want to end up in the KDE Gear 22.08 releases to those branches.

We're already past the dependency freeze.

The Feature Freeze and Beta is this Thursday, 14 July.

More interesting dates:
  • August 4, 2022: 22.08 RC (22.07.90) Tagging and Release
  • August 11, 2022: 22.08 Tagging
  • August 18, 2022: 22.08 Release

Sunday, 03 July 2022

The deep learning obesity crisis

Deep learning has made dramatic improvements over the last decades. Part of this is attributed to improved methods that allowed training wider and deeper neural networks. It can also be attributed to better hardware, as well as to the development of techniques to use this hardware efficiently. All of this has led to neural networks that grow exponentially in size. But is continuing down this path the best avenue for success?

Deep learning models have gotten bigger and bigger. The figure below shows the accuracy of convolutional neural networks (left) and the size and number of parameters used for the Imagenet competition (right). While the accuracy is increasing and reaching impressive levels, the models get both bigger and use more and more resources. Schwartz et al., 2020 state that, as a result of rewarding accuracy over efficiency, the amount of compute has increased 300k-fold in 6 years, which implies environmental costs as well as a higher barrier to entry in the field.

Size of deep learning models

Deep learning models get better over time but also increase in size (Canziani et al., 2016).
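As a back-of-the-envelope check on the 300k-fold figure quoted above, that growth rate corresponds to compute doubling roughly every four months:

```python
import math

# 300,000-fold growth in 6 years, expressed as a number of doublings
doublings = math.log2(300_000)            # about 18.2 doublings
months_per_doubling = 6 * 12 / doublings  # 72 months spread over them
print(round(months_per_doubling, 1))      # → 4.0
```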

There may be a correlation between the test loss, the number of model parameters, the amount of compute and the dataset size. The loss gets smaller as the network gets bigger, more data is processed and more compute capacity is added, which suggests a power law is at work and that the predictions from deep learning models can only become more accurate. Does that mean neural networks are bound to get bigger? Is there an upper limit above which the rate of accuracy improvement slows down? In that case, changing the paradigm or finding how to get the most out of each parameter would be warranted, so that accuracy may keep increasing without always throwing more neurons and data at the problem.
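A power law of the form L(N) = a * N**(-alpha) shows both effects at once: the loss never stops improving as the parameter count N grows, but each order of magnitude buys a smaller absolute gain. The constants below are made up purely for illustration:

```python
# Illustrative power-law loss curve: loss shrinks with parameter count N,
# but with diminishing returns per order of magnitude.
a, alpha = 10.0, 0.1  # made-up constants for illustration
losses = {n: a * n ** -alpha for n in (10**6, 10**7, 10**8, 10**9)}
for n, loss in losses.items():
    print(f"{n:>10}: {loss:.3f}")
# Each 10x in parameters multiplies the loss by 10**-alpha (about 0.794):
# improvement never stops, but the absolute gains keep shrinking.
```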

Changing the paradigm would require a change of perspective and going past deep learning which, given its tremendous success, would be a very risky strategy that would almost certainly hamper progress in the short term. As workarounds (which do not address the underlying problem), it may be wise to reduce models’ size during training. Three strategies may be employed to that end: dropout, pruning and quantization.


Dropout tries to make sure neurons are diverse enough inside the network, thereby maximizing the usefulness of each of them. To do that, a dropout layer is added between linear connections that randomly deactivates neurons during each forward pass through the neural network. This is done only during training (i.e. not during inference). By randomly deactivating neurons during training, one can force the network to learn with an ever-changing structure, thereby incentivizing all neurons to take part in the training. The code below shows how to use dropout in a PyTorch model definition:

class NeuralNetwork(torch.nn.Module):
    def __init__(self, inpsiz, hidensiz, numclases):
        super(NeuralNetwork, self).__init__()
        self.inputsiz = inpsiz
        self.dropout = torch.nn.Dropout(p=0.5)
        self.l1 = torch.nn.Linear(inpsiz, hidensiz)
        self.relu = torch.nn.ReLU()
        self.l2 = torch.nn.Linear(hidensiz, numclases)

    def forward(self, y):
        outp = self.l1(y)
        outp = self.relu(outp)
        outp = self.dropout(outp)
        outp = self.l2(outp)

        return outp

A dropout layer that randomly deactivates half of the layer’s neurons is defined on line 5 and used in the forward pass on line 13.


Pruning refers to dropping connections between neurons, thereby making the model slimmer. Pruning a neural network raises the question of which parts of it should be pruned. This can be decided by considering the magnitude of the neurons’ weights (because small weights may not contribute much to the overall result) or their relative importance to the model’s output as a whole.

In PyTorch, pruning based on the weights’ magnitude may be done with the ln_structured function:

import torch
import torch.nn.utils.prune

# An example model
class NeuralNetwork(torch.nn.Module):
    def __init__(self, inpsiz, hidensiz, numclases):
        super(NeuralNetwork, self).__init__()
        self.inputsiz = inpsiz
        self.l1 = torch.nn.Linear(inpsiz, hidensiz)
        self.relu = torch.nn.ReLU()
        self.l2 = torch.nn.Linear(hidensiz, numclases)

    def forward(self, y):
        outp = self.l1(y)
        outp = self.relu(outp)
        outp = self.l2(outp)

        return outp

model = NeuralNetwork(784, 100, 10)

torch.nn.utils.prune.ln_structured(model.l1, name="weight", amount=0.5, n=2, dim=0)

The last line is responsible for the pruning, where the first layer of the model is half-pruned according to the L2 norm of its weights.


Instead of dropping neurons, one may reduce their precision (i.e. the number of bytes used to store their weights), and thus the computing power needed to make use of them. This is called quantization. There exist three ways to quantize a model.

Dynamic quantization

Quantization may be done directly after the model is instantiated. In that case, the quantization scheme is chosen at runtime and applied immediately.

model = NeuralNetwork(784, 100, 10)
model_int8 = torch.quantization.quantize_dynamic(
    model, {"l1": torch.quantization.default_dynamic_qconfig}
)

Adjusted quantization

Quantization can be calibrated (i.e. the right algorithm to convert floating point numbers to less precise ones can be chosen) by using the data that is supposed to go through the model. This is done with a test dataset once the model has been trained:

class NeuralNetworkQuant(torch.nn.Module):
    def __init__(self, inpsiz, hidensiz, numclases):
        super(NeuralNetworkQuant, self).__init__()
        self.quant = torch.quantization.QuantStub()
        self.inputsiz = inpsiz
        self.l1 = torch.nn.Linear(inpsiz, hidensiz)
        self.relu = torch.nn.ReLU()
        self.l2 = torch.nn.Linear(hidensiz, numclases)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, y):
        outp = self.quant(y)
        outp = self.l1(outp)
        outp = self.relu(outp)
        outp = self.l2(outp)
        outp = self.dequant(outp)
        return outp

model = NeuralNetworkQuant(784, 100, 10)
# The default config quantizes to int8
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
model_fp32_prepared = torch.quantization.prepare(model)

testldr = torch.utils.data.DataLoader(testds, batch_size=1024, shuffle=True)
for idx, (imgs, lbls) in enumerate(testldr):
    imgs = imgs.reshape(-1, 28 * 28)
    model_fp32_prepared(imgs)

model_int8 = torch.quantization.convert(model_fp32_prepared)

How the model is supposed to be quantized and de-quantized is added to the model class on lines 4 and 9. Line 22 prepares the model for quantization according to the configuration of line 21. Lines 24-27 build a loader over a test dataset (defined elsewhere) and run it through the prepared model so that the quantization process can be calibrated. Once the calibration is done, the model is quantized to int8 on line 29.

Quantization At Training

Quantization At Training (QAT) refers to optimizing the quantization strategy during the training of the model, which allows the model to optimize its weights while being aware of the quantization:

model = NeuralNetworkQuant(784, 100, 10)
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
model_fp32_prepared = torch.quantization.prepare_qat(model)

# ... the usual training loop runs on model_fp32_prepared here ...

model_int8 = torch.quantization.convert(model_fp32_prepared)

This looks similar to the previous example, except that the training loop is done on the model_fp32_prepared model.

Can the trend towards ever bigger deep learning models be reverted? Research (e.g. Han et al., 2015; Howard et al., 2017) is pushing towards that goal, but efficiency needs to become a priority.

Thursday, 30 June 2022

Gradual Explorations of Filesystems, Paging and L4Re

A surprising three years have passed since my last article about my efforts to make a general-purpose filesystem accessible to programs running in the L4 (or L4Re) Runtime Environment. Some of that delay was due to a lack of enthusiasm about blogging for various reasons, much more was due to having much of my time occupied by full-time employment involving other technologies (Python and Django mostly, since you ask) that limited the amount of time and energy that could be spent focusing on finding my way around the intricacies of L4Re.

In fact, various other things I looked into in 2019 (or maybe 2018) also went somewhat unreported. I looked into trying to port the “user mode” (UX) variant of the Fiasco.OC microkernel to the MIPS architecture used by the MIPS Creator CI20. This would have allowed me to conveniently develop and test L4Re programs in the GNU/Linux environment on that hardware. I did gain some familiarity with the internals of that software, together with the Linux ptrace mechanism, making some progress but not actually getting to a usable conclusion. Recommendations to use QEMU instead led me to investigate the situation with KVM on MIPS, simply to try and get half-way reasonable performance: emulation is otherwise rather slow.

You wouldn’t think that running KVM on anything other than Intel/AMD or ARM architectures was possible if you only read the summary on the KVM project page or the Debian Wiki’s KVM page. In fact, KVM is supported on multiple architectures including MIPS, but the latest (and by now very old 3.18) “official” kernel for the CI20 turned out to be too old to support what I needed. Or at least, I tried to get it to work but even with all the necessary configuration to support “trap and emulate” on a CPU without virtualisation support, it seemed to encounter instructions it did not emulate. As the hot summer of 2019 (just like 2018) wound down, I switched back to using my main machine at the time: an ancient Pentium 4 system that I didn’t want heating the apartment; one that could run QEMU rather slowly, albeit faster than the CI20, but which gave me access to Fiasco.OC-UX once again.

Since then, the hard yards of upstreaming Linux kernel device support for the CI20 have largely been pursued by the ever-patient Nikolaus Schaller, vendor of the Letux 400 mini-notebook and hardware designer of the Pyra, and a newer kernel capable of running KVM satisfactorily might now be within reach. That is something to be investigated somewhere in the future.

Back to the Topic

In my last article on the topic of this article, I had noted that to take advantage of various features that L4Re offers, I would need to move on from providing a simple mechanism to access files through read and write operations, instead embracing the memory mapping paradigm that is pervasive in L4Re, adopting such techniques to expose file content to programs. This took us through a tour of dataspaces, mapping, pages, flexpages, virtual memory and so on. Ultimately, I had a few simple programs working that could still read and write to files, but they would be doing so via a region of memory where pages of this memory would be dynamically “mapped” – made available – and populated with file content. I even integrated the filesystem “client” library with the Newlib C library implementation, but that is another story.

Nothing is ever simple, though. As I stressed the test programs, introducing concurrent access to files, crashes would occur in the handling of the pages issued to the clients. Since I had ambitiously decided that programs accessing the same files would be able to share memory regions assigned to those files, with two or more programs being issued with access to the same memory pages if they happened to be accessing the same areas of the underlying file, I had set myself up for the accompanying punishment: concurrency errors! Despite the heroic help of l4-hackers mailing list regulars (Frank and Jean), I had to concede that a retreat, some additional planning, and then a new approach would be required. (If nothing else, I hope this article persuades some l4-hackers readers that their efforts in helping me are not entirely going to waste!)

Prototyping an Architecture

In some spare time a couple of years ago, I started sketching out what I needed to consider when implementing such an architecture. Perhaps bizarrely, given the nature of the problem, my instinct was to prototype such an architecture in Python, running as a normal program on my GNU/Linux system. Now, Python is not exactly celebrated for its concurrency support, given the attention its principal implementation, CPython, has often had for a lack of scalability. However, whether or not the Python implementation supports running code in separate threads simultaneously, or whether it merely allows code in threads to take turns running sequentially, the most important thing was that I could have code happily running along being interrupted at potentially inconvenient moments by some other code that could conceivably ruin everything.

Fortunately, Python has libraries for threading and provides abstractions like semaphores. Such facilities would be all that was needed to introduce concurrency control in the different program components, allowing the simulation of the mechanisms involved in acquiring memory pages, populating them, issuing them to clients, and revoking them. It may sound strange to even consider simulating memory pages in Python, which operates at another level entirely, and the issuing of pages via a simulated interprocess communication (IPC) mechanism might seem unnecessary and subject to inaccuracy, but I found it to be generally helpful in refining my approach and even deepening my understanding of concepts such as flexpages, which I had applied in a limited way previously, making me suspect that I had not adequately tested the limits of my understanding.

Naturally, L4Re development is probably never done in Python, so I then had the task of reworking my prototype in C++. Fortunately, this gave me the opportunity to acquaint myself with the more modern support in the C++ standard libraries for threading and concurrency, allowing me to adopt constructs such as mutexes, condition variables and lock guards. Some of this exercise was frustrating: C++ is, after all, a lower-level language that demands more attention to various mundane details than Python does. It did suggest potential improvements to Python’s standard library, however, although I don’t really pay any attention to Python core development any more, so unless someone else has sought to address such issues, I imagine that Python will gain even more in the way of vanity features while such genuine deficiencies and omissions remain unrecognised.

Transplanting the Prototype

Upon reintroducing this prototype functionality into L4Re, I decided to retain the existing separation of functionality into various libraries within the L4Re build system – ones for filesystem clients, servers, IPC – also making a more general memory abstractions library, but I ultimately put all these libraries within a single package. At some point, it is this package that I will be making available, and I think that it will be easier to evaluate with all the functionality in a single bundle. The highest priority was then to test the mechanisms employed by the prototype using the same concurrency stress test program, this originally being written in Python, then ported to C++, having been used in my GNU/Linux environment to loosely simulate the conditions under L4Re.

This stress testing exercise eventually ended up working well enough, but I did experience issues with resource limits within L4Re as well as some concurrency issues with capability management that I should probably investigate further. My test program opens a number of files in a number of threads and attempts to read various regions of these files over and over again. I found that I would run out of capability slots, these tracking the references to other components available to a task in L4Re, and since each open file descriptor or session would require a slot, as would each thread, I had to be careful not to exceed the default budget of such slots. Once again, with help from another l4-hackers participant (Philipp), I realised that I wasn’t releasing some of the slots in my own code, but I also learned that above a certain number of threads, open files, and so on, I would need to request more resources from the kernel. The concurrency issue with allocating individual capability slots remains unexplored, but since I already wrap the existing L4Re functionality in my own library, I just decided to guard the allocation functionality with semaphores.
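The semaphore guard described above can be sketched in Python, much in the spirit of the original prototype. This is a simplification with invented names, not the actual L4Re wrapper library:

```python
import threading

class SlotAllocator:
    """Guard a fixed budget of capability slots against concurrent
    over-allocation: acquire blocks once the budget is exhausted."""
    def __init__(self, budget):
        self._available = threading.Semaphore(budget)
        self._lock = threading.Lock()
        self._next = 0
        self._free = []

    def allocate(self):
        self._available.acquire()  # block if no slot is available
        with self._lock:
            if self._free:
                return self._free.pop()
            slot = self._next
            self._next += 1
            return slot

    def release(self, slot):
        with self._lock:
            self._free.append(slot)
        self._available.release()  # wake a blocked allocator, if any

alloc = SlotAllocator(budget=2)
a = alloc.allocate()
b = alloc.allocate()
alloc.release(a)       # without this, a third allocate() would block
c = alloc.allocate()   # reuses the released slot
```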

With some confidence in the test program, which only accesses simulated files with computed file content, I then sought to restore functionality accessing genuine files, these being the read-only files already exposed within L4Re along with ext2-resident files previously supported by my efforts. The former kind of file was already simulated in the prototype in the form of “host” files, although L4Re unhelpfully gives an arbitrary (counter) value for the inode identifiers for each file, so some adjustments were required. Meanwhile, introducing support for the latter kind of file led me to update the bundled version of libext2fs I am using, refine various techniques for adapting the upstream code, introduce more functionality to help use libext2fs from my own code (since libext2fs can be rather low-level), and to consider the broader filesystem support architecture.

Here is the general outline of the paging mechanism supporting access to filesystem content:

Paging data structures

The data structures employed to provide filesystem content to programs.

It is rather simplistic, and I have practically ignored complicated page replacement algorithms. In practice, pages are obtained for use when a page fault occurs in a program requesting a particular region of file content, and fulfilment of this request will move a page to the end of a page queue. Any independent requests for pages providing a particular file region will also reset the page’s position in the queue. However, since successful accesses to pages will not cause programs to repeatedly request those pages, eventually those pages will move to the front of the queue and be reclaimed.

Without any insight into how much programs are accessing a page successfully, relying purely on the frequency of page faults, I imagine that various approaches can be adopted to try and assess the frequency of accesses, extrapolating from the page fault frequency and seeking to “bias” or “weight” pages with a high frequency of requests so that they move through the queue more slowly or, indeed, move through a queue that provides pages less often. But all of this is largely a distraction from getting a basic mechanism working, so I haven’t directed any more time to it than I have just now writing this paragraph!
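The queue behaviour described above can be sketched in Python (illustrative only, in the manner of the original prototype rather than the actual C++ implementation): a page fault moves the page to the back of the queue, and reclamation takes the least-recently-faulted page from the front.

```python
from collections import OrderedDict

class PageQueue:
    def __init__(self):
        self._queue = OrderedDict()  # (file, region) -> page

    def fault(self, key, make_page):
        if key in self._queue:
            self._queue.move_to_end(key)  # a repeated fault resets position
        else:
            self._queue[key] = make_page()
        return self._queue[key]

    def reclaim(self):
        # Pages not faulted on recently drift to the front and are reclaimed
        key, page = self._queue.popitem(last=False)
        return key, page

q = PageQueue()
q.fault(("f", 0), lambda: "page-a")
q.fault(("f", 4096), lambda: "page-b")
q.fault(("f", 0), lambda: "page-a")  # refault: moves to the back
key, _ = q.reclaim()                 # reclaims ("f", 4096)
```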

Files and File Sessions

While I am quite sure that I ended up arriving at a rather less than optimal approach for the paging architecture, I found that the broader filesystem architecture also needed to be refined further as I restored the functionality that I had previously attempted to provide. When trying to support shared access to file content, it is appropriate to implement some kind of registry of open files, these retaining references to objects that are providing access to each of the open files. Previously, this had been done in a fairly simple fashion, merely providing a thread-safe map or dictionary yielding the appropriate file-related objects when present, otherwise permitting new objects to be registered.

Again, concurrency issues needed closer consideration. When one program requests access to a file, it is highly undesirable for another program to interfere during the process of finding the file, if it exists already, or creating the file, if it does not. Therefore, there must be some kind of “gatekeeper” for the file, enforcing sequential access to filesystem operations involving it and ensuring that any preparatory activities being undertaken to make a file available, or to remove a file, are not interrupted or interfered with. I came up with an architecture looking like this, with a resource registry being the gatekeeper, resources supporting file sessions, providers representing open files, and accessors transferring data to and from files:

Filesystem access data structures

The data structures employed to provide access to the underlying filesystem objects.

I became particularly concerned with the behaviour of the system around file deletion. On Unix systems, it is fairly well understood that one can “unlink” an existing file and keep accessing it, as long as a file descriptor has been retained to access that file. Opening a file with the same name as the unlinked file under such circumstances will create a new file, provided that the appropriate options are indicated, or otherwise raise a non-existent file error, and yet the old file will still exist somewhere. Any new file with the same name can be unlinked and retained similarly, and so on, building up a catalogue of old files that ultimately will be removed when the active file descriptors are closed.

I thought I might have to introduce general mechanisms to preserve these Unix semantics, but the way the ext2 filesystem works largely encodes them to some extent in its own abstractions. In fact, various questions that I had about Unix filesystem semantics and how libext2fs might behave were answered through the development of various test programs, some being normal programs accessing files in my GNU/Linux environment, others being programs that would exercise libext2fs in that environment. Having some confidence that libext2fs would do the expected thing leads me to believe that I can rely on it at least for some of the desired semantics of the eventual system.

The only thing I really needed to consider was how the request to remove a file when that file was still open would affect the “provider” abstraction permitting continued access to the file contents. Here, I decided to support a kind of deferred removal: if a program requested the removal of a file, the provider and the file itself would be marked for removal upon the final closure of the file, but the provider for the file would no longer be available for new usage, and the file would be unlinked; programs already accessing the file would continue to operate, but programs opening a file of the same name would obtain a new file and a new provider.

The key to this working satisfactorily is that libext2fs will assign a new inode identifier when opening a new file, whereas an unlinked file retains its inode identifier. Since providers are indexed by inode identifier, and since libext2fs translates the path of a file to the inode identifier associated with the file in its directory entry, attempts to access a recreated file will always yield the new inode identifier and thus the new file provider.
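The deferred-removal scheme can be sketched in Python. All names here are invented for illustration; the real system works on ext2 inode identifiers via libext2fs rather than a counter:

```python
import itertools

class Registry:
    """Providers indexed by inode: removing an open file unlinks the name
    immediately but frees the provider only on the final close."""
    def __init__(self):
        self._inodes = itertools.count(1)
        self._paths = {}       # path -> inode
        self._providers = {}   # inode -> {"users": n, "removed": bool}

    def open(self, path):
        inode = self._paths.get(path)
        if inode is None:
            inode = next(self._inodes)
            self._paths[path] = inode
            self._providers[inode] = {"users": 0, "removed": False}
        self._providers[inode]["users"] += 1
        return inode

    def remove(self, path):
        inode = self._paths.pop(path)           # unlink the name now...
        self._providers[inode]["removed"] = True

    def close(self, inode):
        p = self._providers[inode]
        p["users"] -= 1
        if p["removed"] and p["users"] == 0:    # ...free on the last close
            del self._providers[inode]

reg = Registry()
old = reg.open("/data/log")
reg.remove("/data/log")
new = reg.open("/data/log")  # a recreated file: new inode, new provider
```

Because the old provider stays registered under the old inode until its last user closes it, existing readers keep working while new opens see only the fresh file.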

Pipes, Listings and Notifications

In the previous implementation of this filesystem functionality, I had explored some other aspects of accessing a filesystem. One of these was the ability to obtain directory listings, usually exposed in Unix operating systems by the opendir and readdir functions. The previous implementation sought to expose such listings as files, this in an attempt to leverage the paging mechanisms already built, but the way that libext2fs provides such listing information is not particularly compatible with the random-access file model: instead, it provides something more like an iterator that involves the repeated invocation of a callback function, successively supplying each directory entry for the callback function to process.

For this new implementation, I decided to expose directory listings via pipes, with a server thread accessing the filesystem and, in that callback function, writing directory entries to one end of a pipe, and with a client thread reading from the other end of the pipe. Of course, this meant that I needed to have an implementation of pipes! In my previous efforts, I did implement pipes as a kind of investigation, and one can certainly make this very complicated indeed, but I deliberately kept this very simple in this current round of development, merely having a couple of memory regions, one being used by the reader and one being used by the writer, with each party transferring the regions to the other (and blocking) if they find themselves respectively running out of content or running out of space.

One necessary element in making pipes work is that of coordinating the reading and writing parties involved. If we restrict ourselves to a pipe that will not expand (or even not expand indefinitely) to accommodate more data, at some point a writer may fill the pipe and may then need to block, waiting for more space to become available again. Meanwhile, a reader may find itself encountering an empty pipe, perhaps after having read all available data, and it may need to block and wait for more content to become available again. Readers and writers both need a way of waiting efficiently and requesting a notification for when they might start to interact with the pipe again.

To support such efficient blocking, I introduced a notifier abstraction for use in programs that could be instantiated and a reference to such an instance (in the form of a capability) presented in a subscription request to the pipe endpoint. Upon invoking the wait operation on a notifier, the notifier will cause the program (or a thread within a program) to wait for the delivery of a notification from the pipe, this being efficient because the thread will then sleep, only to awaken if a message is sent to it. Here is how pipes make use of notifiers to support blocking reads and writes:

Communication via pipes employing notifications

The use of notifications when programs communicate via a pipe.

A certain amount of plumbing is required behind the scenes to support notifications. Since programs accessing files will have their own sessions, there needs to be a coordinating object representing each file itself, this being able to propagate notification events to the users of the file concerned. Fortunately, I introduced the notion of a “provider” object in my architecture that can act in such a capacity. When an event occurs, the provider will send a notification to each of the relevant notifier endpoints, also providing some indication of the kind of event occurring. Previously, I had employed L4Re’s IRQ (interrupt request) objects as a means of delivering notifications to programs, but these appear to be very limited and do not allow additional information to be conveyed, as far as I can tell.

One objective I had with a client-side notifier was to support waiting for events from multiple files or streams collectively, instead of requiring a program to have threads that wait for events from each file individually, thus attempting to support the functionality provided by Unix functions like select and poll. Such functionality relies on additional information indicating the kind of event that has occurred. The need to wait for events from numerous sources also inverts the roles of client and server, with a notifier effectively acting like a server but residing in a client program, waiting for messages from its clients, these typically residing in the filesystem server framework.
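The collective waiting arrangement can be sketched like this, again as an illustrative Python model rather than the actual IPC-based implementation: several provider objects post events (with their kind) to one notifier, and a single client thread waits on it, in the spirit of select or poll.

```python
import threading
from collections import deque

class Notifier:
    # Conceptual sketch: a client-side object that several file or pipe
    # endpoints can post events to; the client waits on it collectively,
    # rather than dedicating one blocked thread to each file.

    def __init__(self):
        self.cond = threading.Condition()
        self.events = deque()   # queued (source, kind) pairs

    def notify(self, source, kind):
        # Called by server-side "provider" objects when an event occurs,
        # conveying the kind of event alongside the notification.
        with self.cond:
            self.events.append((source, kind))
            self.cond.notify()

    def wait(self, timeout=None):
        # Sleep until some source reports an event; return (source, kind),
        # or None if the optional timeout elapses first.
        with self.cond:
            while not self.events:
                if not self.cond.wait(timeout=timeout):
                    return None
            return self.events.popleft()
```

Note the inversion of roles mentioned above: the notifier passively receives messages, like a server, even though it lives in the client program.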

Testing and Layering

Previously, I found that it was all very well developing functionality, but only through a commitment to testing it would I discover its flaws. When having to develop functionality at a number of levels in a system at the same time, testing generally starts off in a fairly limited fashion. Initially, I reintroduced a “block” server that merely provides access to a contiguous block of data, this effectively simulating the storage device access support that will hopefully be written at some point, and although genuine filesystem support utilises this block server, it is reassuring to be able to know whether it is behaving correctly. Meanwhile, for programs to access servers, they must send requests to those servers, assisted by a client library that provides support for such interprocess communication at a fairly low level. Thus, initial testing focused on using this low-level support to access the block server and verify that it provides access to the expected data.

On top of the lowest-level library functionality is a more usable level of “client” functions that automates the housekeeping that needs to be done so that programs may expect an experience familiar to that provided by traditional C library functionality. Again, testing of file operations at that level helped to assess whether library and server functionality was behaving in line with expectations. With some confidence, the previously-developed ext2 filesystem functionality was reintroduced and updated. By layering the ext2 filesystem server on top of the block server, the testing activity is actually elevated to another level: libext2fs absolutely depends on properly functioning access to the block device; otherwise, it will not be able to perform even the simplest operations on files.

When acquainting myself with libext2fs, I developed a convenience library called libe2access that encapsulates some of the higher-level operations, and I made a tool called e2access that is able to populate a filesystem image from a normal program. This tool, somewhat reminiscent of the mtools suite that was popular at one time to allow normal users to access floppy disks on a system, is actually a fairly useful thing to have, and I remain surprised that there isn’t anything like it in common use. In any case, e2access allows me to populate images for use in L4Re, but I then thought that an equivalent to it would also be useful in L4Re for testing purposes. Consequently, a tool called fsaccess was created, but unlike e2access it does not use libe2access or libext2fs directly: instead, it uses the “client” filesystem library, exercising filesystem access via the IPC system and filesystem server architecture.

Ultimately, testing will be done completely normally using C library functions, these wrapping the “client” library. At that point, there will be no distinction between programs running within L4Re and within Unix. To an extent, L4Re already supports normal Unix-like programs using C library functions, this being particularly helpful when developing all this functionality, but of course it doesn’t support “proper” filesystems or Unix-like functionality in a particularly broad way, with various common C library or POSIX functions being stubs that do nothing. Of course, all this effort started out precisely to remedy these shortcomings.

Paging, Loading and Running Programs

Beyond explicitly performed file access, the next level of mutually-reinforcing testing and development came about through the simple desire to have a more predictable testing environment. In wanting to be able to perform tests sequentially, I needed control over the initiation of programs and to be able to rely on their completion before initiating successive programs. This may well be possible within L4Re’s Lua-based scripting environment, but I generally find the details to be rather thin on the ground. Besides, the problem provided some motivation to explore and understand the way that programs are launched in the environment.

There is some summary-level information about how programs (or tasks) are started in L4Re – for example, pages 41 onwards of “Memory, IPC, and L4Re” – but not much in the way of substantial documentation otherwise. Digging into the L4Re libraries yielded a confusing array of classes and apparent interactions which presumably make sense to anyone who is already very familiar with the specific approach being taken, as well as the general techniques being applied, but it seems difficult for outsiders to distinguish between the specifics and the generalities.

Nevertheless, some ideas were gained from looking at the code for various L4Re components including Moe (the root task), Ned (the init program), the loader and utilities libraries, and the oddly-named l4re_kernel component, this actually providing the l4re program which itself hosts actual programs by providing the memory management functionality necessary for those programs to work. In fact, we will eventually be looking at a solution that replicates that l4re program.

A substantial amount of investigation and testing took place to explore the topic. There were a number of steps required to initialise a new program:

  1. Open the program executable file and obtain details of the different program segments and the program’s start address, this requiring some knowledge of ELF binaries.
  2. Initialise a stack for the program containing the arguments to be presented to it, plus details of the program’s environment. The environment is of particular concern.
  3. Create a task for the program together with a thread to begin execution at the start address, setting the stack pointer to the appropriate place in the memory where the stack will be made available.
  4. Initialise a control block for the thread.
  5. Start the thread. This should immediately generate a page fault because the memory at the start address is not yet available within the task.
  6. Service page faults for the program, providing pages for the program code – thus resolving that initial page fault – as well as for the stack and other regions of memory.

Naturally, each of these steps entails a lot more work than is readily apparent. The last step, in particular, is something of an understatement in terms of what is required: it encompasses the entire mechanism by which demand paging of the program is to be achieved.

L4Re provides some support for inspecting ELF binaries in its utilities library, but I found the ELF specification to be very useful in determining the exact purposes of various program header fields. For more practical guidance, the OSDev wiki page about ELF provides an explanation of the program loading process, along with how the different program segments are to be applied in the initialisation of a new program or process. With this information to hand, together with similar descriptions in the context of L4Re, it became possible to envisage how the address space of a new program might be set up, determining which various parts of the program file might be installed and where they might be found. I wrote some test programs, making some use of the structures in the utilities library, but wrote my own functions to extract the segment details from an ELF binary.
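The Elf64_Phdr layout defined by the ELF specification can be decoded with ordinary structure unpacking. The sketch below is in Python purely for illustration (the actual work used C and the L4Re utilities library); it assumes a little-endian ELF64 binary and that the program header table has already been read from the file at the offset given by the ELF header's e_phoff field.

```python
import struct

# Field layout of Elf64_Phdr from the ELF specification (little-endian):
# p_type, p_flags, p_offset, p_vaddr, p_paddr, p_filesz, p_memsz, p_align.
PHDR_FORMAT = "<IIQQQQQQ"
PHDR_SIZE = struct.calcsize(PHDR_FORMAT)   # 56 bytes
PT_LOAD = 1

def parse_phdr(data, offset=0):
    (p_type, p_flags, p_offset, p_vaddr, p_paddr,
     p_filesz, p_memsz, p_align) = struct.unpack_from(PHDR_FORMAT, data, offset)
    return {"type": p_type, "flags": p_flags, "offset": p_offset,
            "vaddr": p_vaddr, "paddr": p_paddr, "filesz": p_filesz,
            "memsz": p_memsz, "align": p_align}

def load_segments(phdrs_blob, count):
    # Keep only PT_LOAD segments: these define the regions that must be
    # installed in the new task's address space.
    segments = []
    for i in range(count):
        phdr = parse_phdr(phdrs_blob, i * PHDR_SIZE)
        if phdr["type"] == PT_LOAD:
            segments.append(phdr)
    return segments
```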

I found a couple of helpful resources describing the initialisation of the program stack: “Linux x86 Program Start Up” and “How statically linked programs run on Linux”. These mainly demystify the code that is run when a program starts up, setting up a program before the user’s main function is called, giving a degree of guidance about the work required to set up the stack so that such code may perform as expected. I was, of course, also able to study what the various existing L4Re components were doing in this respect, although I found the stack abstractions used to be idiomatic C/C++ bordering on esoteric. Nevertheless, the exercise involves obtaining some memory that can eventually be handed over to the new program, populating that memory, and then making it available to the new program, either immediately or on request.
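The stack layout those resources describe can be sketched as follows. This is a simplified, illustrative Python model of the System V-style initial stack (string data at the top; below it a table of argc, the argv pointers, a null terminator, the envp pointers and another null terminator); the auxiliary vector is omitted, and the helper name is hypothetical.

```python
import struct

def build_stack(stack_top, args, env):
    # Lay out the argument and environment strings at the top of the
    # stack region, then build the pointer table beneath them.
    strings = b""
    offsets = []
    for s in args + env:
        offsets.append(len(strings))
        strings += s.encode() + b"\x00"
    strings_base = stack_top - len(strings)
    ptrs = [strings_base + off for off in offsets]
    argv, envp = ptrs[:len(args)], ptrs[len(args):]
    # argc, argv..., NULL, envp..., NULL (auxv entries omitted here).
    words = [len(args)] + argv + [0] + envp + [0]
    table = b"".join(struct.pack("<Q", w) for w in words)
    sp = strings_base - len(table)
    sp &= ~0xF           # the ABI requires 16-byte stack alignment
    return sp, words
```

The start-up code in the C runtime then reads argc and the pointer arrays from the stack pointer upwards before calling main.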

Although I had already accumulated plenty of experience passing object capabilities around in L4Re, as well as having managed to map memory between tasks by sending the appropriate message items, the exact methods of setting up another task with memory and capabilities had remained mysterious to me, and so began another round of experimentation. What I wanted to do was to take a fairly easy route to begin with: create a task, populate some memory regions containing the program code and stack, transfer these things to the new task (using the l4_task_map function), and then start the thread to run the program, just to see what happened. Transferring capabilities was fairly easily achieved, and the L4Re libraries and frameworks do employ the equivalent of l4_task_map in places like the Remote_app_model class found in libloader, albeit obfuscated by the use of the corresponding C++ abstractions.

Frustratingly, this simple approach did not seem to work for the memory, and I could find only very few cases of anyone trying to use l4_task_map (or its equivalent C++ incantations) to transfer memory. Despite the memory apparently being transferred to the new task, the thread would immediately cause a page fault. Eventually, a page fault is what we want, but that would only occur because no memory would be made available initially, precisely because we would be implementing a demand paging solution. In the case of using l4_task_map to set up program memory, there should be no new “demand” for pages of such memory, this demand having been satisfied in advance. Nevertheless, I decided to try and get a page fault handler to supply flexpages to resolve these faults, also without success.

Having checked and double-checked my efforts, an enquiry on the l4-hackers list yielded the observation that the memory I had reserved and populated had not been configured as “executable”, for use by code in running programs. And indeed, since I had relied on the plain posix_memalign function to allocate that memory, it wasn’t set up for such usage. So, I changed my memory allocation strategy to permit the allocation of appropriately executable memory, and fortunately the problem was solved. Further small steps were then taken. I sought to introduce a region mapper that would attempt to satisfy requests for memory regions occurring in the new program, these occurring because a program starting up in L4Re will perform some setting up activities of its own. These new memory regions would be recognised by the page fault handler, with flexpages supplied to resolve page faults involving those regions. Eventually, it became possible to run a simple, statically-linked program in its own task.

Supporting program loading with an external page fault handler

When loading and running a new program, an external page fault handler makes sure that accesses to memory are supported by memory regions that may be populated with file content.

Up to this point, the page fault handler had been external to the new task and had been supplying memory pages from its own memory regions. Requests for data from the program file were being satisfied by accessing the appropriate region of the file, this bringing in the data using the file’s paging mechanism, and then supplying a flexpage for that part of memory to the program running in the new task. This particular approach compels the task containing the page fault handler to have a memory region dedicated to the file. However, the more elegant solution involves having a page fault handler communicating directly with the file’s pager component which will itself supply flexpages to map the requested memory pages into the new task. And to be done most elegantly, the page fault handler needs to be resident in the same task as the actual program.

Putting the page fault handler and the actual program in the same task demanded some improvements in the way I was setting up tasks and threads, providing capabilities to them, and so on. Separate stacks need to be provided for the handler and the program, and these will run in different threads. Moving the page fault handler into the new task is all very well, but we still need to be able to handle page faults that the “internal” handler might cause, so this requires us to retain an “external” handler. Thus, the configurations of the handler and the program are slightly different.

Another tricky aspect of this arrangement is how the program is configured to send its page faults to the handler running alongside it – the internal handler – instead of the one servicing the handler itself. This requires an IPC gate to be created for the internal handler, presented to it via its configuration, and then the handler will bind to this IPC gate when it starts up. The program may then start up using a reference to this IPC gate capability as its “pager” or page fault handler. You would be forgiven for thinking that all of this can be quite difficult to orchestrate correctly!

Configuring the communication between program and page fault handler

An IPC gate must be created and presented to the page fault handler for it to bind to before it is presented to the program as its “pager”.

Although I had previously been sending flexpages in messages to satisfy map requests, the other side of such transactions had not been investigated. Senders of map requests will specify a “receive window” to localise the placement of flexpages returned from such requests, this being an intrinsic part of the flexpage concept. Here, some aspects of the IPC system became more prominent and I needed to adjust the code generated by my interface description language tool which had mostly ignored the use of message buffer registers, employing them only to control the reception of object capabilities.

More testing was required to ensure that I was successfully able to request the mapping of memory in a particular region and that the supplied memory did indeed get mapped into the appropriate place. With that established, I was then able to modify the handler deployed to the task. Since the flexpage returned by the dataspace (or resource) providing access to file content effectively maps the memory into the receiving task, the page fault handler does not need to explicitly return a valid flexpage: the mapping has already been done. The semantics here were not readily apparent, but this approach appears to work correctly.

The use of an internal page fault handler with a new program

An internal page fault handler satisfies accesses to memory from the program running in the same task, providing it with access to memory regions that may be populated with file content.

One other detail that proved to be important was that of mapping file content to memory regions so that they would not overlap somehow and prevent the correct region from being used to satisfy page faults. Consider the following regions of the test executable file described by the readelf utility (with the -l option):

  Type           Offset             VirtAddr           PhysAddr
                 FileSiz            MemSiz              Flags  Align
  LOAD           0x0000000000000000 0x0000000001000000 0x0000000001000000
                 0x00000000000281a6 0x00000000000281a6  R E    0x1000
  LOAD           0x0000000000028360 0x0000000001029360 0x0000000001029360
                 0x0000000000002058 0x0000000000008068  RW     0x1000

Here, we need to put the first region providing the program code at a virtual address of 0x1000000, having a size of at least 0x281a6, populated with exactly that amount of content from the file. Meanwhile, we need to put the second region at address 0x1029360, having a size of 0x8068, but only filled with 0x2058 bytes of data. Both regions need to be aligned to addresses that are multiples of 0x1000, but their contents must be available at the stated locations. Such considerations brought up two apparently necessary enhancements to the provision of file content: the masking of content so that undefined areas of each region are populated with zero bytes, this being important in the case of the partially filled data region; the ability to support writes to a region without those writes being propagated to the original file.
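The arithmetic can be made explicit. The sketch below (Python, for illustration) applies the stated alignment to the two regions from the readelf output above; the trailing difference between MemSiz and FileSiz of the second region is the portion that must be zero-filled.

```python
PAGE = 0x1000

def page_align_region(vaddr, filesz, memsz):
    # Compute the page-aligned extent of a segment and how many
    # trailing bytes must be populated with zero bytes (the BSS part).
    start = vaddr & ~(PAGE - 1)
    end = (vaddr + memsz + PAGE - 1) & ~(PAGE - 1)
    zero_fill = memsz - filesz
    return start, end, zero_fill

# Values taken from the readelf output above:
code = page_align_region(0x1000000, 0x281a6, 0x281a6)
data = page_align_region(0x1029360, 0x2058, 0x8068)
```

For the data region this yields 0x6010 bytes of zero fill beyond the 0x2058 bytes of file content.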

The alignment details help to avoid the overlapping of regions, and the matter of populating the regions can be managed in a variety of ways. I found that since file content was already being padded at the extent of a file, I could introduce a variation of the page mapper already used to manage the population of memory pages that would perform such padding at the limits of regions defined within files. For read-only file regions, such a “masked” page mapper would issue a single read-only page containing only zero bytes for any part of a file completely beyond the limits of such regions, thus avoiding the allocation of lots of identical pages. For writable regions that are not to be committed to the actual files, a “copied” page mapper abstraction was introduced, this providing copy-on-write functionality where write accesses cause new memory pages to be allocated and used to retain the modified data.
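The copy-on-write behaviour of such a “copied” page mapper can be sketched as follows. This is again an illustrative Python model with hypothetical names, not the actual abstraction: reads are served from the underlying read-only pages (or a shared zero page beyond the file) until the first write, which allocates a private copy.

```python
PAGE = 4096

class CopiedPageMapper:
    # Sketch of copy-on-write for a writable region that must not be
    # committed back to the original file.

    def __init__(self, backing):
        self.backing = backing      # page number -> bytes (read-only file pages)
        self.private = {}           # page number -> bytearray (private copies)

    def read_page(self, n):
        if n in self.private:
            return bytes(self.private[n])
        # Pages beyond the file content are served as zero pages.
        return self.backing.get(n, bytes(PAGE))

    def write_page(self, n, offset, data):
        if n not in self.private:
            # First write to this page: take a private, mutable copy,
            # leaving the original file pages untouched.
            self.private[n] = bytearray(self.read_page(n))
        self.private[n][offset:offset + len(data)] = data
```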

Some packaging up of the functionality into library routines and abstractions has been undertaken, although more of that still needs to be done. I haven’t yet looked into support for dynamic library loading, nor am I handling any need to extend the program stack when that is necessary, amongst other things. I also need to make the process of creating tasks as simple as a function call, and probably to expose that process via IPC in order to have a kind of process server. And I still need to get back to addressing the lack of convenient support for the sequential testing of functionality.

But I hope that much of the hard work has now already been done. Then again, I often find myself climbing one particular part of this mountain, thinking that the next part of the ascent will be easier, only to find myself confronted with another long and demanding stretch that brings me only marginally closer to the top! This article is part of a broader consolidation process, along with writing some documentation, and this process will continue with the packaging of this work for the historical record if nothing else.

Conclusions and Reflections

All of this has very much been a learning exercise covering everything from the nuts and bolts of L4Re, with its functions and abstractions, through the design of a component architecture to support familiar, intuitive but hard-to-define filesystem functionality, this requiring a deeper understanding of Unix filesystem behaviour, all the while considering issues of concurrency and resource management that are not necessarily trivial. With so much going on at so many levels, progress can be slow and frustrating. I see that similar topics and exercises are pursued in some university courses, and I am sure that these courses produce highly educated people who are well equipped to go out into the broader world, developing systems like these using far less effort than I seem to be applying.

That leads to the usual question of whether such systems are worth developing when one can just “use Linux” or adopt something already under development and aimed at a particular audience. As I note above, maybe people are routinely developing such systems for proprietary use and don’t see any merit in doing the same thing openly. The problem with such attitudes is that experience with the development of such systems is then not broadly cultivated, the associated expertise and the corresponding benefits of developing and deploying such systems are not proliferated, and the average user of technology only gets benefits from such systems in a limited sense, if they even encounter them at all, and then only for a limited period of time, most likely, before the products incorporating such technologies wear out or become obsolete.

In other words, it is all very well developing proprietary systems and celebrating achievements made decades ago, but having reviewed decades of computing history, it is evident to me that achievements that are not shared will need to be replicated over and over again. That such replication is not cutting-edge development or, to use the odious term prevalent in academia, “novel” is not an indictment of those seeking to replicate past glories: it is an indictment of the priorities of those who commercialised them on every prior occasion. As mundane as the efforts described in this article may be, I would hope that by describing them and the often frustrating journey involved in pursuing them, people may be motivated to explore the development of such systems and that techniques that would otherwise be kept as commercial secrets or solutions to assessment exercises might hopefully be brought to a broader audience.

Sunday, 26 June 2022

FSFE Information stand at Veganmania MQ 2022

FSFE information stall on Veganmania MQ 2022

From 3 to 6 June 2022, the Veganmania street festival took place at the Museumsquartier in Vienna. Despite not happening for two years due to the Corona pandemic, it has over the years developed into the biggest vegan street event in Europe, with tens of thousands of visitors every day. Of course there were plenty of food stands with all kinds of climate- and animal-friendly delicious meals, but the festival also had many stands selling other goods. In addition, many NGO tents were there to inform visitors about important issues and their work.

As has been tradition for many years, the local volunteer group ran an FSFE information stand from Friday noon until Monday night. It was exhausting because only two volunteers staffed the stand, but we both stayed there the whole time, and the interest of so many people confirmed once more how well we have optimised our assortment of information material without losing the ability to bring everything at once using just a bicycle.

The front of our stall was covered with a big FSFE banner, while the sides were used for posters explaining the four freedoms and GnuPG email encryption. (We will very soon need to replace our old posters with more durable, water-resistant paper, since the old ones have become rather worn and no longer look very sleek with all the tape pieces holding them together.) In addition, we use a small poster stand we built ourselves from just two wooden boards and a hinge, made of leftover material from a DIY centre. Unfortunately, this time we didn’t have any wall behind us where we would have been allowed to put up posters or banners.

Our usual leaflet stack has also proven to be very handy. Since most people talking to us are not yet familiar with free software, the most important piece is probably our quick overview of 10 different free software distributions. It is just printed in black and white in a copy shop, but on thick, striking orange paper. This way of producing it is rather important, because it is very easy and not too costly to quickly print more if we need to. It also allows us to adapt it often to new developments, because we don’t keep a big stack that might become outdated. Experience also shows that the generous layout, with enough border space to write ad-hoc or very personalised links onto the matte paper, comes in handy in almost all conversations. The thick paper it is printed on also gives it a much more valuable feel.

Less tech-savvy people find a good first introduction in our local version of the Freedom leaflet, which is basically RMS’s book Free Software, Free Society distilled into a leaflet. It combines a basic conceptual introduction, using tools as a comparison to software, with some practical internet links to things like privacy-friendly search engines and a searchable free software catalogue.

Our leaflets Free Your Android and the one on GPG email encryption attract much interest too. And of course many people like taking the There is no cloud … just other people’s computers stickers and some other creative stickers with them. Our beautiful leaflet on free games attracts people too, but this time, after the event, someone reported back that our link had led him to what seemed to be a Japanese porn page. After some back and forth we discovered he had typed a capital O instead of a zero in the link: we have used this leaflet for years, and he was the first to report that the link didn’t work for him. Unfortunately we still have many of those games leaflets. I suspect in future we should point out that the link contains a zero and not a capital letter.

We are also considering putting together a more detailed walk-through for installing free software on a computer, and we often encouraged people to check out different distributions by visiting

Normally we don’t even have time to get something to eat, because we usually talk to people until after all the other stands have closed up. But since we do not have a tent, on two evenings we needed to protect our material from the storm in the night. So we closed up an hour early (about 9pm) and could still get some delicious snacks.

It has been a very busy and productive information stand with lots of constructive talks, and we are looking forward to the next information stall at the Veganmania in August on the Danube Island. Hopefully we will have managed to renew our posters and information material by then.

Recent Readings of Ada & Zangemann

In June, I did several readings of "Ada & Zangemann - A tale of software, skateboards, and raspberry ice cream" in German.

The first one was on 2 June at the public library in Cologne Porz, which also acts as a makerspace. There I read the book to visitors of the public library, resulting in quite a diverse audience in terms of age and technical background.

At this reading, I also met for the first time the people from O'Reilly with whom I have worked over the last months: my editor Ariane Hesse and Corina Pahrmann, who is doing the marketing for the book. So I had the opportunity to thank the two of them in person for all the great work they did to publish and market the book.

Here is the nice view while preparing for the reading next to the library at the Rhine.

The book "Ada & Zangemann" outside the public library in Cologne Porz before a reading.

On 9 June, I did a reading at the re:publica 22 conference in Berlin. The reading took place in the fablab bus, a mobile hackerspace meant to reach young people in areas that do not have local hackerspaces. Shortly before the start of the reading, Volker Wissing, the German Federal Minister for Digital Affairs, visited the bus. He had a quick chat with the children, and we briefly talked with him about the book, tinkering, and the self-determined use of software. As he was interested in the book, I gave him one of the copies I had with me.

After the reading there was a longer discussion with the children and adults about ideas for another book on the topic.

Volker Wissing, German Federal Minister for Digital Affairs and Matthias Kirschner with the book "Ada&Zangemann" before the reading at re:publica in the fablab bus

Finally, on 24 June, Germany's federal Digital Day, the city of Offenburg invited me to do a reading of the book. Initially it was planned to read the book to two school classes, but over time more and more teachers contacted the organisers and expressed interest in participating in the reading with their classes as well. In the end, the reading took place in the largest cinema room in the city of Offenburg, with over 150 third-graders from different schools in Offenburg, and Chief Mayor Marco Steffens gave the introduction.

As with the other readings I have done so far, the part I enjoy the most is the questions and comments from the children. Although I also enjoy the discussions with grown-ups, it is so encouraging to see how children think about the topics of the book, and what ideas and plans they have. In the end the organisers had to stop the discussion because the children had to go back to school. I was also amazed at how attentively the children followed the story for the 35 minutes of the reading before we had the discussion.

The room before the reading: it can fit 365 people, which offered enough space to keep some distance between the different classes.

The largest cinema room in Offenburg before the reading of "Ada&Zangemann" to over 150 3rd graders from Offenburg schools.

Some stickers and bookmarks I brought with me in front of the room:

Ada and Zangemann stickers, FSFE Free Software stickers, and Ada&Zangemann bookmarks on a desk in the cinema entrance area before the reading of the book in Offenburg

Chief Mayor Marco Steffens gave an introduction before my reading:

Chief Mayor Marco Steffens and Matthias Kirschner in the front of the largest cinema room in Offenburg before the reading of "Ada&Zangemann" in front of over 150 3rd graders from Offenburg schools.

Myself at the reading, while showing one of my favourite illustrations by Sandra Brandstätter:

Matthias Kirschner reading "Ada&Zangemann" to over 150 3rd graders from Offenburg in the largest cinema room in Offenburg

After the reading I joined the people from the local hackerspace Section 77 at their booth for the Digital Day and had further interesting discussions with them and other people attending the event.

I am already looking forward to future readings of the book and discussions with people, young and old, about the topics of the book.

Sunday, 19 June 2022

Reproducible Builds – Telling of a Debugging Story

Reproducibility is an important tool to empower users. Why would a user care about that? Let me elaborate.

For a piece of software to be reproducible means that everyone with access to the software's source code is able to build its binary form (e.g. the executable that gets distributed). So what's the big deal? Isn't that true for any project with accessible source code? Not at all. Reproducibility means that the resulting binary EXACTLY matches what gets distributed: each and every bit and byte of the binary is exactly the same, no matter on which machine the software gets built.
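The check itself is mechanical: build the project yourself and compare cryptographic checksums with the distributed artifact. As a toy sketch (the file names here are stand-ins, not real build outputs):

```shell
# Toy demonstration: a reproducible build yields bit-identical artifacts,
# so the checksums of two independent builds match exactly.
# build_a.bin and build_b.bin stand in for two separately built JARs.
printf 'identical build output' > build_a.bin
printf 'identical build output' > build_b.bin

A=$(sha256sum build_a.bin | cut -d' ' -f1)
B=$(sha256sum build_b.bin | cut -d' ' -f1)

if [ "$A" = "$B" ]; then
    echo "reproducible"
else
    echo "MISMATCH"
fi
```

If even a single byte differs, the hashes diverge completely, which is what makes this comparison so cheap and so sensitive.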

The benefit of this is that, on top of being able to verify that the source code doesn't contain any spyware or unwanted functionality, the user is now also able to verify that the distributable they got, e.g. from an app store, has no malicious code added to it. If, for example, the Google Play Store injected spyware into an application submitted by a developer, the binary distributed via the store would differ from the binary built by the user.

Why is this important? Well, Google already requires developers to submit their signing keys, so they can modify software releases after the fact. Now, reproducibility becomes a tool to verify that Google did not tamper with the binaries.

I try to make PGPainless' builds reproducible as well. A few months ago I added some lines to the build script which were supposed to make the project reproducible by using static file modification dates, as well as a deterministic file order in the JAR archive.

    // Reproducible Builds
    tasks.withType(AbstractArchiveTask) {
        preserveFileTimestamps = false
        reproducibleFileOrder = true
    }

It took a bit more tinkering back then to get it to work, though: I was using a Properties file written to disk at build time to access the library's version at runtime, and it turns out that the default Writer for Properties files includes the current time and date in a comment line. This messed up reproducibility, as that file would now be different each time the project got built. I eventually managed to fix that by writing the file myself using a custom Writer. When I tested my build script back then, both my laptop and my desktop PC were able to build the exact same JAR archive. I thought I was done with reproducibility.
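The timestamp comes from `java.util.Properties.store()`, which unconditionally prepends the current date as a comment line. A small sketch of the problem and of a deterministic workaround (class and method names here are my own illustration, not PGPainless code):

```java
import java.io.StringWriter;
import java.util.Properties;

public class ReproducibleProps {

    // Deterministic replacement for Properties.store(): write sorted
    // key=value lines ourselves and skip the date comment entirely.
    static String storeDeterministic(Properties props) {
        StringBuilder sb = new StringBuilder();
        props.stringPropertyNames().stream().sorted()
                .forEach(key -> sb.append(key).append('=')
                        .append(props.getProperty(key)).append('\n'));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("version", "1.2.2");

        StringWriter out = new StringWriter();
        props.store(out, null);
        // The default writer always prepends a '#<current date>' comment,
        // so this output changes on every single build:
        System.out.println(out.toString().startsWith("#"));   // true

        // This output is identical on every machine, at any time:
        System.out.print(storeDeterministic(props));          // version=1.2.2
    }
}
```

Sorting the keys matters too: `Properties` is hash-based, so iteration order is not guaranteed to be stable across JVM versions.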

Today I drafted another release for PGPainless. I noticed that my table of reproducible build hashes for each release was missing the checksums for some recent releases. I quickly checked out those releases, computed the checksums and updated the table. Then I randomly chose the 1.2.2 release and decided to check if the checksum published to Maven Central still matched my local checksum. To my surprise, it didn't! Was this a malicious act by Maven Central?

Release 1.2.2 was created while I was on my journey through Europe, so I had used my laptop to draft the release. So the first thing I did was grab the laptop, check out the release's source in git and build the binaries. Et voilà, I got checksums matching those on Maven Central. So it wasn't an attack; for some reason my laptop was producing different binaries than my main machine.

I transferred the “faulty” binaries over to my main machine to compare them in more detail. First I tried Meld, which is a nice graphical diff tool for text files. It completely froze though, as apparently it is not so great for comparing binary files.

Next I decompressed both binaries and compared the resulting folders. Meld did not report any differences, the directories matched perfectly. What the heck?

Next I tried diff 1.jar 2.jar which very helpfully displayed the message “Binary files 1.jar and 2.jar are different”. Thanks for nothing. After some more research, I found out that you could use the flag --text to make diff spit out more details. However, the output was not really helpful either, as the binary files were producing lots of broken output in the command line.

I did some research and found that there were special diff tools for JAR files. Checking out one project called jardiff looked promising initially, but eventually it reported that the files were identical. Hm…

Then I opened both files in ghex to inspect their byte code in hexadecimal. By chance I spotted some differences near the end of the files.

The same spots in the other JAR file looked identical, except that the A4 was replaced with B4. Strange. I managed to find a command which located and displayed all places in the two JAR files that had mismatches:

$ cmp -l 1.jar 2.jar | gawk '{printf "%08X %02X %02X\n", $1, strtonum(0$2), strtonum(0$3)}'
00057B80 ED FD
00057BB2 ED FD
00057BEF A4 B4
00057C3C ED FD
00057C83 A4 B4
00057CDE A4 B4
00057D3F A4 B4
00057DA1 A4 B4

Weird: in many places ED was changed to FD, and A4 to B4, in what looked like some sort of index near the end of the JAR file. At this point I was sure that my answers would unfortunately lie within the ZIP standard. Why ZIP? From what I understand, JAR files are mostly ZIP files: change the file ending from .jar to .zip and any standard ZIP tool will be able to extract your JAR file. There are probably nuances, but if there are, they don't matter for the sake of this post.
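Because JARs are ZIPs, Python's standard `zipfile` module can read exactly the metadata in question. A quick sketch for extracting the Unix permission bits of every entry in an archive (built here against an in-memory toy archive rather than a real JAR):

```python
import io
import zipfile


def entry_modes(data):
    """Return {filename: Unix permission bits} for a ZIP/JAR given as bytes.

    On archives created on Unix hosts, the permission bits live in the
    upper 16 bits of the central directory's 'external file attributes'
    field, exposed by zipfile as ZipInfo.external_attr.
    """
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        return {i.filename: (i.external_attr >> 16) & 0o7777
                for i in zf.infolist()}


# Create a toy archive with an explicit 0644 entry:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    info = zipfile.ZipInfo("hello.txt")
    info.external_attr = 0o644 << 16
    zf.writestr(info, b"hello")

print(oct(entry_modes(buf.getvalue())["hello.txt"]))  # -> 0o644
```

Running this against two differing JARs would have pinpointed the permission mismatch immediately, without any hex diffing.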

The first thing I did was check which versions of zip were running on both of my machines. To my surprise they matched, and since I wasn't even sure that JAR files are generated using the standard zip tool, this was a dead end for me. Searching the internet some more eventually led me to this site describing the file structure of PKZIP files. I originally wasn't sure if PKZIP was what I was looking for, but I had seen the characters PK when investigating the hex code before, so I gave the site a try.

Somewhere I read: The signature of the local file header. This is always '\x50\x4b\x03\x04'. Aha! I just had to search for the octets 50 4B 03 04 in my file! They should be in proximity to the bytes in question, so I just had to read backwards until I found them. Aaaand: 50 4B 01 02. Damn, this wasn't it. But 01 02 looked so suspiciously non-random, maybe I had overlooked something? Let's continue reading the website. Aha! The central directory file header. This is always '\x50\x4b\x01\x02'. The section even described its format in a nice table. Now I just had to manually count the octets to determine what exactly differed between the two JAR files.

It turns out that the octets 00 00 A4 81 I had observed to change in the file were labeled as “external file attributes; host-system dependent”. Again, not very self-explanatory but something I could eventually throw into a search engine.
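The manual octet counting can also be automated in a few lines: scan for the central directory signature and unpack the 4-byte field that starts 38 bytes into the fixed part of each header (the offset follows from the field table mentioned above). This is only a sketch — a naive signature scan can false-positive on compressed data, while real parsers locate the central directory via the end-of-central-directory record:

```python
import struct

def external_attrs(data):
    """Yield the 'external file attributes' value of every central
    directory file header (signature PK\\x01\\x02) found in the archive.

    NOTE: naive byte scan for illustration; compressed entry data could
    in principle contain the signature bytes and produce false hits.
    """
    attrs = []
    pos = data.find(b"PK\x01\x02")
    while pos != -1:
        # The external-attributes field sits 38 bytes into the header,
        # stored little-endian as an unsigned 32-bit integer.
        (ext,) = struct.unpack_from("<I", data, pos + 38)
        attrs.append(ext)
        pos = data.find(b"PK\x01\x02", pos + 4)
    return attrs
```

For a regular file with mode 644, this yields 0x81A40000 — the `00 00 A4 81` byte sequence (little-endian) from the hex dump above.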

Some post on StackOverflow suggested that this had to do with file permissions. Apparently ZIP files (and by extension also JAR files) would use the external attributes field to store read and write permissions of the files inside the archive. Now the question turned into: “How can I set those to static values?”.

After another hour of searching the internet with permutations of the search terms jar, archive, file permissions, gradle, external attributes, zip, I finally stumbled across a bug report in another software project that described the exact same issue I had: differing JAR files on different machines. In their case, their CI would build the JAR file in a Docker container and set different file permissions than the locally built file, hence a differing JAR archive.

In the end I found a bug report on the gradle issue tracker, which exactly described my issue and even presented a solution: dirMode and fileMode could be used to statically set permissions for files and directories in the JAR archive. One of the comments in that issue reads:

We should update the documentation.

The bug report was from 3 years ago…

Yes, this would have spared me from 3h of debugging 😉
But I probably would also not have gone onto this little dive into the JAR/ZIP format, so in the end I’m not mad.

Finally, my build script now contains the following lines to ensure reproducibility, no matter the file permissions:

    // Reproducible Builds
    tasks.withType(AbstractArchiveTask) {
        preserveFileTimestamps = false
        reproducibleFileOrder = true

        dirMode = 0755
        fileMode = 0644
    }

Finally, my laptop and my desktop machine produce the same binaries again. Hopefully this post can help others fix their reproducibility issues.

Happy Hacking!

Saturday, 18 June 2022

The KDE Qt5 Patch Collection has been rebased on top of Qt 5.15.5




Commercial release announcement: 

OpenSource release announcement:


I want to personally extend my gratitude to the Commercial users of Qt for beta testing Qt 5.15.5 for the rest of us.


The Commercial Qt 5.15.5 release introduced some bugs that have later been fixed. Thanks to that, our Patch Collection has been able to incorporate the reverts for those bugs [1] [2] [3] [4], and Free Software users will never be affected by them!

Thursday, 02 June 2022

Europe Trip Journal – Entry 30: Thank You for Traveling with Deutsche Bahn

It is time to go home. This morning I woke up at 9:20 and decided to get some breakfast. Afterwards I returned to my room and was greeted by my roommate H., who told me that the cleaning personnel wanted to do their job and that checkout was supposed to be done by 10:00. It was already 10:20, so I hurried and quickly collected all my belongings.

Then I went to the reception to check out and afterwards to the community area to spend my remaining hour until my train left. H. also came down at some point and we later said farewell. Then it was time for me to go to the station.

On my way I was sunk in thought about political events. As a European, it is my opinion that tragic events such as the Uvalde shooting are the result of ludicrous gun ownership regulations. Americans often cite personal safety as the reason why they need to have a gun. However, if everyone around you has a gun, anyone could shoot you at any second! I personally believe that this is part of the reason why American cops are so quick to open fire: they know that it is very likely that the other person has a gun and therefore presents a threat.

So the obvious solution is fewer weapons. However, how does this solution apply to Russia's war against Ukraine? Isn't it a double standard to call for fewer weapons in the USA while at the same time demanding delivery of weapons to Ukraine?

I’d argue that those are totally different situations. The American people are not at war with one another. Putin’s Russia, however, ruthlessly assaulted the Ukrainian people, and as Europeans we need to support them. In my opinion the best way to do so is by enabling Ukraine to defend itself, therefore we need to deliver heavy weaponry. In Germany many people have the mantra of “Never again war”: we as Germans must never again go to war. However, this statement is not precise enough. Germany must never again participate in a war of aggression. The proposition that Ukraine should surrender parts of its country to Russia as part of some compromise (as disgustingly phrased by the Emma magazine) in order to settle the war is arrogant and short-sighted. Alice Schwarzer should know better about victim blaming.

Those are hard pills to swallow, but the fact that Putin has not stopped his attack even after nearly the whole world told him to shows that there is no other way to stop him than a sufficiently armed Ukraine and the message that the West will not leave our friends alone. I arrived at this conclusion and at the train station at roughly the same time.

My ICE was waiting at the platform and boarding had already begun. The DB logo on the train made it strangely clear that my trip was over. I got to my seat and soon the train started rolling. Some 3 hours later, the conductor announced Mannheim, the stop where I was supposed to switch to another ICE to Münster. Up to this point everything had gone smoothly and the train was on time. Then it stopped – and the stop was not Mannheim.

The conductor struggled to tell the passengers in German, English and French that this was not our scheduled stop and that they should please keep the doors closed. This was just an unscheduled halt and we would soon continue. So we waited. At some point the train started moving again, only to stop once more a few meters later. My 20-minute buffer for switching trains was melting away.

Finally the train stopped in Mannheim, with 7 minutes to switch, so I quickly left the train and started looking for the platform I was supposed to go to. The train was scheduled to depart from platform 4. Arriving there, I was greeted by the digital sign reading that the train was instead stopping at platform 2A. Okay. I went down the underpass again and over to platform 2.

Track 2 was still occupied by an RE train. A loudspeaker voice announced that the train was incoming and was expected to stop at platform 2A once again. While the A-section was free, I doubted that the whole length of the train would fit there. Suddenly the sign changed once again, announcing that the train would now stop on platform 4 after all. What chaos! So I ran back, as I could already see the train rolling into the station.

Finally I got in and quickly found an unreserved seat. *phew*. And then the conductor announced that there was an issue with the signaling at the station, which meant that the train would depart with a delay. In the end they diverted the train, resulting in a total of 30 minutes of delay. You gotta love the German railway.

After 3 more hours and some announcements from an annoyed train driver, we came close to Münster. Outside my window it had begun to rain and there was a little thunderstorm going on. This was the first time during my trip that I experienced bad weather. It had only rained briefly while I was taking a shower in Madrid, and in Marseille there had been literally a few droplets while I was there, so facing the thunderstorm was another strange signal that my trip was over.

Then the train stopped and I got off. 20 minutes later I entered the bus that would bring me home. I bought one of those new fancy 9€ tickets and the bus took me home. This was the end of my journey. I felt a bit like Bilbo Baggins, coming back to the Shire, carrying a bag and lots of stories to tell.

All in all my trip took 31 days in which I traveled 5161km by train, divided up into 20 separate train rides which took a total of 43h of travel time. I visited a total of 4 foreign countries and the farthest distance to home was 1546km.

During my journey I learned that I’m not too bad at being a foreigner. It’s easier for me to get around in a city where nobody knows me than I had previously thought. To me, this first journey was a complete success. In hindsight, I made some mistakes along the way; for example, I didn’t really explore Nantes at all. I probably should have spent more time researching cities before going there, and maybe spending more time per stop is better than rushing from station to station. However, this is okay for me.

All in all this trip has taught me that I like traveling and that traveling is easier than I thought. Surely I will do more trips in the future, maybe even together with a friend.

Wednesday, 01 June 2022

KDE Gear 22.08 release schedule finalized

This is the release schedule the release team agreed on

Dependency freeze is in five weeks (July 7) and feature freeze one week after that.


Get your stuff ready!

Tuesday, 31 May 2022

Europe Trip Journal – Entry 29: Prepare for Reentry

Under a past blog post, Paul Boddie had recommended visiting the Musée de l’Air et de l’Espace to see some life-sized rockets. Huge thanks for this excellent recommendation!

Today I got up a bit earlier than yesterday, meaning I had to actually pay for breakfast again ;). Unfortunately the coffee was already running out, so I only got a single cup, which had to do. The milk was also empty, so no cereal this time. Well, there was yogurt as a worthy replacement.

Afterwards I left for the metro to the train station Charles de Gaulle Étoile. From there I was supposed to take line B which would bring me within walking distance to the museum. The metro system in Paris works with gates which would only open if you have a valid ticket. Sometimes you’d also need to present the ticket when exiting the station, but normally the exit gates would just open by motion detection.

To get to the platform for line B, I had to once again cross another gate. I presented the metro ticket I had used to get to the train station and the gate opened. Nice. Apparently this ride would count as one single trip.

The ride on the B train was rather short. Only about 10-15 minutes later I exited the train at my destination. I went to the exit and there was another gate. I inserted my ticket, but the gate did not open. I tried again, but the gate only sounded an alarm. Another try, this time with the ticket flipped around, but no success either. The display read something along the lines of “non valable”, invalid.

I looked around to see if there was personnel somewhere. Next to the gate stood a woman in uniform, but she explained that she worked for the airport and could not help me. I walked back to the station building and tried to get in to talk to a person at the counter, however to get there I would need to pass another gate. Fantastic.

Being stuck in a deadlock, I walked back to the gate I wanted to pass. There was a box with a button and a telephone symbol. I pressed the button and after some very loud ringing a person answered the phone. I explained that the gate would not let me pass. After I described the type of ticket, the person told me that my ticket was not valid outside of Paris. Oopsie. The man agreed to send an agent to my position; if nobody arrived within 5 minutes I should call again.

Some minutes later, nobody had shown up and I was fed up with waiting. While I had stood next to the gate waiting, multiple people had either jumped over it or simply closely followed others passing through. So finally I resolved my issue on my own in some way (details are not important) and continued my way to the museum.

After crossing a highway bridge, I could suddenly see some white tips reaching over some distant buildings. Exciting! I finally reached the building and eventually found the entrance. I decided to be patient and first visit the “l’Air” part of the museum. They actually had a lot of models and even real planes there. One building focused on the pioneers of aviation and displayed some of the first planes humanity ever built.

Another building housed WWI military machines. Yet another hall contained helicopters. Finally one hall was dedicated to space exploration. Here they displayed a plethora of satellites hanging from the ceiling, lots of sounding and ballistic rockets and capsules. Awesome!

Unfortunately my images from inside the museum turned out kind of blurry. Also, I heard some museums don’t like visitors taking pictures, so let this be the only image from the inside. Let me compensate by telling you: if you have the chance to go to Paris, check out the Musée de l’Air et de l’Espace! It’s worth it!

Finally I made my way to the yard outside, where they displayed a dozen or so airplanes and – two full sized Ariane rockets! I had hoped that being able to walk up to one of these behemoths would allow me to fully grasp their scale. However, I have learned that my brain just isn’t capable of making sense of those dimensions.

Speaking of tourists taking touristy pictures…

Taking this image was problematic, as I had to position my phone at quite some distance from the vehicle in order to get it fully in frame, but at the same time my phone’s camera only had a ten-second timer, so I had to sprint to get in position in time.

After drooling over the rockets for a while, I got lunch at the museum’s restaurant and then checked out the other exhibition halls. One of them housed two Concorde supersonic planes. You could enter them and visit the inside. What a marvel of flawed technology.

After watching some more war planes and inspecting some posters of military recruitment propaganda, I ended the day by taking the rocket selfie above and then left for the hostel once again. This time I bought a ticket before boarding the train and half an hour later I was at the hostel. There I sat down in the community area to write the last blog post, as well as parts of this one.

Later I returned to get a pint of beer to let my last night in Paris come to a close. Now it’s time to go to bed. Tomorrow I will return to Germany.

Europe Trip Journal – Entry 28: The Above and the Beneath

When I visited the roof top lounge of my hostel the day before, I had seen a place I had not yet been to and that a tourist in Paris probably should pay a visit to: The hill Montmartre with the Basilique du Sacré Cœur on top.

Therefore, the next day after getting up, I went down to the hostel’s breakfast area to get ready for the day. I was a bit late today (it was already 9:50), so the lady at the reception gave me a free breakfast stamp. I still got the full breakfast and did not have to hurry at all, so for me it was a pure win. After breakfast I left and went to the nearest metro station. Some switches later I got off at station Anvers, which is very close to the Sacré Cœur. I could actually already see it from there.

The basilica is a popular attraction, as from its elevated position tourists have a spectacular panoramic view over all of Paris. I climbed the stairs that lead to the top of the hill and was greeted by pitchmen trying to sell water bottles, small metal Eiffel Towers and other touristy gimmicks. Some of them also sold heart-shaped love locks which couples could attach to the fences around the church.

Paris truly is a MASSIVELY romantic city

You can visit the Sacré Cœur. It is possible both to go inside (it’s free) and to climb the 300 or so steps to the top. First I went inside the church. A churchman quickly signaled me to take off my hat. I was a bit disoriented by this demand for respect, but of course, being the guest, I followed his request.

In retrospect I realize that during my trip I have taken lots of photos of churches and cathedrals, even though I’m not a religious person. I believe that many problems of society are directly or indirectly caused by religious fanaticism, ideology and authoritarianism, hence I’m not a big fan. My philosophy is that if you need someone else – like a book – to tell you how to be a good human, you are not a good human. However, you cannot NOT be impressed by the massive scale of cathedrals built hundreds of years ago. These architectural masterpieces stand testament to the fact that religion can also drive humanity to excel, even though that may only be a side effect of exercising power.

Next I got in line to visit the rooftop. There was a sign saying that it would take 292 or so steps to get to the top, so I decided to count. Two thirds of the way up, debating in my head whether a downwards step should count as -1, 0 or 1 (what do you think?), I lost track and gave up on counting. At the top was a panoramic gallery which provided a wide view over the city.

From up here, I could clearly make out where train tracks were cutting a swathe through the jungle of buildings that made up the city. Somehow the metaphor of a “Gleiserne Wunde aus Stahl” (roughly, a “wound of steel rails”) kept floating around in my mind.

On the way down, I captured another piece for my collection of photos of tourists taking touristy photos.

Out of the basilica again, I wandered a bit through the streets of Montmartre, just letting the experience wash over me. There were lots of artists offering to draw pencil portraits of people. Every one of them had a collection of drawings on display, and most of them looked the same. A bit lost in thoughts on whether you can separate an artist from their work or not, I stumbled across a small restaurant that offered pizza with fondue cheese. Intrigued, I decided to give it a try, as I was a bit hungry too.

In the end I unfortunately cannot recommend it, as the pizza was actually two halves of a baguette with some toppings and a layer of half-molten fondue cheese on top. Given that it was quite cold that day and I was sitting in the shade, it did not take long for my meal to get cold. I’m not sure what the true definition of pizza is, but all I can say is that pizza let me down for a second time during my trip.

After this disappointment I decided to head to the next metro station. Metro stations in Paris look funky, by the way. Not all entrances look the same, but there are some that are designed to mimic plants in nature. In my opinion they look rather spooky and remind me of Tim Burton movies.

I decided to explore the metro network of Paris for a change. On my way from the train station to the hostel I had come across a station that had been stuck in my head for some time. Station Arts et Métiers has quite some steampunk vibes. To get there, I had to switch lines once, but then I exited at the station.

This must have been Jules Verne’s favorite metro station! There were signs pointing to the exit labelled “Musée des Arts et Métiers”. Oh, interesting, a museum dedicated to crafts and technology. I followed the signs upstairs and found the museum. Unfortunately it was closed on Mondays. A bit disappointed, I decided to maybe check it out the next day.

It got really chilly, so I took the metro back to my hostel. There I remembered that someone had recommended that I pay a visit to the Mozilla offices in Paris. I searched for it on OpenStreetMap and it was actually super close to my place. So I left the hostel again and 10 minutes later I stood in front of the building. From the outside there was not a single sign that this was Mozilla’s office. I tried one door, but it was locked. In another part of the building I found a door that opened into a hall with a small reception desk and some guards.

Asking whether I could visit Mozilla turned out to be a bit complicated, as the guards spoke only very little English and I only very little French. Luckily there was an electrician who could translate. A bit of confusion later, one of the guards offered to escort me to the office. Apparently Mozilla does not have regular visitors, as the guard did not know where the office was either. It turned out he spoke German, however, so at least I could now explain my endeavor a bit better.

After not finding any signs of Mozilla in the first half of the building, we went to the door that I had tried before and the guard let me in. We took the elevator up and voilà, there were Mozilla signs on the walls. However, unfortunately nobody answered our ringing (it was probably already after closing time) and there was a sign stating that no non-essential visitors were allowed during the pandemic. So we left the building again and I thanked the guards for their efforts.

The Mozilla wiki said that you could also message Mozilla staff in an IRC channel; however, they recently transitioned to Matrix and apparently did not yet update the wiki page. I briefly tried to search for a chat room related to the Paris office, but my Matrix server kept timing out. Oh, you brave, shiny, new and terribly inefficient technology, you keep amazing me every time 😉

Back in the hostel I watched some videos and tried to kill some time. My plan was to visit the Eiffel Tower at night to check out the famous light installation. So when the sun started to go down, I once again left for the metro. Arriving at the Eiffel Tower, I searched for a grocery store to get some affordable beers. The first store was already closed, while the next closest one was in the process of closing. I quickly hopped in, grabbed two beers without a second thought and went to the checkout. Then I walked back to the Eiffel Tower.

During my first visit I had only checked out the summit and then left by crossing the Seine, so I had not seen the large grass area which people used to sit on. This time I wanted to sit down there to get the full experience. I found a nice spot with a hedge to lean against and sat down.

On the other side of the hedge were two violinists playing along to pop culture songs from a loudspeaker. I wish they hadn’t. At least one of them was probably very new to playing the violin and was constantly at least two full notes off. It sounded horrible, but for some strange reason I also enjoyed listening to them. It was also interesting to see that some people seemed to unironically enjoy their playing.

And then the Eiffel Tower started glowing. It was a warm, orange-ish light and looked quite pretty against the darkening sky.

As you can see from my grainy face, my camera did its best to brighten up the photo, so just imagine the sky being more of a dark blue :D.

I had heard stories of it being illegal to photograph the light installation of the Eiffel Tower due to copyright issues, but apparently this only applies to professionals. So I guess, to be on the safe side of things, I need to make some kind of very inappropriate joke now to disqualify myself from that category.

Police in Paris have finally caught the elusive mime known for masturbating in public and harassing tourists.
In a statement, Police Chief claims “he came quietly”.

I’m sorry. Joke from

And so I sat there, amongst a crowd of other tourists, drinking random French beer, listening to off-key violins and watching the Eiffel Tower. This was my second-to-last night in Paris. It was only now that I realized that this adventure would soon be over. In some sense I was looking forward to getting home, being able to sleep in my own bed again and not having to worry about sleepless nights due to loud snoring. But this will surely not be my last travel adventure, so I’m not sad that this trip is coming to an end.

At some point the light show on the tower changed to a twinkling sparkle, accompanied by an outcry of awe from the crowd. I enjoyed the spectacle for a few more minutes and then went home to the hostel again. And so another day in Paris came to an end.

Monday, 30 May 2022

Europe Trip Journal – Entry 27: Oh Champs-Élysées

Yesterday I had some breakfast in my hostel and after that I took the metro to the Châtelet station which is close to the Seine. When I was in Paris near the start of my trip (which is almost 4 weeks ago at this point) I had walked down the riverside path of the Seine and therefore had missed some points of interest that lay behind the quay wall.

One of these things is the famous Louvre. I did not plan on visiting the inside, as I’m not really into looking at paintings, so I settled on sightseeing the building from the outside. The campus (I’m not actually sure if it all belongs to the Louvre) is huge. You are encircled by large facades and stared down upon by statues of famous people on the balconies above. Then, walking through an archway, you enter a large yard with the glass pyramids in the middle.

I still have an idea for a future art project, in case there are artistic people among those following my journey: taking pictures of tourists taking pictures of themselves. At the Louvre’s glass pyramids they have placed some granite blocks for tourists to stand on and take quirky images of themselves holding the pyramid by its tip. As a consequence, every tourist takes the exact same image.

Guess what their photo looks like

Behind the pyramids is a park with lots of statues. Some of them show rather questionable motifs, like this one of a woman who is clearly uncomfortable with being photographed.

Poor woman must have slipped just before the sculptor immortalized her misery

Imagine modelling for a sculptor and this is the result…

The park also had another water basin for children to play with toy sailboats. It was rather cold that day; I guess it was the coldest day of my journey so far. You can probably tell by the people in this image wearing jackets.

After the park came a big square with an obelisk in the center. As far as I know, this obelisk was a present from an Egyptian ruler to a French one.

They even still have packing information on the bottom

Leading from the obelisk down to the Arc de Triomphe is the Avenue des Champs-Élysées. Given the extent to which this street is romanticized in culture, I would have expected it to be spectacular. However, it was a bit underwhelming; it even took some street signs for me to notice that I was in fact walking down the Champs-Élysées. To me it looked like part shopping promenade, part park lane.

When I reached the Arc de Triomphe, I admit that I may have broken some traffic rules. The arc stands in the middle of a large roundabout with multiple lanes. Apparently there are underpasses for pedestrians to reach the center without needing to cross the street. However, I did not know that, so I just crossed the traffic lanes when I got the opportunity. Only when I crossed traffic another time on the other side of the arc did I notice that what I had thought was a metro station was in fact an entrance to the underpass. ¯\_(ツ)_/¯

At this point it became a bit windy, so I decided to get back to the hostel. I found an actual metro station not far away, and it took about 15 minutes or so to get me close to the hostel. I must say that I like the metro. It’s a shame that Münster doesn’t have a metro system. Being able to just walk to a station and have the next train arrive in less than 10 minutes is quite nice.

Back at the hostel I relaxed a bit in my room, then went shopping for some snacks and later went to the roof terrace. There I got some work done on my laptop. I considered getting a drink, but the prices were exorbitant, so I quickly discarded that plan.

And that already concludes yesterday. Originally I had planned to visit some ESA centers in Paris, but it turns out that they only contain offices and are not open to visitors :/. Someone recommended visiting the Mozilla office though, so I might try to check that out in the next few days.

Saturday, 28 May 2022

Europe Trip Journal – Entry 24 – 26: OpenPGP Email Summit

It’s been a while since the last blog post and a lot has happened. I will try to catch up, but I may not remember all the details. Nevertheless, here we go:

On Thursday morning I got up earlier to get some breakfast at the hostel this time. Last time I had missed it, but not this time. For a handful of euros the hostel was offering breakfast on an all-you-can-eat basis. They had croissants, filtered coffee, orange and apple juice, yogurt and spreads like butter, jam and honey. I enjoyed it.

I briefly met A. again, who wished me a good journey, and then I left the hostel to catch the train to Geneva. The ride through the Alps was very impressive. I was looking out of the window most of the time, as the mountains appeared to grow higher and higher. Absurdly high. Every now and then the train made a stop at a small village and some people left the train while others got on. And then the train once more meandered through the green mountains of the French Alps.

During the trip I suddenly remembered that Switzerland is not part of the EU, so the roaming rules that apply in all EU countries (allowing EU citizens to use mobile internet and telephony abroad without having to worry about astronomical bills from their provider) would not apply. I quickly did some research on how bad it would be.

7cts/10KB. What. The. Fuck. I quickly disabled roaming and mobile internet. Guess I’ll be dependent on the availability of Wifi for the next days.

After arriving in Geneva, I had to walk through customs, but they did not check my luggage; unpacking everything would have been annoying. My hotel (I hadn’t found a hostel, so I had opted for an Ibis budget hotel) was on the outskirts of the city, so I had to walk for about an hour, but I found it without trouble. Still, the location of the hotel was a bit strange, as I had to pass the currently deserted Palexpo exposition halls and wide, empty parking lots just to get to the building, which looked a bit like the staircase of a parking deck. But I had my own room, so hey 🙂 Oh, and they gave me a “free” ticket for public transport during my stay, so that’s nice.

After quickly refreshing, I took the bus to the other end of Geneva to the offices of Proton (formerly Protonmail). They hosted the 6th OpenPGP Email Summit, which I was going to attend. When I got to the building, I had to call the office upstairs and a member of the Proton team came down to fetch me. On Thursday we only had an informal meeting of the participants that had already arrived. The real discussions would take place on Friday and Saturday, although when I entered the office room people already had discussions going.

After the meeting we went to a small bar to get some drinks and a small dinner. This being my first OpenPGP meetup (apart from the Sequoia meeting I was invited to some time ago), it was nice getting to know many of the people I already knew from the internet in person.

When later that evening people started heading back to their hotels, I figured this was a good idea for me as well. Once everybody was gone, I noticed that I actually didn’t know how to get to my hotel. Unfortunately, Organic Maps only has built-in offline public transport support for metros and trams, so I searched for a tram route that would at least take me a bit closer to my destination. However, it turned out that the tram was cancelled when I got to the station. I walked back to the last bus stop I had seen and asked the driver of the next bus which line would go to the airport. I’m a bit proud of myself, because the driver could not speak English, so I had to try in French, and apparently it worked out. “Vingt-trois” was the answer.

Now I only had to find out from where the 23 would depart. I asked a lady at the bus stop and she told me which bus to take and where to switch. So at 00:20 I was back at the hotel.

The next morning I got up at 8:00 and, without having breakfast, went to the Proton offices. Luckily it turned out they had croissants and coffee there. We had 2 presentations and afterwards everybody gathered potential topics to discuss. Then we voted on all candidates and picked the most popular ones for one-hour sessions. There were 3 tracks with 3 sessions each.

The sessions were very productive. I learned some crazy facts about things that enterprise OpenPGP providers would have to deal with and got a good insight into the daily challenges of client developers and the future of the protocol. It will be exciting 🙂

After 8 long, exhausting hours it was time for dinner. We met at an Italian restaurant and had some very nice food, beer and wine. I really enjoyed socializing with all these like-minded folks. At some point it was time again to get back to the hotel, but this time I asked someone from the gathering to quickly let me use their phone to find a route.

The night was not very restful. I had eaten too much, so my stomach was complaining a bit, and it took some time until it finally allowed me to sleep. The next morning I overslept by a few minutes and then had to pack my stuff, as I had to check out again.

After successful check-out I took the bus to the Proton offices again and we had another day of sessions. In the end we collected some actionable items and assigned people to work on those tasks. And then I said farewell and left for my train.

During my stay in Geneva I didn’t take many photos, mainly because most of the time I was with other people. There was a gentleman’s agreement not to take photos of others, which is why this post does not have any pictures.

My train to Paris was supposed to depart at 18:30. At the station I had to walk through customs again and they picked the person in front of me for a detailed control. I was let through though. When I got to the platform the sign said that the train was delayed for 25 minutes. Fine. Then suddenly the sign read that the train was delayed indefinitely. Not good. In the end the train had about 70 minutes delay, so it was 23:30 when I got off in Paris. 2 metro rides later I got to my hostel and checked in.

I want to say thank you to Proton for hosting the event and to everyone who attended the summit and contributed to making it such a nice experience 🙂

Now I have to get some sleep. Good night 🙂

Wednesday, 25 May 2022

Europe Trip Journal – Entry 23: A Hike and a Hat

Last night I was constantly reassured that I was not alone by the incessant snoring of my room mates. Even with ear plugs in, I could not escape their nonverbal consolation. At some point I was even worried about the health of the person sleeping in the bed below me, as the rhythm of their breathing was almost two times faster than mine. That couldn’t be healthy!

This was the reason why the next morning I only got up at 11:00. I took a shower, reusing my old shirt as a replacement for a towel, and then went down to get some breakfast. Unfortunately the hostel only offered breakfast until 10:30, so I had to opt for a cappuccino and a brownie instead. It still tasted nice, so I did not mind too much.

After finishing the brownie (I had downed the cappuccino at a pace that surprised even me), I went back to my bed for an hour or so, but then it was finally time to go out. I had not yet seen much of Lyon, so it was time to change that.

The bridge was shaking when a jogger passed by – also, is that the Eiffel Tower in the distance???

On Wikivoyage I read that there was a historic part of the city which dated back to the year 1400 or so, so I thought this would be a good place to start. I located one of the historic quarters and started walking there. When I arrived though, I was quite disappointed. Nothing here looked like it could be from the Middle Ages. So I ditched that plan and simply chose the most interesting-looking street to follow down.

On a hill I could make out a church-like building towering over the city. Surely being up there would grant a really nice view over Lyon! The path upwards was steep and exhausting, so when I finally reached the top after following some serpentines and climbing some stairs, I needed a short break. In a restaurant I bought a vegetarian salad and a coke and then sat down outside in the sun.

View over the city

Luckily I had taken all the wrong turns on my way up, so the way down was pretty straightforward. I could even take a shortcut, following some very long stairs down, which even had a street name attached to them.

After passing some more impressive churches and huge buildings, I found myself in a part of the city which was strangely modern, yet old-fashioned. I’m struggling a bit to find the right words to describe the feeling I had while walking through the streets and squares here, but let me try nonetheless.

This part of the city looked like something from the middle of the 20th century. The houses were white, and there was a large, open square with a splendid fountain in the middle. I suddenly had the strange feeling of living in a world that stood for ideals. This is what a person in the golden 1920s must have felt like, filled with optimism for a brave new world of reason and peace. A world of progress, which values achievement and advancement over despotism and conservatism, science over religion. I don’t know why I felt that way, or what made me feel so, but nevertheless I did.

Wandering through a passageway, I came across a store that sold French hats. Since this was expected to be my last stay in France on this tour, it was now or never, so I entered the shop and ended up buying a hat. Surely, wearing this hat, I would perfectly blend in with the local population. Later I learned that my hat was from Italy :P. I normally don’t enjoy buying clothing, and a hat is something I would never have really identified with before. Still, for some reason this hat made me very happy, as it perfectly captured and somehow embodied what I felt in that moment.

Look at my amazing hat!

Back at the hostel, I decided not to get lunch at a restaurant today, but instead to use the hostel’s guest kitchen to cook my own meal. On OpenStreetMap I located a convenience store only about 5 minutes away. Proudly sporting my new hat, I bought some pasta which I could portion into a paper bag to avoid leftovers, as well as a bottle of beer and an avocado. I also needed some cheese for the pasta, but did not want to get a whole block of cheese, since that would be too much for me. So I got innovative and bought some easy-to-portion Babybel cheese.

Back at the hostel, the pasta was quickly done. Unfortunately the Babybel turned out to be hard to grate, so I just cut it up and added it to the pasta. Meanwhile I got into a conversation with a Frenchman called A., with whom I talked about Astérix, the history of France and Germany, economics, as well as the war in Ukraine. We had dinner together and he later wished me a good journey.

Monday, 23 May 2022

Akademy 2022 Call for Participation is open

The Call for Participation for Akademy is officially open!

...and it closes relatively soon: Sunday, the 12th of June 2022!

You can find more information and submit your talk abstract here:

If you have any questions or would like to speak to the organizers, please contact

Saturday, 14 May 2022

The KDE Qt5 Patch Collection has been rebased on top of Qt 5.15.4



Commercial release announcement: 

OpenSource release announcement:


I want to personally extend my gratitude to the Commercial users of Qt for beta testing Qt 5.15.4 for the rest of us.


The Commercial Qt 5.15.4 release introduced some bugs that have since been fixed. Thanks to that, our Patch Collection has been able to incorporate the reverts for the two bugs that affected Android and Windows, so Free Software users will never be affected by them!
