Free Software, Free Society!
Thoughts of the FSFE Community (English)

Monday, 19 February 2024

Lots of love from out of Hack42

“I Love Free Software Day” is a nice and positive campaign by the FSFE to thank, on Valentine’s Day, the people who enable Free Software. With many gatherings, it is also a good opportunity to get together. In the Netherlands there was a workshop in The Hague, and we had a meeting in Arnhem at hackerspace Hack42.

The meeting started with a tour for those who hadn’t visited the hackerspace before, especially relevant because Hack42 only recently moved to this location. Then we could start for real: first an introduction round while enjoying slices of pizza. Everybody talked about their personal experiences with Free Software and about the software and people that deserve a thank-you.

Group picture of attendees (except for photographer)

In this way we learned about a lot of different software: the web browsers Waterfox and Firefox, the browser add-on Vimium, the desktop environment KDE, the music program Audacity, the text editor Vim (in memoriam Bram Moolenaar), the photo importer Rapid Photo Downloader, the smartphone operating systems CalyxOS and UBports, the smartphone installer OpenAndroidInstaller, the catalogue software Omeka, the compiler Free Pascal, the personal cloud environment Nextcloud, the document editor LibreOffice and the home automation software Home Assistant. This was an inspiring and insightful round. Remarkably, for almost everybody Firefox was one of the first Free Software applications.

Writing thank-yous started mostly with email and chat, because most projects and developers lack a postal address. But after some research, more and more handwritten I Love Free Software Day postcards ended up on the table, ready to send. It was great to see the collaboration, with attendees supporting each other’s cards with a signature. While we were at it, we also noticed some thank-yous online on social media. The animated hearts by Fedora really stood out.

Written I Love Free Software postcards

It was a great evening that really brought the community together. I’m proud of the enthusiasm and kind words. Hopefully the thank-yous we sent have a great impact.

I’m already looking forward to a love-filled meeting next year.

Thursday, 15 February 2024

How to deal with Wikipedia’s broken graphs and charts by avoiding Web technology escalation

Almost a year ago, a huge number of graphs and charts on Wikipedia became unviewable because a security issue had been identified in the underlying JavaScript libraries employed by the MediaWiki Graph extension, necessitating this extension’s deactivation. Since then, much effort has been expended formulating a strategy to deal with the problem, although it does not appear to have brought about any kind of workaround, let alone a solution.

The Graph extension provided a convenient way of embedding data into a MediaWiki page that would then be presented as, say, a bar chart. Since it is currently disabled on Wikipedia, the documentation fails to show what these charts looked like, but they were fairly basic, clean and not unattractive. Fortunately, the Internet Archive has a record of older Wikipedia articles, such as one relevant to this topic, and it is able to show such charts from the period before the big switch-off:

Performance evolution of the Archimedes and various competitors: a chart produced by the Graph extension

The syntax for describing a chart suffered somewhat from following the style that these kinds of extensions tend to have, but it was largely tolerable. Here is an example:

{{Image frame
 | caption=Performance evolution of the Archimedes and various competitors
 | content = {{Graph:Chart
 | width=400
 | xAxisTitle=Year
 | yAxisTitle=VAX MIPS
 | legend=Product and CPU family
 | type=rect
 | x=1987,1988,1989,1990,1991,1992,1993
 | y1=2.8,2.8,2.8,10.5,13.8,13.8,15.0
 | y2=0.5,1.4,2.8,3.6,3.6,22.2,23.3
 | y3=2.1,3.4,6.6,14.7,19.2,30,40.3
 | y4=1.6,2.1,3.3,6.1,8.3,10.6,13.1
 | y1Title=Archimedes (ARM2, ARM3)
 | y2Title=Amiga (68000, 68020, 68030, 68040)
 | y3Title=Compaq Deskpro (80386, 80486, Pentium)
 | y4Title=Macintosh II, Quadra/Centris (68020, 68030, 68040)
 }}
}}

Unfortunately, rendering this data as a collection of bars on two axes relied on a library doing all kinds of potentially amazing but largely superfluous things. And, of course, this introduced the aforementioned security issue that saw the whole facility get switched off.

After a couple of months, I decided that I wasn’t going to see my own contributions diminished by a lack of any kind of remedy, and so I did the sensible thing: use an established tool to generate charts, and upload the charts plus source data and script to Wikimedia Commons, linking the chart from the affected articles. The established tool of choice for this exercise was gnuplot.

Migrating the data was straightforward and simply involved putting the data into a simpler format. Here is an excerpt of the data file needed by gnuplot, with some items updated from the version shown above:

# Performance evolution of the Archimedes and various competitors (VAX MIPS by year)
Year    "Archimedes (ARM2, ARM3)" "Amiga (68000, 68020, 68030, 68040)" "Compaq Deskpro (80386, 80486, Pentium)" "Mac II, Quadra/Centris (68020, 68030, 68040)"
1987    2.8     0.5     2.1     1.6
1988    2.8     1.5     3.5     2.1
1989    2.8     3.0     6.6     3.3
1990    10.5    3.6     14.7    6.1
1991    13.8    3.6     19.2    8.3
1992    13.8    18.7    28.5    10.6
1993    15.1    21.6    40.3    13.1

Since gnuplot is more flexible and more capable in parsing data files, we get the opportunity to tabulate the data in a more readable way, also adding some commentary without it becoming messy. I have left out the copious comments in the actual source data file to avoid cluttering this article.

And gnuplot needs a script, requiring a little familiarisation with its script syntax. We can see that various options are required, along with axis information and some tweaks to the eventual appearance:

set terminal svg enhanced size 1280,960 font "DejaVu Sans,24"
set output 'Archimedes_performance.svg'
set title "Performance evolution of the Archimedes and various competitors"
set xlabel "Year"
set ylabel "VAX MIPS"
set yrange [0:*]
set style data histogram
set style histogram cluster gap 1
set style fill solid border -1
set key top left reverse Left
set boxwidth 0.8
set xtics scale 0
plot 'Archimedes_performance.dat' using 2:xtic(1) ti col linecolor rgb "#0080FF", \
     '' u 3 ti col linecolor rgb "#FF8000", \
     '' u 4 ti col linecolor rgb "#80FF80", \
     '' u 5 ti col linecolor rgb "#FF80FF"

The result is a nice SVG file that, when uploaded to Wikimedia Commons, will be converted to other formats for inclusion in Wikipedia articles. The file can then be augmented with the data and the script in a manner that is not entirely elegant, but the result allows people to inspect the inputs and to reproduce the chart themselves. Here is the PNG file that the automation produces for embedding in Wikipedia articles:

Performance evolution of the Archimedes and various competitors: a chart produced by gnuplot and converted from SVG to PNG for Wikipedia usage.

Embedding the chart in a Wikipedia article is as simple as embedding the SVG file, specifying formatting properties appropriate to the context within the article:

[[File:Archimedes performance.svg|thumb|upright=2|Performance evolution of the Archimedes and various competitors]]

The control that gnuplot provides over the appearance is far superior to that of the Graph extension, meaning that the legend in the above figure could be positioned more conveniently, for instance, and there is a helpful gallery of examples that make familiarisation and experimentation with gnuplot more accessible. So I felt rather happy and also vindicated in migrating my charts to gnuplot despite the need to invest a bit of time in the effort.

While there may be people who need the fancy JavaScript-enabled features of the currently deactivated Graph extension in their graphs and charts on Wikipedia, I suspect that many people do not. For that audience, I highly recommend migrating to gnuplot and thereby eliminating dependencies on technologies that are simply unnecessary for the application.

It would be absurd to suggest riding in a spaceship every time we wished to go to the corner shop, knowing full well that more mundane mobility techniques would suffice. Maybe we should adopt similar, proportionate measures of technology adoption and usage in other areas, if only to avoid the inconvenience of seeing solutions being withdrawn for prolonged periods without any form of relief. Perhaps, in many cases, it would be best to leave the spaceship in its hangar after all.

Wednesday, 14 February 2024

Talking more about Freedom not Less

I don’t like the term “Open Source”, because it does not refer to freedom. When a computer program is labelled “Free” (or “Free to Play” in the case of games), we have the same problem: “Free” often just means a price of zero. By contrast, games such as “Tanks of Freedom” and “Freedom Saber” really respect users’ freedom. So I try to avoid using “free” as an adjective and use the term “freedom” instead: rather than saying that something is “Free Software”, I say that it respects the users’ freedom.

I Love Free Software Day 2024

I recently gave my first FOSDEM talk, about a Free Software project that I contribute to: Using the ECP5 for Libre-SOC prototyping. The day before, I met some of the GNU Guix developers. With this short blog post I want to say a simple “Thank you” to the people I met at FOSDEM, and to those who have started projects such as Libre-SOC, SlimeVR, CrazyFlie and Godot Engine.

Because I Love Free Software, I have started my own Free Software project called LibreVR. The FSFE’s sister organisation in North America has a “Respects Your Freedom” certification program. I have recently begun working on my hardware design for a wireless VR headset and will soon do regular live streams documenting my work on Free Software VR games and hardware.

Monday, 12 February 2024

How does the saying go, again?

If you find yourself in a hole, stop digging? It wasn’t hard to be reminded of that when reading an assertion that a “competitive” Web browser engine needs funding to the tune of at least $100 million a year, presumably on development costs, and “really” $200-300 million.

Web browsers have come a long way since their inception. But they now feature absurdly complicated layout engines, all so that the elements on the screen can be re-jigged at a moment’s notice to adapt to arbitrary changes in the content, and yet they still fail to provide the kind of vanity publishing visuals that many Web designers seem to strive for, ceding that territory to things like PDFs (which, of course, generally provide static content). All along, the means of specifying layout either involves the supposedly elegant but hideously overcomplicated CSS or has scripts galore doing all the work, presumably all pounding the CPU as they do so.

So, we might legitimately wonder whether the “modern Web” is another example of technology for technology’s sake: an effort fuelled by Valley optimism and dubiously earned money that not only undermines interoperability and choice by driving out implementers who are not backed by obscene wealth, but also promotes wastefulness in needing ever more powerful systems to host ever more complicated browsers. Meanwhile, the user experience is constantly degraded: now you, the user, get to indicate whether hundreds of data surveillance companies should be allowed to track your activities under the laughable pretense of “legitimate interest”.

It is entirely justified to ask whether the constant technological churn is giving users any significant benefits or whether they could be using less sophisticated software to achieve the same results. In recent times, I have had to use the UK Government’s Web portal to initiate various processes, and one might be surprised to learn that it provides a clear, clean and generally coherent user experience. Naturally, it could be claimed that such nicely presented pages make good use of the facilities that CSS and the Web platform have to offer, but I think that it provides us with a glimpse into a parallel reality where “less” actually does deliver “more”, because reduced technological complication allows society to focus on matters of more pressing concern.

Having potentially hundreds or thousands of developers beavering away on raising the barrier to entry for delivering online applications is surely another example of how our societies’ priorities can be led astray by self-serving economic interests. We should be able to interact with online services using far simpler technology running on far more frugal devices than multi-core systems with multiple gigabytes of RAM. People used things like Minitel for a lot of the things people are doing today, for heaven’s sake. If you had told systems developers forty years ago that, in the future, instead of just connecting to a service and interacting with it, you would end up connecting to dozens of different services (Google, Facebook, random “adtech” platforms running on dark money) to let them record your habits, siphon off data, and sell you things you don’t want, they would probably have laughed in your face. We were supposed to be living on the Moon by now, were we not?

The modern Web apologist would, of course, insist that the modern browser offers so much more: video, for instance. I was reminded of this a few years ago when visiting the Oslo Airport Express Web site which, at that time, had a pointless video of the train rolling into the station behind the user interface controls, making my browser run rather slowly indeed. As an undergraduate, our group project was to design and implement a railway timetable querying system. On one occasion, our group meeting focusing on the user interface slid, as usual, into unfocused banter where one participant helpfully suggested that behind the primary user interface controls there would have to be “dancing ladies”. To which our only female group member objected, insisting that “dancing men” would also have to be an option. The discussion developed, acknowledging that a choice of dancers would first need to be offered, along with other considerations of the user demographic, even before asking the user anything about their rail journey.

Well, is that not where we are now? But instead of being asked personal questions, a bunch of voyeurs have been watching your every move online and have already deduced the answers to those questions and others. Then, a useless video and random developer excess drains away your computer’s interactivity as you type into treacle, trying to get a sensible result from a potentially unhelpful and otherwise underdeveloped service. How is that hole coming along, again?

Saturday, 10 February 2024

Plucker/Palm support removed from Okular for 24.05

We recently removed the Plucker/Palm support from Okular, because it was unmaintained and we couldn't even find [m]any suitable files to test it with.

If you are using it, you have a few months to step up and bring it back; if not, let's let it rest.

Monday, 29 January 2024

Self-hosted media center


This is a typical documentation post on how to set up a stack of open source tools to create a media center at home. That involves not just the frontend, which you can use on your TV or other devices, but also the tools needed for monitoring the release of certain movies and TV shows.

By the time you reach the end of the post and look at the code, you will be wondering "is it worth the time?". I had the same reservations when I started looking into all these tools, and it's definitely something to consider. But they do simplify a lot of the tasks that you probably do manually now. And in the end, you get an interface whose user experience is similar to that of many commercial streaming services.

To minimize the effort of installing all this software and to reduce future maintenance, you can use Docker containers. The LinuxServer.io project has done some amazing work in this area, providing pre-built container images. It is definitely worth your support if you can afford to donate.


  • Movies: Radarr
  • TV Shows: Sonarr
  • Torrents: Transmission. This is probably the only part of the whole stack that you have the flexibility to choose between various options.
  • Indexer: Jackett. It works as a proxy that translates queries from all the other apps into HTTP queries to torrent trackers, parses the HTML or JSON responses, and then sends the results back to the requesting software (Transmission in this case).
  • Subtitles: Bazarr
  • Media Center: Jellyfin
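To make the indexer-proxy idea more concrete, here is roughly what a Torznab search query from Sonarr or Radarr to Jackett looks like; the host, port, API key and search term below are placeholders of my own, not values from this setup:

```shell
# Build a hypothetical Torznab search query against a local Jackett
# instance. Jackett translates this into the tracker's own search,
# parses the response and returns a normalised XML feed of results.
JACKETT="http://127.0.0.1:9117"
APIKEY="YOUR_API_KEY"
URL="${JACKETT}/api/v2.0/indexers/all/results/torznab/api?apikey=${APIKEY}&t=search&q=example"
echo "$URL"
```

Copying the Torznab feed URL (without the query parameters) and the API key into Sonarr's or Radarr's indexer settings is all the configuration section further below asks for.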

Docker Compose

Below I include a docker compose file that will make everything work together. Some prerequisites that you need to take care of:

  • Create a new user that would be the one running these docker containers.
  • Depending on your Linux distribution, you may need to add this user to the docker group.
  • Switch to that user and run id. Use the numeric values of uid and gid to replace the values of PUID and PGID respectively in the compose file below.
  • All containers need to share a volume for all the media (see the volumes configuration at the bottom of the file). Hard links are then used to avoid duplicating files or doing unnecessary file transfers. For a more detailed explanation, see Radarr's documentation.
  • If you live in a country that censors Torrent trackers, you need to override DNS settings at least for the Jackett service. The example below is using RadicalDNS for that purpose.
  • Adjust the volume paths to your preference. The example is using /data for configuration directories per app and /data/public for the actual media.
  • Save this file as docker-compose.yml.
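The user preparation described above boils down to a couple of commands; the user name "mediacenter" is just an example of mine:

```shell
# One-time setup (run as root; "mediacenter" is an example name):
#   useradd --create-home mediacenter
#   usermod --append --groups docker mediacenter

# Then, as that user, print the numeric ids that the compose file's
# PUID/PGID values must match.
PUID=$(id -u)
PGID=$(id -g)
echo "PUID=${PUID} PGID=${PGID}"
```

With matching PUID/PGID values, files created by the containers stay owned by this user.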

version: "3.7"

services:
  # The image names below assume the pre-built LinuxServer.io containers
  # mentioned above; adjust them if you prefer different builds.
  transmission:
    image: lscr.io/linuxserver/transmission:latest
    container_name: transmission
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - UMASK=002
      - USER= #optional
      - PASS= #optional
    volumes:
      - /data/transmission:/config
      - data:/data
    ports:
      - 9091:9091
      - 51413:51413
      - 51413:51413/udp
    restart: unless-stopped

  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    container_name: sonarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - UMASK=002
    volumes:
      - /data/sonarr:/config
      - data:/data
    ports:
      - 8989:8989
    restart: unless-stopped

  radarr:
    image: lscr.io/linuxserver/radarr:latest
    container_name: radarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - UMASK=002
    volumes:
      - /data/radarr:/config
      - data:/data
    ports:
      - 7878:7878
    restart: unless-stopped

  jackett:
    image: lscr.io/linuxserver/jackett:latest
    container_name: jackett
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - UMASK=022
    volumes:
      - /data/jackett:/config
      - data:/data
    ports:
      - 9117:9117
    restart: unless-stopped

  bazarr:
    image: lscr.io/linuxserver/bazarr:latest
    container_name: bazarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - UMASK=022
    volumes:
      - /data/bazarr:/config
      - data:/data
    ports:
      - 6767:6767
    restart: unless-stopped

  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    container_name: jellyfin
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - UMASK=022
      - JELLYFIN_PublishedServerUrl= #optional
    volumes:
      - /data/jellyfin:/config
      - data:/data
    ports:
      - 8096:8096
    restart: unless-stopped

volumes:
  data:
    driver: local
    driver_opts:
      type: none
      device: /data/public
      o: bind


To make it easier to access all those services, Nginx can be used to map the ports exposed by Docker under the same domain. You can of course just use your server's IP address, but having a domain name can also make things easier for other people who are not as good as you at memorizing IP addresses (I know, right?).

Although it may not be considered good practice to point an external domain to an internal IP, it can be very convenient in this use case, since it allows you to issue a valid and free SSL certificate using Let's Encrypt.

Below is a simple Nginx configuration that can work together with the docker compose setup described above.

upstream transmission {
  server 127.0.0.1:9091;
  keepalive 4;
}

upstream sonarr {
  server 127.0.0.1:8989;
  keepalive 4;
}

upstream radarr {
  server 127.0.0.1:7878;
  keepalive 4;
}

upstream jackett {
  server 127.0.0.1:9117;
  keepalive 4;
}

upstream bazarr {
  server 127.0.0.1:6767;
  keepalive 4;
}

upstream jellyfin {
  server 127.0.0.1:8096;
  keepalive 4;
}

server {
  listen 80;
  listen [::]:80;
  server_name example.com;  # replace with your own domain
  return 301 https://$server_name$request_uri;
}

server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name example.com;  # replace with your own domain

  ssl_certificate "/etc/certs/acme/fullchain.cer";
  ssl_certificate_key "/etc/certs/acme/";

  location /radarr {
    include /etc/nginx/snippets/proxy_pass.conf;
    proxy_pass http://radarr;
  }

  location /sonarr {
    include /etc/nginx/snippets/proxy_pass.conf;
    proxy_pass http://sonarr;
  }

  location /jackett {
    include /etc/nginx/snippets/proxy_pass.conf;
    proxy_pass http://jackett;
  }

  location /bazarr {
    include /etc/nginx/snippets/proxy_pass.conf;
    proxy_pass http://bazarr;
  }

  location /transmission {
    include /etc/nginx/snippets/proxy_pass.conf;
    proxy_pass_header X-Transmission-Session-Id;
    proxy_pass http://transmission;
  }

  location / {
    include /etc/nginx/snippets/proxy_pass.conf;
    proxy_pass http://jellyfin;
  }
}
Some things to take care of:

  • Replace the server name with yours.
  • With the exception of Jellyfin, all other services are served from a path. You may need to adjust the application settings after the first run to make this work. As an example, Radarr will need <UrlBase>/radarr</UrlBase> in its config.xml.
  • Since proxy_pass options are the same for all services, there is an include directive pointing to the snippet below.

proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;


Since the subdomain will be pointing to an internal IP, it can be difficult to use the HTTP challenge to get a certificate. Instead, you can use acme.sh, which supports many DNS providers and can automate the DNS challenge verification.

Here is an example command for issuing a certificate for the first time, using Cloudflare DNS (example.com stands in for your own domain): acme.sh --debug --issue --dns dns_cf -d example.com --dnssleep 300

You will need to make sure that the Nginx configuration points to the certificates created by acme.sh.

Run it!

All you have to do is bring the Docker containers up. Switch to the user you created for that purpose and go to the directory where you saved docker-compose.yml:

docker-compose up -d

As root you should also start Nginx:

systemctl enable --now nginx.service

And that's it!


Some post-installation configuration to make everything work together:

  • As mentioned above, make sure to adjust "URL base" and use the location path configured in Nginx (e.g. /sonarr for Sonarr) in all the applications.
  • Whatever torrent client you choose, make sure to configure it for both Radarr and Sonarr as a Download Client under their Settings options.
  • On Transmission, you can choose "Require Encryption" in Preferences > Peers > Encryption mode. You will probably lose some peers, but you'll prevent your ISP from knowing what content you are downloading.
  • After you add some torrent trackers to Jackett, you will also need to configure Indexers under the Settings options in both Sonarr and Radarr. Copy the Torznab feed and the API key from Jackett to make it work.
  • For subtitles, you first need to add some Providers in Bazarr's Settings options, and then create at least one Language Profile under Languages, so that Bazarr knows what languages to look for.
  • Both Sonarr and Radarr support importing existing media files and they provide some on-screen instructions on how to structure your files in a way they understand.

Future maintenance

Upgrading the whole stack is just two commands:

docker-compose pull
docker-compose up -d

You can also create a systemd service to run the Docker containers on boot. It also helps if you want to check logs and are familiar with journald. Here is a simple service file:

[Unit]
Description=Media Center

[Service]
User=username
Group=group
WorkingDirectory=/home/username
RemainAfterExit=yes
ExecStart=docker-compose up -d
ExecReload=docker-compose restart
ExecStop=docker-compose stop

[Install]
WantedBy=multi-user.target
  • Adjust WorkingDirectory to point to the directory holding docker-compose.yml.
  • Make sure to replace username and group with your settings.
  • Create this file inside /etc/systemd/system/

Reload systemd to pick up the new service file, then enable and start the service:

systemctl daemon-reload
systemctl enable --now mediacenter.service


Friday, 26 January 2024

Slow but Gradual L4Re Progress

It seems a bit self-indulgent to write up some of the things I have been doing lately, but I suppose it helps to keep track of progress since the start of the year. Having taken some time off, it took a while to get back into the routine, familiarise myself with my L4Re efforts, and to actually achieve something.

The Dry, Low-Level Review of Mistakes Made

Two things conspired to obstruct progress for a while, both related to the way I handle interprocess communication (IPC) in L4Re. As I may have mentioned before, I don’t use the L4Re framework’s own IPC libraries because I find them either opaque or cumbersome. However, that puts the burden on me to get my own libraries and tools right, which I failed to do. The offending area of functionality was that of message items which are used to communicate object capabilities and to map memory between tasks.

One obstacle involved memory mapping. Since I had evolved my own libraries gradually as my own understanding evolved, I had decided to allocate a capability for every item received in a message. Unfortunately, when I introduced my own program execution mechanism, in which one of the components (the region mapper) would be making its own requests for memory, I had overlooked that it would be receiving flexpages – an abstraction for mapped memory – and would not need to allocate a capability for each such item received. So, very quickly, the number of capabilities became exhausted for that component. The fix for this was fairly straightforward: just don’t allocate new capabilities in cases where flexpages are to be received.

The other obstacle involved the assignment of received message items. When a thread receives items, it should have declared how they should be assigned to capabilities by putting capability indexes into what are known as buffer registers (although they are really just an array in memory, in practice). A message transmitting items will cause the kernel to associate those items with the declared capability indexes, and then the receiving thread will itself record the capability indexes for its own purposes. What I had overlooked was that if, say, two items might be expected but if the first of these is “void” or effectively not transmitting a capability, the kernel does not skip the index in the buffer register that might be associated with that expected capability. Instead, it assigns that index to the next valid or non-void capability in the message.

Since my code had assumed that the kernel would associate declared capability indexes with items based on their positions in messages, I was discovering that my programs’ view of the capability assignments differed from that of the kernel, and so operations on the capabilities they believed to be valid were failing. The fix for this was also fairly straightforward: consume declared capability indexes in order, not skipping any of them, regardless of which items in the message eventually get associated with them.

Some Slightly More Tangible Results

After fixing things up, I started to make a bit more progress. I had wanted to take advantage of a bit more interactivity when testing the software, learning from experiences developing low-level software for various single-board computers. I also wanted to get programs to communicate via pipes. Previously, I had managed to get them to use an output pipe instead of just outputting to the console via the “log” capability, but now I also wanted to be able to present those pipes to other programs as those programs’ input pipes.

Getting programs to use pipes would allow them to be used to process, inspect and validate the output of other programs, hopefully helping with testing and validation of program behaviour. I already had a test program that was able to execute operations on the filesystem, and so it seemed like a reasonable idea to extend this to allow it to be able to run programs from the filesystem, too. Once I solved some of the problems I had previously created for myself, this test program started to behave a bit more like a shell.

The following potentially confusing transcript shows a program being launched to show the contents of a text file. Here, I have borrowed a command name from VMS – an operating system I probably used only a handful of times in the early 1990s – although “spawn” is a pretty generic term, widely used in a similar sense throughout modern computing. The output of the program is piped to another program whose role is to “clip” a collection of lines from a file or, as is the case here, an input stream and to send those lines to its output pipe. Waiting for this program to complete yields the extracted lines.

> spawn bin/cat home/paulb/LICENCE.txt
[0]+ bin/cat [!]
> pipe + bin/clip - 5 5
> jobs
[0]  bin/cat
[1]+ bin/clip [!]
> wait 1
Completed with signal 0 value 0
 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

Completed with signal 0 value 0
> jobs

Obviously, this is very rudimentary, but it should be somewhat useful for testing. I don’t want to get into writing an actual shell because this would be a huge task in itself, as is apparent when considering the operation of the commands illustrated above. The aim will be to port a shell once the underlying library functionality is mature enough. Still, I think it would be an amusing and tantalising prospect to define one’s own shell environment.

Sunday, 21 January 2024

Keeping the Flame Alive

Like my previous entry, it looks like I'm starting this year by noting that I looked at microcontrollers in 2023 but with not much activity visible on this site. Almost all the public activity was in my Inferno diary, though I also produced a submission for the 9th International Workshop on Plan 9 which I was unfortunately unable to attend in person.

The rant

One reason for being physically absent from IWP9 was work commitments, involving the ordeal of having to prepare for travel to a work event. This involved far too much preparation for very little return, including a booster vaccination that maybe I needed, but which my “employer” was characteristically vague about. In the end, the work event was a pointless exercise in the sort of performative corporate busyworking that even Nokia never managed to slip into during the years I was an employee there. Yes, an actual employee, not a self-employed contractor for a remote-first, try-to-appear-big, overgrown start-up pretending to be a proper corporation.

At least I met a few interesting colleagues in person before the end of my unproductive work experience four months later. Another section to slap on my CV that requires explaining to future employers.

The better things

More productive and interesting things also happened in 2023. As well as virtually attending and presenting at IWP9, I managed to port Inferno to more MicroMod boards, including the SAMD51, Artemis (Apollo3) and Teensy.

Nearer the end of the year, I started to automate builds of these ports, and others, with the results published here. I'm now looking at documenting the way these ports work in this repository. Hopefully, this will make Inferno porting easier for others to approach.

Categories: Inferno, Free Software

Thursday, 11 January 2024

KDE Gear 24.02 branches created

Make sure you commit anything you want to end up in the KDE Gear 24.02 releases to those branches.

Next Dates:

  •    January 31: 24.02 RC 2 (24.01.95) Tagging and Release
  •   February 21: 24.02 Tagging
  •   February 28: 24.02 Release

Friday, 05 January 2024

37c3 notes


It’s been a few years since the last Chaos Computer Congress. Like many other people, I highly enjoyed being there: meeting people, participating in discussions and a bit of hacking. Most of the things taking place at a Congress are quite difficult to describe in writing, and most of them happen outside of the presentation rooms. But still, I thought I should share at least some sessions I enjoyed.

💡 If you use Kodi, install the relevant add-on to watch these in comfort (or use any of the other apps)


  • Predator Files: How European spyware threatens civil society around the world
    A technical deep dive into Amnesty International’s investigation of the spyware alliance Intellexa, whose products are used by governments to infect the devices and infrastructure we all depend on.

  • Tech(no)fixes beware!
    Spotting (digital) tech fictions as a replacement for social and political change. As the climate catastrophe is imminent and global injustice is rising, a lot of new tech is supposed to help the transition to a sustainable society. Although some of it can actually help with parts of the transition, it is usually discussed not as a set of tools to assist the broader societal change but as a replacement for it.

  • A Libyan Militia and the EU - A Love Story?
    An open source investigation by Sea-Watch and other organisations into how the EU (either directly or through Frontex) is collaborating with the Tariq Ben Zeyad Brigade (TBZ), a notorious East Libyan land-based militia. TBZ was deeply involved in the failed passage of the boat that sank near Pylos, in which up to 500 people drowned.

  • Tractors, Rockets and the Internet in Belarus
    How the Belarusian authoritarian regime uses technology to repress its population. With the dropping costs of surveillance, smaller authoritarian regimes are gaining easier access to various "out of the box" security solutions, used mainly to further oppress people.

  • Please Identify Yourself!
    Focused mostly on the EU's eIDAS and India's Aadhaar, highlighting how digital identity systems proliferate worldwide without any regard for their human rights impact or privacy concerns. Driven by governments and the crony-capitalist solutionism peddled by the private sector, these identification systems are a frontal attack on anonymity in the online world and might lead to completely new forms of tracking and discrimination.

  • On Digitalisation, Sustainability & Climate Justice
    A critical talk about sustainability, technology, society, growth and ways ahead. Which digital tools make sense, which do not, and how can we achieve global social emancipation from self-destructive structures, towards ecological sustainability and a just world?

  • Energy Consumption of Datacenters
    Datacenter energy consumption has been growing exponentially for years. With the AI hype, this demand for energy, cooling and water has increased dramatically.

  • Software Licensing For A Circular Economy
    How Open Source Software connects to sustainability, based on the KDE Eco initiative. Concrete examples of how Free & Open Source Software licensing can disrupt the produce-use-dispose linear model of hardware consumption and enable the shift to a reduce-reuse-recycle circular model.

Self-organized sessions

Anyone who has participated in a Congress knows that there is a wide variety of workshops and self-organized sessions outside of the official curated talks. Most of them are not recorded, but I still thought I should share some highlights and thoughts in case people want to dig a bit deeper into these topics.


Some quick links on projects captured in my notes based on discussions during the Congress.

Friday, 15 December 2023

Ada & Zangemann in French, Support by Minister of Education and Fan Art

As you might have seen, the French version "Ada & Zangemann - Un conte sur les logiciels, le skateboard et la glace à la framboise" is now available from C & F éditions. On their website, you can also get access to the e-book, which thanks to the "French Department of Digital for Education" is available free of charge. Many things happened around the French version.

On 4 December 2023, the French Minister of Education, Gabriel Attal, presented the book "Ada & Zangemann" to the APFA parliamentarians meeting in the old Bonn parliament. Afterwards he gifted the book to Anke Rehlinger, Minister President of the Saarland (Germany). Below you see the French video (thanks to Erik da Silva and Dario Presutti for the several subtitles).

The book was covered in a radio interview by "Radio France", in several articles (including Le Monde and ZDNet) and blogs, as well as in TV coverage at Sqool TV with Alexis Kauffmann from the French Ministry of Education, the person who initiated the idea of the French translation.

I was honoured that David Revoy painted this great version of Ada, also published under the Creative Commons Attribution-ShareAlike licence (high resolution and source files also available).

Ada from "Ada & Zangemann" by David Revoy, based on "Ada & Zangemann" written by Matthias Kirschner and illustrated by Sandra Brandstätter − CC-BY-SA 4.0

David does really great illustration work: his comic Pepper & Carrot, as well as other great projects, for example for the French organisation Framasoft to promote software freedom. He also publishes his work under the Creative Commons Attribution-ShareAlike licence.

Furthermore, if you are interested in how to create artwork with Free Software, using tools like Krita, also check out his website, where he publishes a lot of tutorials and tools. Below you see his recording of the progress of the Ada illustration, which he published on the fediverse.

Rayan (Alès), Matteo (Besançon), Rozenn (Guingamp), Louna (Paris)... more than a hundred students, aged 13 to 19, from four different schools in France, translated this book from German into French over the course of the 2022-2023 school year, sharing the work and coordinating it using online tools.

On 7 December, Sandra Brandstätter, the illustrator, and I were invited to participate in an online meeting organised by the French Ministry of Education, attended by several of the pupils who translated the book. It was amazing to hear that they had spent several weeks on the project, including writing their own story when they had only seen the book's illustrations, discussing the characters Ada and Zangemann, the connection of the story to real-world software development, and of course the translation itself. It was great to have the chance to be there with them and thank some of them personally.

Participants of the meeting with teachers and pupils from different school classes

I would like to thank all the people who help to promote the book. Especially, thank you to Alexis Kauffmann from the French Ministry of Education and founder of Framasoft for initiating this project, C & F éditions for publishing it, ADEAF (Association pour le développement de l'enseignement de l'allemand en France) for coordinating the project, the teachers for putting so much time and energy into the project, and most importantly all the pupils who did such great work!

Thursday, 14 December 2023

Revisiting L4Re System Development Efforts

I had been meaning to return to my investigations into L4Re, running programs in a configurable environment, and trying to evolve some kind of minimal computing environment, but other efforts and obligations intervened and rather delayed such plans. Some of those other efforts had been informative in their own way, though, giving me a bit more confidence that I might one day get to where I want to be with all of this.

For example, experimenting with various hardware devices had involved writing an interactive program that allows inspection of the low-level hardware configuration. Booting straight after U-Boot, which itself provides a level of interactive support for inspecting the state of the hardware, this program (unlike a weighty Linux payload) facilitates a fairly rapid, iterative process of developing and testing device driver routines. I had believed that such interactivity via the text console was more limited in L4Re, and so this opens up some useful possibilities.

But as for my previous work paging in filesystem content and running programs from the filesystem, it had been deferred to a later point in time with fewer distractions and potentially a bit more motivation on my part, particularly since it can take a while to be fully reacquainted with a piece of work with lots of little details that are easily forgotten. Fortuitously, this later moment in time arrived in conjunction with an e-mail I received asking about some of the mechanisms in L4Re involved with precisely the kinds of activities I had been investigating.

Now, I personally do not regard myself as any kind of expert on L4Re and its peculiarities: after a few years of tinkering, I still feel like I am discovering new aspects of the software and its design, encountering its limitations in forms that may be understandable, excusable, both, or neither of these things. So, I doubt that I am any kind of expert, particularly as I feel like I am muddling along trying to implement something sensible myself.

However, I do appreciate that I am possibly the only person publicly describing work of this nature involving L4Re, which is quite unfortunate from a technology adoption perspective. It may not matter one bit to those writing code for and around L4Re professionally whether anyone talks about the technology publicly, and there may be plenty of money to be made conducting business as usual for such matters to be of any concern whatsoever, but history suggests that technologies have better chances of success (and even survival) if they are grounded in a broader public awareness.

So, I took a bit of time trying to make sense out of what I already did, this work being conducted most intensively earlier in the year, and tried to summarise it in a coherent fashion. Hopefully, there were a few things of relevance in that summary that benefited my correspondent and their own activities. In any case, I welcome any opportunity to constructively discuss my work, because it often gives me a certain impetus to return to it and an element of motivation in knowing that it might have some value to others.

I am grateful to my correspondent for initiating this exercise as it required me to familiarise myself with many of the different aspects of my past efforts, helping me to largely pick up where I had left off. In that respect, I had pretty much reached a point of demonstrating the launching of new programs, and at the time I had wanted to declare some kind of success before parking the work for a later time. However, I knew that some tidying up would be required in some areas, and there were some features that I had wanted to introduce, but I had felt that more time and energy needed to be accumulated before facing down the implementation of those features.

The first feature I had in mind was that of plumbing programs or processes together using pipes. Since I want to improve testing of this software, and since this might be done effectively by combining programs, having some programs do work and others assess the output produced in doing this work, connecting programs using pipes in the Unix tradition seems like a reasonable approach. In L4Re, programs tend to write their output to a “log” capability which can be consumed by other programs or directed towards the console output facility, but the functionality seems quite minimal and does not seem to lend itself readily to integration with my filesystem framework.
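As a rough sketch of the testing arrangement described above in ordinary Unix shell terms, with standard tools standing in for the hypothetical worker and checker programs:

```shell
# One program produces output, another consumes and assesses it via a pipe.
# Here, seq plays the worker and awk the checker: the checker exits with
# success only if exactly five lines arrived through the pipe.
seq 1 5 | awk 'END { exit (NR == 5 ? 0 : 1) }' && echo "test passed"
```

This is the kind of plumbing between programs that the pipe mechanism would make possible within the framework.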

Previously, I had implemented a pipe mechanism using shared memory to transfer data through pipes, this being to support things like directory listings yielding the contents of filesystem directories. Consequently, I had the functionality available to be able to conveniently create pipes and to pass their endpoints to other tasks and threads. It therefore seemed possible that I might create a pipe when creating a new process, passing one endpoint to the new process for it to use as its output stream, retaining the other endpoint to consume that output.

Having reviewed my process creation mechanisms, I determined that I would need to modify them so that the component involved – a process server – would accept an output capability, supplying it to a new process in its environment and “mapping” the capability into the task created for the process. Then, the program to be run in the process would need to extract the capability from its environment and use it as an output stream instead of the conventional L4Re output functionality, this being provided by L4Re’s native C library. Meanwhile, any process creating another would need to monitor its own endpoint for any data emitted by the new process, also potentially checking for a signal from the new process in the event of it terminating.

Much of this was fairly straightforward work, but there was some frustration in dealing with the lifecycles of various components and capabilities. For example, it is desirable to be able to have the creating process just perform a blocking read over and over again on the reading endpoint of the pipe, only stopping when the endpoint is closed, with this closure occurring when the created process terminates.
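The desired reader behaviour has a direct shell analogue: a loop of blocking reads that only terminates when the writer closes its end of the pipe and end-of-file is delivered:

```shell
# The subshell stands in for the created process writing to the pipe; the
# while loop blocks on each read and exits once the writer terminates and
# the pipe delivers end-of-file.
( echo alpha; echo beta ) | while read -r line; do
    echo "got: $line"
done
```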

But there were some problems with getting the writing endpoint of the pipe to be discarded by the created process, even if I made the program being run explicitly discard or “unmap” the endpoint capability. It turned out that L4Re’s capability allocator is not entirely useful when dealing with capabilities acquired from the environment, and the task API is needed to do the unmapping job. Eventually, some success was achieved: a test program could now launch another and consume the output produced, echoing it to the console.

The next step, of course, is to support input streams to created processes and to potentially consider the provision of an arbitrary number of streams, as opposed to prescribing a fixed number of “standard” streams. Beyond that, I need to return to introducing a C library that supports my framework. I did this once for an earlier incarnation of this effort, putting Newlib on top of my own libraries and mechanisms. On this occasion, it might make sense to introduce Newlib initially only for programs that are launched within my own framework, letting them use C library functions that employ these input and output streams instead of calling lower-level functions.

One significant motivation for getting program launching working in the first place was to finally make Newlib usable in a broad sense, completing coverage of the system calls underpinning the library (as noted in its documentation) not merely by supporting low-level file operations like open, close, read and write, but also by supporting process-related operations such as execve, fork and wait. Whether fork and the semantics of execve are worth supporting is another matter, however, these being POSIX-related functions, and perhaps something like the system function (in stdlib.h, part of the portable C process control functions) would be adequate for portable programs.

In any case, the work will continue, hopefully at a slightly quicker pace as the functionality accumulates, with existing features hopefully making new features easier to formulate and to add. And hopefully, I will be able to dedicate a bit more time and attention to it in the coming year, too.

Sunday, 10 December 2023

🌓 Commandline dark-mode switching for Qt, GTK and websites 🌓

This post documents how to toggle your entire desktop between light and dark themes, including your apps and the websites in your browser.


Like many other people, I use my computer(s) with varying degrees of ambient light. When there is lots of light, I want a bright theme, but in the evenings, I prefer a dark theme. Switching this for Firefox and several toolkits manually almost drove me crazy, so I will document here how I automated the entire process.

I use the Sway window manager, which makes things a bit more difficult, because neither GNOME’s nor KDE’s UI unification mechanisms automatically kick in. I use Firefox as a browser, and I also want the websites to switch themes. And of course, I want the theme switch to be applied immediately and not just to restarted apps.


This is what it looks like when it’s done. Dolphin (KDE5), Firefox, the website inside Firefox, and GEdit (GTK) all switch together.

Primary script


#!/bin/sh

current=$(gsettings get org.gnome.desktop.interface color-scheme)

if [ "${current}" != "'prefer-dark'" ]; then # default

    echo "Switching to dark."
    gsettings set org.gnome.desktop.interface color-scheme prefer-dark
    gsettings set org.gnome.desktop.interface gtk-theme Adwaita-dark
    gsettings set org.gnome.desktop.interface icon-theme breeze-dark

else # already dark

    echo "Switching to light."
    gsettings set org.gnome.desktop.interface color-scheme default
    gsettings set org.gnome.desktop.interface gtk-theme Adwaita
    gsettings set org.gnome.desktop.interface icon-theme breeze

fi

This is the primary script. It works by manipulating the gsettings, so we will have to make everything else follow these settings. The script operates in a toggle-mode, i.e. running it repeatedly switches between light and dark. I had hoped that the color-scheme preference would be the only thing needing change, but the gtk-theme needs to also be switched explicitly. I am not aware of any theme other than Adwaita that works on all toolkits.

Switching the icon-theme is not necessary, but recommended. To get a list of installed icon themes, ls /usr/share/icons.


This is the list of packages I installed on Ubuntu 23.10. Note that if you are missing certain packages, things will silently fail to work without telling you why. I started this install with a Kubuntu ISO, so depending on your setup, you might need to install more packages; e.g. libglib2.0-bin provides the gsettings binary.

Package list:

libadwaita                  # GTK3 theme (auto-installed)
adwaita-qt                  # Qt5 theme
adwaita-qt6                 # Qt6 theme
gnome-themes-extra          # GTK2 theme
gnome-themes-extra-data     # GTK2 theme and GTK3 dark theme support
qgnomeplatform-qt5          # Needed to tell Qt5 and KDE to use gsettings
#qgnomeplatform-qt6         # If your distro has it

I am not exactly sure where the GTK4 theme comes from, and I have no app to test that. If you want to use the breeze icon theme, also install breeze-icon-theme.


GTK apps

You should already be able to switch GTK apps by running the script. Give it a try!

Firefox app

Firefox should also switch its own theme after invoking the script. If it does not, check the following:

  • Your XDG session is treated by Firefox as being GNOME or something similar.1
  • Go to about:addons, then “Themes” and make sure you have selected “System-Theme (automatic)”. 2
  • Go to about:support and look for “Window Protocol”. It should list wayland. If it does not, restart your Firefox with MOZ_ENABLE_WAYLAND=1 set in the environment.
  • Go to about:support and look for “Operating System theme”. It should list Adwaita / Adwaita. If it does not, you are likely missing some crucial packages.
  • Double-check the gnome-themes-extra or similar packages on your distro. I didn’t have these initially and it prevented Firefox from picking up the theme.

I haven’t tried any of this with Chromium, but I might at some point in the future.

Firefox (websites)

Next are the websites inside Firefox. Make sure your Firefox propagates its own theme settings to its websites:

Websites should now respect your system’s theme. However, running our script will not affect open tabs; you need to reload the tab or open a new tab to see the effects.

Many other sites do not have a dark theme, though, or do not apply it automatically. To change these sites, install the great Dark Reader Firefox plugin!

Configure the plugin for automatic behaviour based on the system colours (as shown above). Now is the time to test the script again! Websites controlled by Dark Reader should update immediately without a refresh. This is one reason to prefer Dark Reader’s handling over a site’s native switching.3 If this is the behaviour you want, make sure that websites are not disabled by default in Dark Reader (configurable through the small ⚙ under the website URL in the plugin popup); this is the case for some sites.

An option you might want to play with is found under “more → Change Browser Theme”. This makes the plugin control the Firefox application theme. This is a bit of a logic loop (script changes Firefox theme → triggers Plugin → triggers update of theme), but it often works well and usually gives a slightly different “dark theme look” for the application.

Qt and KDE apps

There are multiple ways to make Qt5 and KDE apps look like GTK apps:

  1. Select the Adwaita / Adwaita-Dark theme as the native Qt theme (QT_QPA_PLATFORMTHEME=Adwaita / QT_QPA_PLATFORMTHEME=Adwaita-dark)
  2. Select “gtk2” as the native Qt theme (QT_QPA_PLATFORMTHEME=gtk2)
  3. Select “gnome” as the native Qt theme (QT_QPA_PLATFORMTHEME=gnome)

All of these work to a certain degree, and I would have liked to use the first option. But with neither 1 nor 2 was I able to achieve “live switching” of already-open applications upon invocation of the script. In theory, one should also be able to use KDE’s lookandfeeltool to switch between the native Adwaita and Adwaita-dark themes (or any other pair of themes), but I was not able to make this work reliably.3

Note that for Qt6 applications to switch theme with the rest, qgnomeplatform-qt6 needs to be installed, which is not available on Ubuntu. Other platforms (like Arch’s AUR) seemed to have it, though.

Note also that in Sway you need to make sure that QT_QPA_PLATFORMTHEME is defined in the context where your applications are started. This is typically not the case within the sway config, so I do the following:

bindsym $mod+space exec /home/hannes/bin/preload_profile krunner

Where preload_profile executes all arguments given to it, but sources ~/.profile first.
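A minimal preload_profile might look like this (a sketch; the actual script may do more):

```shell
#!/bin/sh
# preload_profile: source ~/.profile, where variables such as
# QT_QPA_PLATFORMTHEME are exported, then replace this shell with the
# requested command and its arguments.
. "$HOME/.profile"
exec "$@"
```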

Possible extensions

Screen brightness

Before I managed to set up theme switching correctly, I used a script to control brightness. Now that theme switching works, I don’t do this anymore, but in case you want this additionally:

  • You can use brightnessctl to adjust the brightness of the built-in screen of your Laptop.
  • You can use ddcutil to adjust the brightness of an external monitor (this affects actual display brightness, not gamma).


If desired, you could automate theme switching with cron or map hotkeys to the script.
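For example, crontab entries like the following would flip the theme at fixed times (assuming the toggle script is saved as ~/bin/toggle-theme, a hypothetical path; since the script toggles rather than setting a specific mode, a missed run would invert the schedule):

```shell
# crontab -e entries: toggle in the morning and in the evening
0 8  * * * $HOME/bin/toggle-theme
0 20 * * * $HOME/bin/toggle-theme
```

A Sway hotkey works the same way, e.g. bindsym $mod+F12 exec ~/bin/toggle-theme.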

Closing remarks

I am really happy I got this far; the only thing that does not update live is the icon theme in KDE applications. If anyone has advice on that, I would be grateful!

I have used the method of having everything behave as if it were on GNOME here. In theory, it should also be possible to set the XDG portal to kde and use lookandfeeltool instead of gsettings, but I did not yet manage to make that work. If you have, please let me know!

  1. I have verified that an XDG desktop portal of wlr or gtk works, and also that a value of kde does not work; so this won’t work within a KDE session. ↩︎

  2. If you get spurious flashes of white between website loading or tab-switches, you can later switch this to the “Dark” theme and it should still turn bright when in global bright mode. ↩︎

  3. Native dark themes may or may not look better than whatever Dark Reader is doing. I often prefer Dark Reader, because it allows backgrounds that are not fully black. ↩︎ ↩︎

Friday, 08 December 2023

Avoiding nonfree DRM’d Games on GNU/Linux – International Day Against DRM

As a proud user of an FSF-certified Talos II mainboard and some Rockchip SBCs, I find that it has become easier to avoid using Steam, Valve’s platform for distribution of nonfree computer games with Digital Restrictions Management.

Since I cannot (and don’t want to) play any of the non-free games from Steam, I have begun developing my own games that bring freedom to the users. Some of those games are “clones” of popular nonfree VR games such as Beat Saber and VRChat.

I’m also going to sell copies of those games and hardware that I am currently working on. The games are copylefted Free Software and the hardware is designed with the Respects Your Freedom Certification in mind.

For me there is an ethical imperative to make the game art and hardware designs free too. I don’t think that those things have to be copylefted, as most GNU software is. The distribution service/site must also be ethical, which means that it is not SaaSS and does not send any non-free JavaScript.

I also plan to provide Windows binaries, cross compiled using MinGW and tested on Proton on my Opteron system. My goal here is giving users of Windows a taste of freedom.

I replaced Windows with GNU/Linux a long time ago and want to encourage gamers to do the same. The first free game that I had on Windows 3.1 as a child was GNU Chess. At that time I had never heard about Linux and did not know what GNU was. But I started learning to program and wanted to make use of freedom 1.

Today I use GNU Guix which can run on any GNU/Linux distro and even Android. No nonfree software is needed to run libsurvive and spreadgine, so both can be included in Guix. Instead of Steam, I now use Guix for gaming.

Games included in Guix respecting freedom does not mean that users never pay to play them. Guix has substitutes for local builds, and users could either pay for those substitutes or build the game locally. Even when the artwork is non-free, downloading the artwork could be done without running any non-free JavaScript or other proprietary malware. The FSF* could run crowdfunding campaigns for freedom-respecting games and host game servers on hardware that has been RYF certified.

People often think it is not feasible in the current situation to develop a free replacement for some of the most popular nonfree VR games, including “VRChat”. But projects such as V-Sekai have proven that this is not the case: free games can be developed, and users who value freedom will only play those free games and reject the nonfree ones.

Since I want to promote the cause of freedom in gaming, I am setting up a website which lists only libre games that can run on GNU/Linux and/or liberated consoles. The page includes integration for GNU Taler so that users can donate or buy games and/or RYF-certified gaming hardware, including a future Guix Deck.

Firefox and Monospaced Fonts

This has been going on for years, but a recent upgrade brought it to my attention and it rather says everything about what is wrong with the way technology is supposedly improved. If you define a style for your Web pages using a monospaced font like Courier, Firefox still decides to convert letter pairs like “fi” and “fl” to ligatures. In other words, it squashes the two letters together into a single character.

Now, I suppose that it does this in such a way that the resulting ligature only occupies the space of a single character, thereby not introducing proportional spacing that would disrupt the alignment of characters across lines, but it does manage to disrupt the distribution of characters and potentially the correspondence of characters between lines. Worst of all, though, this enforced conversion is just ugly.

Here is what WordPress seems to format without suffering from this problem, by explicitly using the “monospace” font-style identifier:

long client_flush(file_t *file);

And here is what happens when Courier is chosen as the font:

long client_flush(file_t *file);

In case theming, browser behaviour, and other factors obscure the effect I am attempting to illustrate, here it is with the ligatures deliberately introduced:

long client_flush(file_t *file);

In fact, the automatic ligatures do remain as two distinct letters crammed into a single space whereas I had to go and find the actual ligatures in LibreOffice’s “special character” dialogue to paste into the example above. One might argue that by keeping the letters distinct, it preserves the original text so that it can be copied and pasted back into a suitable environment, like a program source file or an interactive prompt or shell. But still, when the effect being sought is not entirely obtained, why is anyone actually bothering to do this?

It seems to me that this is yet another example of “design” indoctrination courtesy of the products of companies like Apple and Adobe, combined with the aesthetics-is-everything mentality that values style over substance. How awful it is that someone may put the letter “f” next to the letter “i” or “l” without pulling them closer together and using stylish typographic constructs!

Naturally, someone may jump to the defence of the practice being described here, claiming that what is really happening is kerning, as if someone like me might not have heard of it. Unfortunately for them, I spent quite a bit of time in the early 1990s – quite possibly before some of today’s “design” gurus were born – learning about desktop publishing and typography (for a system that had a coherent outline font system before platforms like the Macintosh and Windows did). Generally, you don’t tend to apply kerning to monospaced fonts like Courier: the big hint is the “monospaced” bit.

Apparently, the reason for this behaviour is something to do with the font library being used and it will apparently be fixed in future Firefox releases, or at least ones later than the one I happen to be using in Debian. Workarounds using configuration files reminiscent of the early 2000s Linux desktop experience apparently exist, although I don’t think they really work.

But anyway, well done to everyone responsible for this mess, whether it was someone’s great typographic “design” vision being imposed on everyone else, or whether it was just that yet more technologies were thrown into the big cauldron and stirred around without any consideration of the consequences. I am sure yet more ingredients will be thrown in to mask the unpleasant taste, also conspiring to make all our computers run more slowly.

Sometimes I think that “modern Web” platform architects have it as their overriding goal to reproduce the publishing solutions of twenty to thirty years ago using hardware hundreds or even thousands of times more powerful, yet delivering something that runs even slower and still producing comparatively mediocre results. As if the aim is to deliver something akin to a turn-of-the-century Condé Nast publication on the Web with gigabytes of JavaScript.

But maybe, at least for the annoyance described here, the lesson is that if something is barely worth doing, largely because it is probably only addressing someone’s offended sense of aesthetics, maybe just don’t bother doing it. There are, after all, plenty of other things in the realm of technology and beyond that more legitimately demand humanity’s attention.

Saturday, 18 November 2023

Experiments with a Screen

Not much to report, really. Plenty of ongoing effort to overhaul my L4Re-based software support for the MIPS-based Ingenic SoC products, plus the slow resumption of some kind of focus on my more general framework to provide a demand-paged system on top of L4Re. And then various distractions and obligations on top of that.

Anyway, here is a picture of some kind of result:

MIPS Creator CI20 and Pirate Audio Mini Speaker board

The MIPS Creator CI20 driving the Pirate Audio Mini Speaker board’s screen.

It shows the MIPS Creator CI20 using a Raspberry Pi “hat”, driving the screen using the SPI peripheral built into the CI20’s JZ4780 SoC. Although the original Raspberry Pi had a 26-pin expansion header that the CI20 adopted for compatibility, the Pi range then adopted a 40-pin header instead. Hopefully, there weren’t too many unhappy vendors of accessories as a result of this change.

What this means for the CI20 is that its primary expansion header cannot satisfy the requirements of the expansion connector provided by this “hat” or board in its entirety. Instead, 14 pins of the board’s connector are left unconnected, with the board hanging over the side of the CI20 if mounted directly. Another issue is that the board’s pinout repurposes one pin as a data/command pin instead of its designated function as an SPI data input pin. That the Raspberry Pi can perhaps configure itself to use this pin efficiently in this way might explain the choice, but it isn’t compatible with the way such pins are assigned on the CI20.

Fortunately, the CI20’s designers exposed a SPI peripheral via a secondary header, including a dedicated data/command pin, meaning that a few jumper wires can connect the relevant pins to the appropriate connector pins. After some tedious device driver implementation and accompanying frustration, the screen could be persuaded to show an image. With the SPI peripheral being used instead of “bit banging”, or driving the data transfer to the screen controller directly in software, it became possible to use DMA to have the screen image repeatedly sent. And with that, the screen can be used to continuously reflect the contents of a generic framebuffer, becoming like a tiny monitor.
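The data/command pin mentioned above frames every transfer: the screen controller interprets each incoming byte as a command or as pixel data depending on the pin level. Here is a toy model of that framing; the opcode and pixel bytes are hypothetical placeholders, and this is not the actual L4Re driver code:

```python
# Toy model of a SPI screen transfer framed by a data/command (D/C) pin.
# Command bytes are clocked out with D/C low; the bytes that follow are
# clocked out with D/C high. The opcode below is a hypothetical example.

def send(command, data):
    """Return the (dc_level, byte) pairs a driver would clock out."""
    frames = [(0, command)]            # D/C low: this byte is a command
    frames += [(1, b) for b in data]   # D/C high: these bytes are data
    return frames

# e.g. a hypothetical "write memory" opcode followed by two 16-bit pixels
frames = send(0x2C, [0x07, 0xE0, 0xF8, 0x00])
```

With DMA, the data part of such a frame can be sent repeatedly from a framebuffer without the CPU touching each byte, which is what turns the screen into a tiny monitor.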

The board also has a speaker that can be driven using I2S communication. The CI20 doesn’t expose I2S signals via the header pins, instead routing I2S audio via the HDMI connector, analogue audio via the headphone socket, and PCM audio via the Wi-Fi/Bluetooth chip, presumably supporting Bluetooth audio. Fortunately, I have another means of testing the speaker, so I didn’t waste half of my money buying this board!

Thursday, 26 October 2023

FSFE information stall on Veganmania Rathausplatz 2023

FSFE information stall in October 2023 in Vienna on Rathausplatz
Detail of FSFE information stall in October 2023 in Vienna on Rathausplatz

To celebrate the 25th anniversary of the Veganmania summer festivals, a third instance of the festival took place this year, between 6 and 8 October. It was the biggest one yet. And once more, volunteers from the local FSFE supporters group staffed an information stall about free software and the importance of independence on our computing devices.

It is somewhat odd that over the years it has become somewhat daunting to describe the tried-and-tested ingredients of a very successful information desk again and again. But once more we could use this opportunity to talk to many interested passers-by and tell them about the advantages free software grants its users, and how important this is in a society that increasingly interacts through electronic devices. It shouldn’t surprise anyone any more how well the comparatively big and thick Public Money, Public Code brochures are received. I would never have guessed that such a large and rather technical take-away would find so many people appreciating it enough to carry it around with them for hours. The latest batch of materials ordered for this event was exhausted before the street festival concluded, even though I have long since switched to placing especially large orders, because the usual information desk packages the FSFE office suggests wouldn’t last even two hours at the events we take part in. Luckily we could at least re-stock our most important orange leaflet, which lists ten of the most widely used GNU distributions along with a few words explaining their differences, mid-way through the event: a print shop was open on the weekend, so we could print it early in the morning before people showed up for the street festival.

Despite taking place in autumn, the weather was mostly very mild. Unfortunately, on the last day the rain still caught up with us, slowly growing from a very light drizzle into proper rain. It was hard to decide when we should actually pack up our information material to protect it from being destroyed, especially because the people at the festival didn’t seem to mind the weather and kept visiting our desk. But at some point sheltering the leaflets under our umbrella wasn’t good enough any longer, and we needed to remove the soaked top layers of material and finally pack up. So we quit an hour early, but probably didn’t sacrifice much potential. At least this way we ourselves had the chance to actually grab some delicious food from the plentiful offerings at the Veganmania.

Our posters have become rather worn out over the years and don’t look very good any longer. We need to replace them, ideally made out of more resilient material than paper. Then the constant putting up with sellotape and taking down after our information desks are done shouldn’t have an irreversible effect on them any longer.

But that wasn’t all that came of the event: this time the manager of a volunteer-led local museum was quick to follow up on our recommendation not to throw away older laptops that can no longer run their pre-installed proprietary operating system. Keeping the old system wouldn’t be a good idea anyway, since it no longer receives security updates. So, a few days after the event we installed GNU systems with a lightweight desktop on four laptops. She also asked for a talk at the museum where we could explain the value of free software, and even suggested holding a lottery among the attendees, with two of the revived devices as prizes. It is planned to happen at some point next year. We are looking forward to it.

Promote Digital Maturity by reading books to others

In July this year I participated in Digitalcourage's "Aktivcongress", a yearly meeting for activists in Germany. Digitalcourage has been campaigning for a liveable world in the digital age since 1987. I took part in sessions about Free Software, gave a lightning talk about the "I love Free Software day", and did two readings of the book "Ada & Zangemann - A Tale of Software, Skateboards, and Raspberry Ice Cream" to show participants from other organisations how they could use the book to better accomplish their goals.

Reading by myself at a school in Boston, US

The feedback about the book was great, especially about the fact that all the materials for reading aloud yourself (presentation slides with the illustrations, and the texts with markers for changing slides) are available in our git repository. Thanks to the many contributors, the book is now available in Arabic, Danish, English, French, German, Italian, Ukrainian, and Valencian, and more translation efforts are going on.

Furthermore, I had many interesting conversations with Jessica Wawrzyniak and Leena Simon, who have written books about digital topics in German. So we thought we should team up and raise awareness about the books, and about the ways you can use them to foster digital rights.

One occasion to make use of those books is Friday, 17 November, the nationwide Read Aloud Day in Germany, when everyone is encouraged to read to children in a kindergarten, school, library or other social institution. Together with Digitalcourage e.V., the FSFE supports this nice tradition, and we are promoting the reading of available books that highlight important aspects of digital rights on this day.

As Jessica Wawrzyniak, media educator at Digitalcourage, wrote: "Topics relating to digitization, media use and data protection are not yet sufficiently addressed in daycare centers, schools and social institutions, but they can be addressed in a child-friendly way."

The largest cinema room in Offenburg before the reading of "Ada&Zangemann" to over 150 3rd graders from Offenburg schools.

In recent months, I have read the book to over 1000 children. It was always deeply fulfilling to discuss the book with the participants and to see how motivated they are afterwards to tinker and be creative, while still thinking about topics like equality, inclusion, democracy and activism.

Children and young people should be encouraged to stand up for their basic rights, including the right to informational self-determination, and to shape the information technology world in a self-determined and responsible way -- which includes Free Software.

So we encourage you to grab one of those books, or others you enjoy which are suitable for reading to children and young adults, read it to others, and have discussions with them.

If you live in Germany, you can use 17 November for this. But do not feel limited by that: reading books to others and discussing topics you feel are important for society should not be limited to one day.

Thursday, 19 October 2023

Google Summer of Code Mentor Summit 2023

This past weekend I attended the Google Summer of Code Mentor Summit 2023 as part of the KDE delegation.



I have been a mentor for GSOC almost every year since 2005 but this was my first time attending the mentor summit.


There were sessions about the typical things you'd expect: how to attract more diverse students, how to make sure we onboard them correctly, sustainability, funding, etc. All in all, nothing groundbreaking, and sadly no genius solution to the issues we face emerged, but to a certain degree it helps to see that most of us have similar problems: it's not that we're doing things particularly wrong, it's just that running a Free Software project is tough.

Carl Schwan and I ran a Desktop Linux session together with Jonathan Blandford of GNOME (check out his Crosswords game, it seems pretty nice) and basically asked folks "How happy are you with the Linux desktop?". You can find the notes about it online. Nothing we don't already know about, really: Wayland and flatpak/snap are still a bit painful for some folks, even if there's general agreement that they are good ideas.

I also organized a little session for all the attendees from Barcelona (there were about 6 of us) to pitch them the idea of giving a talk at Barcelona Free Software.

One thing that always pops up in your mind when going to events is "how useful was it for me to attend?", since travelling to California from Europe is not easy, it is not cheap, and it means investing quite some time (which in my case included taking vacation from work).


Honestly, I think it was quite useful, and we should attend more similar events. We get to know key people from other projects and we make sure other projects know about us. One of the funniest interactions: I was sitting at a table, someone joined and said "Woah, KDE, you guys are super famous, love your work", and literally seconds later another person joined us and said "Uh, KDE, what is that?"


There aren't many pictures because Google forbids taking pictures inside their buildings. The few exceptions include the chocolate table; we got to try quite a large quantity of chocolate. Thanks Robert from MusicBrainz for pushing people to bring it :)

I'd like to thank Google and KDE e.V. for sponsoring my trip to the Summit, please donate at

Wednesday, 18 October 2023

KDE February Mega Release schedule (the thing with Qt6 on it)

The next release for the big three in KDE land (KDE Frameworks, KDE Plasma and KDE Gear) is going to happen at the same time.

This is because we are switching to Qt6[*] and it helps if we can release all the products  at the same time.

If you want to help us with the effort, make sure to donate at

The agreed schedule is:

8 November 2023: Alpha

KDE Gear 24.01.75 / KDE Plasma 5.80.0 / KDE Frameworks 5.245.0

29 November 2023: Beta 1

KDE Gear 24.01.80 / KDE Plasma 5.90.0 / KDE Frameworks 5.246.0

20 December 2023: Beta 2

KDE Gear 24.01.85 / KDE Plasma 5.91.0 / KDE Frameworks 5.247.0

10 January 2024: Release Candidate 1

KDE Gear 24.01.90 / KDE Plasma 5.92.0 / KDE Frameworks 5.248.0

KDE Gear apps that want to ship with Qt6 in this release need to be switched to Qt6 (and obviously be stable) *BEFORE* this date.

31 January 2024: Release Candidate 2

KDE Gear 24.01.95 / KDE Plasma 5.93.0 / KDE Frameworks 5.249.0

21 February 2024: Private Tarball Release

KDE Gear 24.02.0 / KDE Plasma 6.0 / KDE Frameworks 6.0

28 February 2024: Public Release

KDE Gear 24.02.0 / KDE Plasma 6.0 / KDE Frameworks 6.0 


You can see that Alpha is less than 3 weeks away! Interesting times ahead!
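As a side note, the KDE Gear micro version numbers in the schedule above follow a recognisable pattern from pre-release to final. The mapping below just encodes my reading of the table, not an official KDE versioning rule:

```python
# Map a KDE Gear version from the schedule above to its milestone, based
# on the micro version pattern visible in the table (my reading only).
MILESTONES = {75: "Alpha", 80: "Beta 1", 85: "Beta 2",
              90: "Release Candidate 1", 95: "Release Candidate 2",
              0: "Final"}

def milestone(version):
    micro = int(version.split(".")[2])
    return MILESTONES.get(micro, "unknown")
```

So "24.01.90" reads as the first release candidate of what will be released as 24.02.0.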


[*]  some KDE Gear apps may remain in Qt5 if we have not had time to port them

Wednesday, 27 September 2023

Am I using qmllint wrong? Or is it still not there?

Today I was doing some experiments with qmllint hoping it would help us make QML code more robust.

I created a very simple test which is basically a single QML file that creates an instance of an object I've created from C++.

But when running qmllint via the all_qmllint target, it tells me:

Warning: Main.qml:14:9: No type found for property "model". This may be due to a missing import statement or incomplete qmltypes files. [missing-type]
        model: null
Warning: Main.qml:14:16: Cannot assign literal of type null to QAbstractItemModel [incompatible-type]
        model: null

Which is a relatively confusing error: it first says that it doesn't know what the model property is, but then says "the model property is a QAbstractItemModel and you can't assign null to it".

Here is the full code in case you want to fully reproduce it, but first some samples of what I think is important.


import QtQuick
import QtQuick.Window

import untitled1 // This is the name of my import

Window {
    // things
    ObjectWithModel {
        model: null
    }
}

HEADER FILE (there's nothing interesting in the cpp file)

#pragma once

#include <QtQmlIntegration>
#include <QAbstractItemModel>
#include <QObject>

class ObjectWithModel : public QObject {
    Q_OBJECT
    QML_ELEMENT
    Q_PROPERTY(QAbstractItemModel* model READ model WRITE setModel NOTIFY modelChanged)

public:
    explicit ObjectWithModel(QObject* parent = nullptr);

    QAbstractItemModel* model() const;
    void setModel(QAbstractItemModel* model);

signals:
    void modelChanged();

private:
    QAbstractItemModel* mModel = nullptr;
};


cmake_minimum_required(VERSION 3.16)
project(untitled1 VERSION 0.1 LANGUAGES CXX)
find_package(Qt6 6.4 REQUIRED COMPONENTS Quick)

qt_add_executable(appuntitled1 main.cpp)

qt_add_qml_module(appuntitled1
    URI untitled1 VERSION 1.0
    QML_FILES Main.qml
    SOURCES ObjectWithModel.h ObjectWithModel.cpp
)

target_link_libraries(appuntitled1 PRIVATE Qt6::Quick)  

As you can see, it's quite simple and, as far as I know, uses the recommended way of setting up a QML module for a standalone app.


But maybe I am holding it wrong?

Friday, 22 September 2023

Seafile Mirror - Simple automatic backup of your Seafile libraries

I have been using Seafile for years to host and synchronise files on my own server. It’s fast and reliable, especially when dealing with a large number and size of files. But making reliable backups of all its files isn’t so trivial. This is because the files are stored in a layout similar to bare Git repositories, and Seafile’s headless tool, seafile-cli, is… suboptimal. So I created what started out as a wrapper for it and ended up as a full-blown tool for automatically synchronising your libraries to a backup location: Seafile Mirror.

My requirements

Of course, you could just take snapshots of the whole server, or copy the raw Seafile data files and import them into a newly created Seafile instance as a disaster recovery, but I want to be able to directly access the current state of the files whenever I need them in case of an emergency.

It was also important for me to have a snapshot, not just another real-time sync of a library. This is because I also want to have a backup in case I (or an attacker) mess up a Seafile library. A real-time sync would immediately fetch that failed state.

I also want to take a snapshot at a configurable interval. Some libraries should be synchronised more often than others. For example, my picture albums do not change as often as my miscellaneous documents, but they use at least 20 times the disk space and therefore network traffic when running a full sync.

Also, the backup service must have read-only access to the files.

A version controlled backup of the backup (i.e. the plain files) wasn’t in scope. I handle this separately by backing up my backup location, which also contains similar backups of other services and machines. For this reason, my current solution does not do incremental backups, even though this may be relevant for other use cases.
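The per-library interval requirement above can be sketched as a simple scheduling check: each library carries its own re-sync interval, and a run only touches libraries whose interval has elapsed. The function and data layout below are illustrative only, not seafile-mirror's actual configuration format:

```python
import time

# Sketch of the per-library interval check: a run only syncs libraries
# whose own re-sync interval has elapsed since their last sync.

def due_libraries(libraries, last_synced, now=None):
    """Return the names of libraries whose re-sync interval has elapsed.

    libraries: {name: interval_in_seconds}
    last_synced: {name: unix_timestamp_of_last_sync}
    """
    now = time.time() if now is None else now
    return [name for name, interval in libraries.items()
            if now - last_synced.get(name, 0) >= interval]

# pictures change rarely but are huge, so sync weekly; documents daily
libs = {"pictures": 7 * 86400, "documents": 86400}
last = {"pictures": 1_000_000, "documents": 1_000_000}
# one day and a minute later, only "documents" is due again
due = due_libraries(libs, last, now=1_000_000 + 86400 + 60)
```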

The problems

Actually, seafile-cli should have been everything you’d need to fulfill the requirements. But no. It turned out that this tool has a number of fundamental issues:

  • You can make the host the tool is running on a sync peer. However, this easily leads to sync errors if the user only has read-only permissions to the library.
  • You can also download a library, but this again may lead to strange sync errors.
  • It requires a running daemon which crashes irregularly during larger sync tasks or has other issues.
  • Download/sync intervals cannot be set manually.

The solution

seafile-mirror takes care of all these stumbling blocks:

  • It downloads/syncs defined libraries at customisable intervals
  • It de-syncs libraries immediately after they have been downloaded to avoid sync errors
  • You can force a re-sync of a library even if its re-sync interval hasn’t been reached yet
  • Extensive informative and error logging is provided
  • It was created with automation in mind, so you can run it from cron jobs or systemd timers
  • And, as explained, it deals with the numerous caveats of seaf-cli and Seafile in general

Full installation and usage documentation can be found in the project repository. Installation is as simple as running pip3 install seafile-mirror, and a sample configuration is provided.

In my setup, I run this application on a headless server with systemd under a separate user account. Therefore the systemd service needs to be set up first. This is also covered in the tool’s documentation. And as an Ansible power user, I also provide an Ansible role that does all the setup and configuration.

Possible next steps

The tool has been running every day for a couple of months now without any issues. However, I can imagine a few more features that would be helpful for more people:

  • Support for login tokens: currently, only user/password auth is supported, which is fine for my use case as it’s just a read-only user. This wouldn’t be hard to fix either; seafile-cli supports it (at least in theory). (#2)
  • Support for encrypted libraries: shouldn’t be a big issue; it would require passing the password to the underlying seafile-cli command. (#3)

If you have encountered problems or would like to point out the need for specific features, please feel free to contact me or comment on the Mastodon post. I’d also love to hear if you’ve become a happy user of the tool 😊.

Wednesday, 13 September 2023

Importance of more inclusive illustrations

Recently I received an e-mail with pictures which touched me, and which showed me how important it is to think about diversity when creating illustrations.

16:9 version of the Arabic book cover

The photos were taken in a school at a hospital run by an international medical organisation that operates in the Middle East and showed children reading the Arabic translation of Ada & Zangemann - A Tale of Software, Skateboards, and Raspberry Ice Cream.

The hospital does surgery for "war victims, mostly people who have lost a limb (often because of a landmine) or suffered burns (usually because of bombings)." The pictures showed children from surrounding countries (mostly Yemen, Syria and Iraq) who, because of their condition, usually have to stay at the hospital, away from their home country, for several months, often years.

"So while I can't guarantee that thousands of kids will read those copies of the book, I can promise that they do make a huge difference for the kids who do. Most of them have a 3d printed arm or leg, or a compression mask to help with burn healing. I suspect that the concept of being able to tinker with software and tools around them will ring a bell (the prosthetics you see in the video above are all 3D printed on site by [the organisation])." (The quotes are from the e-mail I received.)

For the book, Wiebke (editor), Sandra (illustrator) and I spent significant time discussing the inclusiveness of the characters. Sandra's experience with inclusiveness was one of the reasons why I approached her to see if she would like to work with us on the book: considering inclusiveness without distracting the reader from the main story. Receiving this e-mail and looking at the pictures showed me again that every minute we spent thinking about inclusiveness was worth it.

Page from Arabic book with children at a workshop

A lot of people will not notice it when they read the book and look at the illustrations, but on a closer look you will see that one of the characters in the book uses a 3D-printed leg. For readers with physical impairments, this tiny detail can make a huge difference.

Sunday, 03 September 2023

FSFE information stall on Veganmania Donauinsel 2023

FSFE information stall in August 2023 in Vienna on Donauinsel
Detail of FSFE information stall in August 2023 in Vienna on Donauinsel
Detail of FSFE information stall in August 2023 in Vienna on Donauinsel

At the second Veganmania street festival this year, taking place on the Danube Island in Vienna from 25 to 27 August, we finally managed to borrow a sturdy tent. We could get it for free from a local animal rights organisation. It was great for withstanding the high temperatures during the event because it provided urgently needed shade. The only downside was that the name of the well-known organisation was printed on the tent. This caused many people to mistake our information stall for one of that organisation’s, even though none of our banners, posters or information indicated any relation to it, at least at first glance. Of course this didn’t stop us from clarifying the confusion and pointing out the most important subject on our desk: independence on personal electronic devices.

As usual, many people used this opportunity to learn more about free software and the advantages it brings. We also had many encounters with people who already use free software and were as happy as they were surprised to find us at this event. Of course we could easily explain why we feel that free software is a perfect addition to a vegan lifestyle. After all, most people go vegan because they don’t want to harm others, and if you apply the same thought to the world of software, you end up with free software.

Again I need to order more information material for the next instalment of our information desk, on 8 and 9 October this year at the third Veganmania summer festival in Vienna, in front of the city hall. Usually there are only two Veganmanias each year, but since 2023 marks the 25th anniversary of the event, a third one will take place at this prestigious and hard-to-get location.

We noticed an interesting recurring phenomenon concerning the different ways male and female visitors approach our information desk. Of course this is just a tendency and there are exceptions, but in general most men approach our desk because they already know about free software and want to check out what material we offer, while most female visitors aren’t familiar with free software yet but are willing to find out what it is about.

Many people were especially interested in ways to improve their privacy and independence on their mobile phones. Unfortunately, many of them used iOS devices, and we couldn’t offer them any solutions on this totally locked-down platform. Android is far from ideal, but at least it gives most users the opportunity to go for more privacy-focused solutions. Even if they don’t want to forego all proprietary software, they can at least use F-Droid as an app store to add free software apps to the mix. And it is always good to know that you can actually upcycle your mobile after the original OS stops providing security updates by installing a free alternative Android system like LineageOS.

The brochure for decision makers in the public sector, on the advantages free software brings to the table in this area, is still in higher demand than I anticipated. I really need to order more of those.

A large selection of different stickers seems to attract many people. I need to replace some of my posters, which have become rather worn out over the years. And I am still not certain whether I should invest in my own tent, because one that can withstand wind, rain and many years of service isn’t cheap. But using a tent with another organisation’s information printed on it hasn’t proven ideal, given the confusion it creates.

I am also considering joining the annual Volksstimmefest with our FSFE information stall, but I am not convinced how good an idea this is, because the event seems to be more focused on concerts and has a clear tendency to be a left-wing political event. Since I don’t consider free software to be a predominantly left-wing subject, I am somewhat reluctant to position it so clearly in this spectrum.

Manning the desk for three days was somewhat exhausting, since my usual helper couldn’t be there due to a clash of appointments. Nevertheless, I consider the information desk at Veganmania 2023 on the Danube Island another successful event where I was able to inform many people about ways to improve their independence in the digital realm by employing free software.

Tuesday, 25 July 2023

PGPainless meets the Web-of-Trust

We are very proud to announce the release of PGPainless-WOT, an implementation of the OpenPGP Web of Trust specification using PGPainless.

The release is available on the Maven Central repository.

The work on this project began a bit over a year ago as an NLnet project which received funding through the European Commission’s NGI Assure program. Unfortunately, somewhere along the way I lost the motivation to work on the project, as I failed to see any concrete users. Other projects seemed more exciting at the time.

NLnet Logo
NGI Assure Logo

Fast forward to the end of May, when Wiktor reached out and connected me with Heiko, who was interested in the project. The two of us decided to work together on the project, and I quickly rebased my (at this point ancient and outdated) feature branch onto the latest PGPainless release. At the end of June we started the joint work, and roughly a month later, today, we can release a first version 🙂

Big thanks to Heiko for his valuable contributions and the great boost in motivation working together gave me 🙂
Also big thanks to NLnet for sponsoring this project in such a flexible way.
Lastly, thanks to Wiktor for his talent to connect people 😀

The Implementation

We decided to write the implementation in Kotlin. I had attempted to learn Kotlin multiple times before, but had quickly given up each time without an actual project to work on. This time I stayed persistent, and now I’m a convinced Kotlin fan 😀 Rewriting the existing codebase was a breeze, the line count dropped drastically, and the amount of syntactic sugar that was suddenly available blew me away! Now I’m considering steadily porting PGPainless to Kotlin. But back to the Web of Trust.

Our implementation is split into 4 modules:

  • pgpainless-wot parses OpenPGP certificates into a generalized form and builds a flow network by verifying third-party signatures. It also provides a plugin for pgpainless-core.
  • wot-dijkstra implements a query algorithm that finds paths on a network. This module has no OpenPGP dependencies whatsoever, so it could also be used for other protocols with similar requirements.
  • pgpainless-wot-cli provides a CLI frontend for pgpainless-wot
  • wot-test-suite contains test vectors from Sequoia PGP’s WoT implementation
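The kind of path query that wot-dijkstra answers can be illustrated with a plain Dijkstra search over a toy trust network. This is only a conceptual sketch: the real Web-of-Trust algorithm additionally tracks trust amounts, depths and user ID constraints, and the graph and API below are invented for illustration:

```python
import heapq

# Plain Dijkstra over a toy trust network: nodes stand for certificates,
# edge weights for the "cost" of following a certification. Only the
# shape of the query is real; the actual WoT algorithm is richer.

def shortest_path(edges, start, goal):
    graph = {}
    for a, b, w in edges:
        graph.setdefault(a, []).append((b, w))
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None  # no path from start to goal

# root certified alice and bob; alice also certified bob more cheaply
edges = [("root", "alice", 1), ("alice", "bob", 1), ("root", "bob", 3)]
```

Because the module has no OpenPGP dependencies, exactly this kind of generic graph query is all it needs to expose.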

The code in pgpainless-wot can either be used standalone via a neat little API, or it can be used as a plugin for pgpainless-core to enhance the encryption / verification API:

/* Standalone */
Network network = PGPNetworkParser(store).buildNetwork();
WebOfTrustAPI api = new WebOfTrustAPI(network, trustRoots, false, false, 120, refTime);

// Authenticate a binding
api.authenticate(fingerprint, userId, isEmail).isAcceptable();

// Identify users of a certificate via the fingerprint
// (identify call elided in this excerpt; results contain user IDs like "Alice <>")

// Lookup certificates of users via userId
LookupAPI.Result result = api.lookup(
    "Alice <>", isEmail);

// Identify all authentic bindings (all trustworthy certificates)
ListAPI.Result result = api.list();

/* Or enhancing the PGPainless API */
CertificateAuthorityImpl wot = CertificateAuthorityImpl
    .webOfTrustFromCertificateStore(store, trustRoots, refTime)

// Encryption
EncryptionStream encStream = PGPainless.encryptAndOrSign()
    // Add only recipients we can authenticate
    .addAuthenticatableRecipients(userId, isEmail, wot)

// Verification
DecryptionStream decStream = [...]
[...]  // finish decryption
MessageMetadata metadata = decStream.getMetadata();
assertTrue(metadata.isAuthenticatablySignedBy(userId, isEmail, wot));

The CLI application pgpainless-wot-cli mimics Sequoia PGP’s neat sq-wot tool, both in argument signature and in output format. This was done in an attempt to enable testing both applications with the same test suite.

pgpainless-wot-cli can read GnuPG’s keyring, fetch certificates from the Shared OpenPGP Certificate Directory (using pgpainless-cert-d, of course :P), and ingest arbitrary .pgp keyring files.

$ ./pgpainless-wot-cli help     
Usage: pgpainless-wot [--certification-network] [--gossip] [--gpg-ownertrust]
                      [--time=TIMESTAMP] [--known-notation=NOTATION NAME]...
                      [-r=FINGERPRINT]... [-a=AMOUNT | --partial | --full |
                      --double] (-k=FILE [-k=FILE]... | --cert-d[=PATH] |
                      --gpg) [COMMAND]
  -a, --trust-amount=AMOUNT
                         The required amount of trust.
      --cert-d[=PATH]    Specify a pgp-cert-d base directory. Leave empty to
                           fallback to the default pgp-cert-d location.
      --certification-network
                         Treat the web of trust as a certification network
                           instead of an authentication network.
      --double           Equivalent to -a 240.
      --full             Equivalent to -a 120.
      --gossip           Find arbitrary paths by treating all certificates as
                           trust-roots with zero trust.
      --gpg              Read trust roots and keyring from GnuPG.
      --gpg-ownertrust   Read trust-roots from GnuPGs ownertrust.
  -k, --keyring=FILE     Specify a keyring file.
      --known-notation=NOTATION NAME
                         Add a notation to the list of known notations.
      --partial          Equivalent to -a 40.
  -r, --trust-root=FINGERPRINT
                         One or more certificates to use as trust-roots.
      --time=TIMESTAMP   Reference time.
  authenticate  Authenticate the binding between a certificate and user ID.
  identify      Identify a certificate via its fingerprint by determining the
                  authenticity of its user IDs.
  list          Find all bindings that can be authenticated for all
                  certificates.
  lookup        Lookup authentic certificates by finding bindings for a given
                  user ID.
  path          Verify and lint a path.
  help          Displays help information about the specified command

The README file of the pgpainless-wot-cli module contains instructions on how to build the executable.

Future Improvements

The current implementation still has potential for improvements and optimizations. For one, the Network object containing the result of many costly signature verifications is currently ephemeral and cannot be cached. In the future it would be desirable to make the network parsing code agnostic of reference time, including all verifiable signatures as edges of the network, even those that are not yet valid or are no longer valid. This would allow us to implement caching logic that writes the network out to disk, ready for future web of trust operations.

That way, the network would only need to be re-created whenever the underlying certificate store is updated with new or changed certificates (which could also be optimized to only update the relevant parts of the network). The query algorithm would then need to filter out any inactive edges with each query, depending on the query's reference time. This would be far more efficient than re-creating the network on each application start.

But why the Web of Trust?

End-to-end encryption suffers from one major challenge: when sending a message to another user, how do you know that you are using the correct key? How can you prevent an active attacker from handing you fake recipient keys and impersonating your peer? Such a scenario is called a Machine-in-the-Middle (MitM) attack.

On the web, the most common countermeasure against MitM attacks is the use of certificate authorities, which certify the TLS certificates of website owners after requiring them to prove their identity to some extent. Let’s Encrypt, for example, first verifies that you control the machine serving a domain before issuing a certificate for it. Browsers trust Let’s Encrypt, so users can authenticate your website by validating the certificate chain from the Let’s Encrypt CA key down to your website’s certificate.

The Web-of-Trust follows a similar model, with the difference that you are your own trust-root and decide which CAs you want to trust (which in some sense makes you your own “meta-CA”). The Web-of-Trust is therefore far more decentralized than the fixed set of TLS trust-roots baked into web browsers. You can use your own key to issue trust signatures on the keys of contacts that you know are authentic. For example, you might have met Bob in person and he handed you a business card containing his key’s fingerprint. Or you helped a friend set up their encrypted communications, and in the process you two exchanged fingerprints manually.

In all these cases, you needed to exchange the fingerprint via an out-of-band channel in order to initiate a secure communication channel. The real magic happens once you take into consideration that your close contacts can do the same for their close contacts, which makes them CAs too. This way, you can authenticate Charlie via your friend Bob, of whom you know that he is trustworthy, because – come on, it’s Bob! Everybody loves Bob!
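
This kind of trust delegation can be sketched in a few lines of Python. This is a toy illustration with made-up data, not the pgpainless-wot API: certifications are edges carrying a trust amount, a path's effective trust is the minimum amount along its edges, and a binding is authenticated once a path from your trust-root reaches the required amount (40 = partial, 120 = full, matching the CLI's -a flag above). Real WoT calculations also sum trust over multiple independent paths, which this sketch omits.

```python
# Toy Web-of-Trust path authentication (hypothetical data, simplified).

FULL = 120  # amount required for full authentication (cf. --full)

# certifications[issuer] = {target: trust amount}
certifications = {
    "You": {"Bob": 120},     # you fully delegated trust to Bob
    "Bob": {"Charlie": 60},  # Bob partially certified Charlie
}

def authenticate(root, target, required=FULL):
    """Depth-first search for a certification path from root to target.
    A path's effective trust is the minimum amount along its edges."""
    def dfs(node, amount, seen):
        if node == target:
            return amount
        best = 0
        for nxt, a in certifications.get(node, {}).items():
            if nxt not in seen:
                best = max(best, dfs(nxt, min(amount, a), seen | {nxt}))
        return best
    return dfs(root, float("inf"), {root}) >= required

authenticate("You", "Bob")          # True: directly certified with 120
authenticate("You", "Charlie")      # False: the path only carries 60
authenticate("You", "Charlie", 40)  # True: meets the "partial" threshold
```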

An example OpenPGP Web-of-Trust Network diagram.
An example of an OpenPGP Web-of-Trust. Simply by delegating trust to the Neutron Mail CA and to Vincenzo, Aaron is able to authenticate a number of certificates.

The Web-of-Trust becomes really useful when you work with people who share the same goal. That might be your workplace, your favorite Linux distribution’s maintainer team, or the non-profit organization or activist collective that is fighting for a better tomorrow. At work, for example, your employer’s IT department might use a local CA (such as an instance of the OpenPGP CA) to help employees communicate safely. You trust your workplace’s CA, which then introduces you safely to your colleagues’ authentic key material. It even works across business boundaries, e.g. if your workplace has a cooperation with ACME and you need to establish a safe communication channel to an ACME employee. In this scenario, your company’s CA might delegate to the ACME CA, allowing you to authenticate ACME employees.

As you can see, the Web-of-Trust becomes more useful the more people are using it. Providing accessible tooling is therefore essential to improve the overall ecosystem. In the future, I hope that OpenPGP clients such as MUAs (e.g. Thunderbird) will embrace the Web-of-Trust.

Monday, 17 July 2023

KDE Gear 23.08 branches created

Make sure you commit anything you want to end up in the KDE Gear 23.08 releases to those branches.

Dependency freeze is next July 20

The Feature Freeze and Beta is Thursday, 27 July.

More interesting dates  
  August 10: 23.08 RC (23.07.90) Tagging and Release
  August 17: 23.08 Tagging
  August 24: 23.08 Release

Wednesday, 12 July 2023

A Truly Free AI

Understanding what makes software Free (as in freedom) has been an ongoing effort since the beginning of the Free Software movement in the 1980s (at least). It led to the Free Software licenses, which help users control the technology they use. However, considering the peculiarities of Artificial Intelligence (AI) software, one may wonder whether those licenses account for them.

Free Software licenses were designed so that users control technology and can collaborate easily. Software released under a Free Software license guarantees that users can use, study, share and improve it however they want, with anybody they want. Once one has access to the source code and the accompanying license(s), one can run the software. Indeed, most software runs on commodity hardware. However, this is not true for AI and deep learning, the branch of AI powering most of the recent successful AI technologies.

In Artificial Intelligence, deep learning is a part of machine learning and is usually composed of five elements: data, a model and its parameters, the definition of a problem (in the form of a loss function) which ties the data and the model together, a training phase and an inference phase. The goal of the learning phase (training) is to modify the model’s parameters so that the model gets incrementally better at solving the problem, i.e. at minimizing the loss function. Once the loss stops decreasing, the model cannot learn further and the parameters stop changing. Using those parameters, one can make predictions on data not used during the learning phase: this is the inference phase. In deep learning, those parameters are the weights of interconnected neurons which form an artificial neural network.
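
The five elements fit into a short, self-contained sketch. This is a hypothetical toy problem in plain Python (a one-parameter linear model trained by gradient descent), not any particular deep learning framework:

```python
# Data: (x, y) pairs; the "problem" is to recover the relation y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0  # the model's single parameter, arbitrarily initialized

def model(x, w):  # the model: a linear map
    return w * x

def loss(w):  # mean squared error: ties the data and the model together
    return sum((model(x, w) - y) ** 2 for x, y in data) / len(data)

# Training phase: gradient descent nudges w until the loss stops decreasing.
lr = 0.01
for _ in range(500):
    grad = sum(2 * (model(x, w) - y) * x for x, y in data) / len(data)
    w -= lr * grad

# Inference phase: predict on data not seen during training.
prediction = model(4.0, w)  # close to 8.0
```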

But here is the problem: the number of parameters used for deep learning is enormous and keeps increasing. Likewise, the amount of data is getting enormous, to the point where using deep learning on commodity hardware is no longer possible. This raises the question of what would make an AI truly Free: what is the point of an AI published as Free Software if most users cannot exercise the four freedoms granted by the existing Free Software definition and licenses? Even though one might access the data and the code used for training, one would not be able to train the AI, improve it and share the results. Those who can afford to train the AI (i.e. modify the weights of deep learning models) are in a very powerful position compared to those who cannot. An AI being Free Software therefore does not necessarily guarantee that users stay in control of the technology. What would be required to make an AI Free Software in the sense that it allows users to control it?

A truly Free AI would need to be easy for its users to train. This requires the trained model’s parameters to be easily accessible, so that they can be used as a starting point for training rather than adjusting the parameters from scratch (usually from randomly initialized parameters). Deep learning weights should thus be Free Software. The number of parameters and the amount of data required to improve the AI software would also need to be manageable. If the data and its precise description cannot be shared, the use of Open Standards would facilitate the creation of alternative datasets.
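
A toy illustration (with made-up numbers) of why access to trained parameters matters: continuing from a published, already-trained parameter reaches the optimum in far fewer gradient steps than starting from an arbitrary initialization.

```python
def steps_to_converge(w, target=2.0, lr=0.1, tol=1e-3):
    """Count gradient steps on the toy loss (w - target)**2 / 2
    until w is within tol of the optimum."""
    steps = 0
    while abs(w - target) > tol:
        w -= lr * (w - target)  # gradient step
        steps += 1
    return steps

from_scratch = steps_to_converge(w=10.0)  # arbitrary initialization
fine_tuned   = steps_to_converge(w=2.01)  # published, pre-trained weight
# fine_tuned needs far fewer steps than from_scratch
```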

AI is not going away. Since the rise of deep learning in the last decade, triggered by the availability of more data, improved methods for stabilizing and speeding up the training of deep neural networks and improved hardware, the use of AI is becoming more and more mainstream. And now that we start to understand how powerful the AI genie is, it cannot be put back in the bottle. This raises the question of how to stay in control of technology in a world where AI is bound to become more powerful and ubiquitous. Free Software is a key part of the answer.
