Free Software, Free Society!
Thoughts of the FSFE Community (English)

Tuesday, 19 July 2022

Creating a Web-of-Trust Implementation: Certify Keys with PGPainless

Currently I am working on a Web-of-Trust implementation for the OpenPGP library PGPainless. This work will be funded by the awesome NLnet foundation through NGI Assure. Check them out! NGI Assure is made possible with financial support from the European Commission’s Next Generation Internet programme.

NLnet
NGI Assure

Technically, the WoT consists of a graph where the nodes are OpenPGP keys (certificates) with User-IDs and the edges are signatures. I recommend watching this talk by Justus Winter (22:51) to get an overview of the benefits of the WoT. In order to be able to create a WoT, users need to be able to sign other users' certificates to create those edges.

Therefore, support for signing other certificates and User-IDs was added to PGPainless as a milestone of the project. Since release 1.3.2, users have access to a straightforward API to create signatures over other users' certificates. Let’s explore the new API together!

There are two main categories of signatures which are important for the WoT:

  • Certifications are signatures over User-IDs on certificates. A certification is a statement “I believe that the person holding this certificate goes by the name ‘Alice <alice@pgpainless.org>’”.
  • Delegations are signatures over certificates which can be used to delegate trust decisions to the holder of the signed certificate.

This is an example for how a user can certify a User-ID:

PGPSecretKeyRing aliceKey = ...;
PGPPublicKeyRing bobsCertificate = ...;

CertificationResult result = PGPainless.certify()
        .userIdOnCertificate("Bob Babbage <bob@pgpainless.org>",
                bobsCertificate)
        .withKey(aliceKey, protector)
        .build();

PGPSignature certification = result.getCertification();
// or...
PGPPublicKeyRing certifiedCertificate = result.getCertifiedCertificate();

It is possible to choose between different certification types, depending on the “quality” of the certification. By default, the certification is created as GENERIC_CERTIFICATION, but other levels can be chosen as well.

Furthermore, it is possible to modify the signature with additional signature subpackets, e.g. custom annotations.

To create a delegation, e.g. to delegate trust to an OpenPGP CA, the user can do the following:

PGPSecretKeyRing aliceKey = ...;
PGPPublicKeyRing caCertificate = ...;

CertificationResult result = PGPainless.certify()
        .certificate(caCertificate,
                Trustworthiness.fullyTrusted().introducer())
        .withKey(aliceKey, protector)
        .buildWithSubpackets(new CertificationSubpackets.Callback() {
            @Override
            public void modifyHashedSubpackets(
                    CertificationSubpackets hashedSubpackets) {
                hashedSubpackets.setRegularExpression(
                        "<[^>]+[@.]pgpainless\.org>$");
            }
        });

PGPSignature delegation = result.getCertification();
// or...
PGPPublicKeyRing delegatedCertificate = result.getCertifiedCertificate();

Here, Alice decided to delegate to the CA as a fully trusted introducer, meaning Alice will trust certificates that were certified by the CA.

Depending on the use-case, it is advisable to scope the delegation, e.g. to a specific domain by adding a Regex packet to the signature, as seen above. As a result, Alice will only trust those certificates introduced by the CA, which have a user-id matching the regex. This is optional however.
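
To illustrate how such a scoping regex behaves, here is a small, purely illustrative check (written in Python rather than Java, and not part of PGPainless itself) of which User-IDs the expression used above would accept:

import re

# The scoping expression from the delegation example above.
pattern = re.compile(r"<[^>]+[@.]pgpainless\.org>$")

user_ids = [
    "Bob Babbage <bob@pgpainless.org>",    # matches: address at pgpainless.org
    "CA <ca@mail.pgpainless.org>",         # matches: subdomain of pgpainless.org
    "Mallory <mallory@example.org>",       # does not match: different domain
]

for uid in user_ids:
    print(uid, "->", bool(pattern.search(uid)))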

Check out PGPainless and don’t forget to check out the awesome NLnet foundation!

Thursday, 14 July 2022

Nonbinary Grammatical Gender and Nonboolean Logic

For many years I have been a hobby linguist and have also liked doing math. When learning French and Spanish a long time ago, I discovered that grammatical gender is binary in these languages. Nouns are classified as female or male; a third, neuter gender, as it exists in German, does not exist. Adjectives and articles are gendered too. In Spanish and French the World is male (el mundo/le monde), while in German we say die Welt. German also has neuter, as in Das U-Boot (a well-known boot loader). Old English had grammatical gender too, but in many cases this has been dropped. Other languages such as Finnish and Esperanto do not have a grammatical gender, or more precisely it is unary in these languages: only one form exists. In Finnish the Moon is called kuu and in Esperanto she is called Luno. Luno is derived from the Latin Luna; Luna is the divine embodiment of the Moon. In many languages including Spanish and Russian, Luna/луна is female. Not so in German, where we say der Mond. In Esperanto Luno sounds male, but remember there is no gender in that language; the o at the end just indicates that Luno is a noun.

When I studied computer science I heard of “Aussagenlogik” (propositional logic), which has two truth values. Those are True (Die Wahrheit) and False (Der Widerspruch), often represented as bits (binary digits). At that time I had never heard the term Nonbinary, but I had heard of Nonboolean Fuzzy Logic and Quantum Computing. In my head I added a third truth value, Unknown (Das Unbekannte), which uses the third, neuter gender. When one operand of a binary operator is unknown, the whole result becomes unknown. With Quantum Computing we do not have bits but qubits, which are superpositions of one and zero. My gender feels the same, it is a superposition of both male and female, so I prefer to call myself genderqueer.
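
As a small aside, the rule described above is easy to write down in code. The following sketch (plain Python, with made-up helper names) treats None as the third truth value Unknown and lets it propagate through binary operators:

# True, False and None (Unknown): if either operand is unknown,
# the result of a binary operator is unknown as well.
def and3(a, b):
    if a is None or b is None:
        return None
    return a and b

def or3(a, b):
    if a is None or b is None:
        return None
    return a or b

print(and3(True, None))   # None: the result becomes unknown
print(or3(True, False))   # True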

Tuesday, 12 July 2022

KDE Gear 22.08 branches created

Make sure you commit anything you want to end up in the KDE Gear 22.08 releases to them.

We're already past the dependency freeze.

The Feature Freeze and Beta is this Thursday, 14 July.

More interesting dates
  August 4, 2022: 22.08 RC (22.07.90) Tagging and Release
  August 11, 2022: 22.08 Tagging
  August 18, 2022: 22.08 Release

https://community.kde.org/Schedules/KDE_Gear_22.08_Schedule
 

Sunday, 03 July 2022

The deep learning obesity crisis

Deep learning has made dramatic improvements over the last decades. Part of this is attributed to improved methods that allow training wider and deeper neural networks; it can also be attributed to better hardware, as well as to the development of techniques to use this hardware efficiently. All of this leads to neural networks that grow exponentially in size. But is continuing down this path the best avenue for success?

Deep learning models have gotten bigger and bigger. The figure below shows the accuracy of convolutional neural networks (left) and the size and number of parameters used for the ImageNet competition (right). While the accuracy is increasing and reaching impressive levels, the models get bigger and use more and more resources. In Schwartz et al., 2020, it is stated that, as a result of rewarding accuracy over efficiency, the amount of compute has increased 300k-fold in 6 years, which implies environmental costs as well as a higher barrier to entry in the field.

Size of deep learning models

Deep learning models get better over time but also increase in size (Canziani et al., 2016).

There may be a correlation between the test loss, the number of model parameters, the amount of compute and the dataset size. The loss gets smaller as the network gets bigger, more data is processed and more compute capacity is added, which suggests a power law is at work and that the predictions from deep learning models can only become more accurate. Does that mean neural networks are bound to get bigger? Is there an upper limit above which the rate of accuracy improvement slows down? In that case, changing the paradigm or finding how to get the most out of each parameter would be warranted, so that the accuracy may keep increasing without always throwing more neurons and data at the problem.

Changing the paradigm would require changing perspective and going past deep learning, which would be, given its tremendous success, a very risky strategy that would almost certainly hamper progress in the short term. As a workaround (which does not address the underlying problem), it may be wise to reduce the models' size during training. Three strategies may be employed to that end: dropout, pruning and quantization.

Dropout

Dropout tries to make sure neurons are diverse enough inside the network, thereby maximizing the usefulness of each of them. To do that, a dropout layer is added between linear connections; it randomly deactivates neurons during each forward pass through the neural network. This is done only during training (i.e. not during inference). By randomly deactivating neurons during training, one forces the network to learn with an ever-changing structure, thereby incentivizing all neurons to take part in the training. The code below shows how to use dropout in a PyTorch model definition:

class NeuralNetwork(torch.nn.Module):
    def __init__(self, inpsiz, hidensiz, numclases):
        super(NeuralNetwork, self).__init__()
        self.inputsiz = inpsiz
        self.dropout = torch.nn.Dropout(p=0.5)
        self.l1 = torch.nn.Linear(inpsiz, hidensiz)
        self.relu = torch.nn.ReLU()
        self.l2 = torch.nn.Linear(hidensiz, numclases)

    def forward(self, y):
        outp = self.l1(y)
        outp = self.relu(outp)
        outp = self.dropout(outp)
        outp = self.l2(outp)

        return outp

A dropout layer that randomly deactivates half of the layer's neurons is defined in the constructor and applied after the ReLU activation in the forward pass.
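
As a short usage note (not from the original model code): the dropout layer is only active while the model is in training mode, so it has to be switched off explicitly for inference:

import torch

model = NeuralNetwork(784, 100, 10)   # the class defined above
sample = torch.rand(1, 784)

model.train()      # dropout randomly deactivates neurons on each forward pass
out_train = model(sample)

model.eval()       # dropout becomes a no-op during inference
out_eval = model(sample)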

Pruning

Pruning refers to dropping connections between neurons, therefore making the model slimmer. Pruning a neural network raises the question of which parts of it should be pruned. This can be done by considering the magnitude of the neurons' weights (because small weights may not contribute much to the overall result) or their relative importance towards the model’s output as a whole.

In PyTorch, pruning based on the weights' magnitude may be done with the ln_structured function:

import torch
import torch.nn.utils.prune


# An example model
class NeuralNetwork(torch.nn.Module):
    def __init__(self, inpsiz, hidensiz, numclases):
        super(NeuralNetwork, self).__init__()
        self.inputsiz = inpsiz
        self.l1 = torch.nn.Linear(inpsiz, hidensiz)
        self.relu = torch.nn.ReLU()
        self.l2 = torch.nn.Linear(hidensiz, numclases)

    def forward(self, y):
        outp = self.l1(y)
        outp = self.relu(outp)
        outp = self.l2(outp)

        return outp


model = NeuralNetwork(784, 100, 10)

torch.nn.utils.prune.ln_structured(model.l1, name="weight", amount=0.5, n=2, dim=0)

The call to ln_structured is responsible for the pruning: the first layer of the model is half-pruned according to the L2 norm of its weights.
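
One detail worth knowing (not covered in the snippet above): ln_structured does not delete weights outright, it reparametrizes the layer with a weight_orig tensor and a weight_mask buffer that are combined on each forward pass. If the pruning should become permanent, it can be folded back into the weight tensor:

import torch.nn.utils.prune as prune

# After ln_structured, the pruned layer carries a mask buffer.
print([name for name, _ in model.l1.named_buffers()])    # includes 'weight_mask'

# Fold the mask into l1.weight and drop the reparametrization.
prune.remove(model.l1, "weight")
print((model.l1.weight == 0).float().mean())              # fraction of zeroed weights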

Quantization

Instead of dropping neurons, one may reduce their precision (i.e. the number of bytes used to store their weights) and thus the computing power needed to make use of them. This is called quantization. There exist three ways to quantize a model.

Dynamic quantization

Quantization may be done directly after the model is instantiated. In that case, the way to quantize is chosen at runtime and is done immediately.

model = NeuralNetwork(784, 100, 10)

model_int8 = torch.quantization.quantize_dynamic(
    model, {"l1": torch.quantization.default_dynamic_qconfig}
)

Adjusted quantization

Quantization can be calibrated (i.e. the right algorithm to convert floating point numbers to less precise ones can be chosen) by using the data that is supposed to go through the model. This is done with a test dataset once the model has been trained:

class NeuralNetworkQuant(torch.nn.Module):
    def __init__(self, inpsiz, hidensiz, numclases):
        super(NeuralNetworkQuant, self).__init__()
        self.quant = torch.quantization.QuantStub()
        self.inputsiz = inpsiz
        self.l1 = torch.nn.Linear(inpsiz, hidensiz)
        self.relu = torch.nn.ReLU()
        self.l2 = torch.nn.Linear(hidensiz, numclases)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, y):
        outp = self.quant(y)       # quantize the input
        outp = self.l1(outp)
        outp = self.relu(outp)
        outp = self.l2(outp)
        outp = self.dequant(outp)  # de-quantize the output

        return outp

model = NeuralNetworkQuant(784, 100, 10)
train_model()
# The default config quantizes to int8
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
model_fp32_prepared = torch.quantization.prepare(model)

testldr = torch.utils.data.DataLoader(dataset=testds, batch_size=1024, shuffle=True)
for idx, (imgs, lbls) in enumerate(testldr):
    imgs = imgs.reshape(-1, 28 * 28)
    model_fp32_prepared(imgs)

model_int8 = torch.quantization.convert(model_fp32_prepared)

The QuantStub and DeQuantStub added to the model class, and used at the beginning and end of the forward pass, describe how the model's inputs are quantized and its outputs de-quantized. The call to torch.quantization.prepare prepares the model for quantization according to the qconfig set just before it. A test dataset is then created and run through the prepared model so that the quantization process can be calibrated. Once the calibration is done, torch.quantization.convert turns the model into an int8 model.

Quantization-Aware Training

Quantization-Aware Training (QAT) refers to optimizing the quantization strategy during the training of the model, which allows the model to optimize its weights while being aware of the quantization:

model = NeuralNetworkQuant(784, 100, 10)
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
model_fp32_prepared = torch.quantization.prepare_qat(model)

train_model()

model_int8 = torch.quantization.convert(model_fp32_prepared)

This looks similar to the previous example, except that the training loop is done on the model_fp32_prepared model.
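
To verify that quantization actually pays off, one possibility (a small sketch, not part of the original post) is to serialise both models and compare their sizes; the int8 version should be noticeably smaller:

import io

import torch

def model_size_bytes(m):
    # Serialise the model's parameters to an in-memory buffer and measure it.
    buffer = io.BytesIO()
    torch.save(m.state_dict(), buffer)
    return buffer.getbuffer().nbytes

print("prepared float32 model:", model_size_bytes(model_fp32_prepared), "bytes")
print("converted int8 model:  ", model_size_bytes(model_int8), "bytes")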

Can the trend towards bigger deep learning models be reversed? While research (e.g. Han et al., 2015; Howard et al., 2017) is pushing towards that goal, efficiency needs to be a priority.

Thursday, 30 June 2022

Gradual Explorations of Filesystems, Paging and L4Re

A surprising three years have passed since my last article about my efforts to make a general-purpose filesystem accessible to programs running in the L4 (or L4Re) Runtime Environment. Some of that delay was due to a lack of enthusiasm about blogging for various reasons, much more was due to having much of my time occupied by full-time employment involving other technologies (Python and Django mostly, since you ask) that limited the amount of time and energy that could be spent focusing on finding my way around the intricacies of L4Re.

In fact, various other things I looked into in 2019 (or maybe 2018) also went somewhat unreported. I looked into trying to port the “user mode” (UX) variant of the Fiasco.OC microkernel to the MIPS architecture used by the MIPS Creator CI20. This would have allowed me to conveniently develop and test L4Re programs in the GNU/Linux environment on that hardware. I did gain some familiarity with the internals of that software, together with the Linux ptrace mechanism, making some progress but not actually getting to a usable conclusion. Recommendations to use QEMU instead led me to investigate the situation with KVM on MIPS, simply to try and get half-way reasonable performance: emulation is otherwise rather slow.

You wouldn’t think that running KVM on anything other than Intel/AMD or ARM architectures were possible if you only read the summary on the KVM project page or the Debian Wiki’s KVM page. In fact, KVM is supported on multiple architectures including MIPS, but the latest (and by now very old 3.18) “official” kernel for the CI20 turned out to be too old to support what I needed. Or at least, I tried to get it to work but even with all the necessary configuration to support “trap and emulate” on a CPU without virtualisation support, it seemed to encounter instructions it did not emulate. As the hot summer of 2019 (just like 2018) wound down, I switched back to using my main machine at the time: an ancient Pentium 4 system that I didn’t want heating the apartment; one that could run QEMU rather slowly, albeit faster than the CI20, but which gave me access to Fiasco.OC-UX once again.

Since then, the hard yards of upstreaming Linux kernel device support for the CI20 has largely been pursued by the ever-patient Nikolaus Schaller, vendor of the Letux 400 mini-notebook and hardware designer of the Pyra, and a newer kernel capable of running KVM satisfactorily might now be within reach. That is something to be investigated somewhere in the future.

Back to the Topic

In my last article on this topic, I had noted that to take advantage of various features that L4Re offers, I would need to move on from providing a simple mechanism to access files through read and write operations, instead embracing the memory mapping paradigm that is pervasive in L4Re, adopting such techniques to expose file content to programs. This took us through a tour of dataspaces, mapping, pages, flexpages, virtual memory and so on. Ultimately, I had a few simple programs working that could still read and write to files, but they would be doing so via a region of memory where pages of this memory would be dynamically “mapped” – made available – and populated with file content. I even integrated the filesystem “client” library with the Newlib C library implementation, but that is another story.

Nothing is ever simple, though. As I stressed the test programs, introducing concurrent access to files, crashes would occur in the handling of the pages issued to the clients. Since I had ambitiously decided that programs accessing the same files would be able to share memory regions assigned to those files, with two or more programs being issued with access to the same memory pages if they happened to be accessing the same areas of the underlying file, I had set myself up for the accompanying punishment: concurrency errors! Despite the heroic help of l4-hackers mailing list regulars (Frank and Jean), I had to concede that a retreat, some additional planning, and then a new approach would be required. (If nothing else, I hope this article persuades some l4-hackers readers that their efforts in helping me are not entirely going to waste!)

Prototyping an Architecture

In some spare time a couple of years ago, I started sketching out what I needed to consider when implementing such an architecture. Perhaps bizarrely, given the nature of the problem, my instinct was to prototype such an architecture in Python, running as a normal program on my GNU/Linux system. Now, Python is not exactly celebrated for its concurrency support, given the attention its principal implementation, CPython, has often had for a lack of scalability. However, whether or not the Python implementation supports running code in separate threads simultaneously, or whether it merely allows code in threads to take turns running sequentially, the most important thing was that I could have code happily running along being interrupted at potentially inconvenient moments by some other code that could conceivably ruin everything.

Fortunately, Python has libraries for threading and provides abstractions like semaphores. Such facilities would be all that was needed to introduce concurrency control in the different program components, allowing the simulation of the mechanisms involved in acquiring memory pages, populating them, issuing them to clients, and revoking them. It may sound strange to even consider simulating memory pages in Python, which operates at another level entirely, and the issuing of pages via a simulated interprocess communication (IPC) mechanism might seem unnecessary and subject to inaccuracy, but I found it to be generally helpful in refining my approach and even deepening my understanding of concepts such as flexpages, which I had applied in a limited way previously, making me suspect that I had not adequately tested the limits of my understanding.

Naturally, L4Re development is probably never done in Python, so I then had the task of reworking my prototype in C++. Fortunately, this gave me the opportunity to acquaint myself with the more modern support in the C++ standard libraries for threading and concurrency, allowing me to adopt constructs such as mutexes, condition variables and lock guards. Some of this exercise was frustrating: C++ is, after all, a lower-level language that demands more attention to various mundane details than Python does. It did suggest potential improvements to Python’s standard library, however, although I don’t really pay any attention to Python core development any more, so unless someone else has sought to address such issues, I imagine that Python will gain even more in the way of vanity features while such genuine deficiencies and omissions remain unrecognised.

Transplanting the Prototype

Upon reintroducing this prototype functionality into L4Re, I decided to retain the existing separation of functionality into various libraries within the L4Re build system – ones for filesystem clients, servers, IPC – also making a more general memory abstractions library, but I ultimately put all these libraries within a single package. At some point, it is this package that I will be making available, and I think that it will be easier to evaluate with all the functionality in a single bundle. The highest priority was then to test the mechanisms employed by the prototype using the same concurrency stress test program, this originally being written in Python, then ported to C++, having been used in my GNU/Linux environment to loosely simulate the conditions under L4Re.

This stress testing exercise eventually ended up working well enough, but I did experience issues with resource limits within L4Re as well as some concurrency issues with capability management that I should probably investigate further. My test program opens a number of files in a number of threads and attempts to read various regions of these files over and over again. I found that I would run out of capability slots, these tracking the references to other components available to a task in L4Re, and since each open file descriptor or session would require a slot, as would each thread, I had to be careful not to exceed the default budget of such slots. Once again, with help from another l4-hackers participant (Philipp), I realised that I wasn’t releasing some of the slots in my own code, but I also learned that above a certain number of threads, open files, and so on, I would need to request more resources from the kernel. The concurrency issue with allocating individual capability slots remains unexplored, but since I already wrap the existing L4Re functionality in my own library, I just decided to guard the allocation functionality with semaphores.

With some confidence in the test program, which only accesses simulated files with computed file content, I then sought to restore functionality accessing genuine files, these being the read-only files already exposed within L4Re along with ext2-resident files previously supported by my efforts. The former kind of file was already simulated in the prototype in the form of “host” files, although L4Re unhelpfully gives an arbitrary (counter) value for the inode identifiers for each file, so some adjustments were required. Meanwhile, introducing support for the latter kind of file led me to update the bundled version of libext2fs I am using, refine various techniques for adapting the upstream code, introduce more functionality to help use libext2fs from my own code (since libext2fs can be rather low-level), and to consider the broader filesystem support architecture.

Here is the general outline of the paging mechanism supporting access to filesystem content:

Paging data structures

The data structures employed to provide filesystem content to programs.

It is rather simplistic, and I have practically ignored complicated page replacement algorithms. In practice, pages are obtained for use when a page fault occurs in a program requesting a particular region of file content, and fulfilment of this request will move a page to the end of a page queue. Any independent requests for pages providing a particular file region will also reset the page’s position in the queue. However, since successful accesses to pages will not cause programs to repeatedly request those pages, eventually those pages will move to the front of the queue and be reclaimed.

Without any insight into how much programs are accessing a page successfully, relying purely on the frequency of page faults, I imagine that various approaches can be adopted to try and assess the frequency of accesses, extrapolating from the page fault frequency and seeking to “bias” or “weight” pages with a high frequency of requests so that they move through the queue more slowly or, indeed, move through a queue that provides pages less often. But all of this is largely a distraction from getting a basic mechanism working, so I haven’t directed any more time to it than I have just now writing this paragraph!
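
For the curious, the basic queue behaviour described above can be summarised in a few lines of Python, in the spirit of the prototype mentioned earlier (this is a rough model, not the actual L4Re code):

from collections import OrderedDict

class PageQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()          # (file, region) -> page

    def fault(self, key):
        # A page fault for a file region moves the page to the end of the queue.
        if key in self.pages:
            self.pages.move_to_end(key)
            return self.pages[key]
        # Pages that have drifted to the front of the queue are reclaimed first.
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)
        page = object()                     # stand-in for an actual memory page
        self.pages[key] = page
        return page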

Files and File Sessions

While I am quite sure that I ended up arriving at a rather less than optimal approach for the paging architecture, I found that the broader filesystem architecture also needed to be refined further as I restored the functionality that I had previously attempted to provide. When trying to support shared access to file content, it is appropriate to implement some kind of registry of open files, these retaining references to objects that are providing access to each of the open files. Previously, this had been done in a fairly simple fashion, merely providing a thread-safe map or dictionary yielding the appropriate file-related objects when present, otherwise permitting new objects to be registered.

Again, concurrency issues needed closer consideration. When one program requests access to a file, it is highly undesirable for another program to interfere during the process of finding the file, if it exists already, or creating the file, if it does not. Therefore, there must be some kind of “gatekeeper” for the file, enforcing sequential access to filesystem operations involving it and ensuring that any preparatory activities being undertaken to make a file available, or to remove a file, are not interrupted or interfered with. I came up with an architecture looking like this, with a resource registry being the gatekeeper, resources supporting file sessions, providers representing open files, and accessors transferring data to and from files:

Filesystem access data structures

The data structures employed to provide access to the underlying filesystem objects.
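
Sketched in the style of the Python prototype (and simplified considerably), the registry acting as gatekeeper looks something like the following, with make_provider standing in for whatever creates the file-related objects; look-ups and creation are serialised so that preparatory activities cannot be interfered with:

import threading

class ResourceRegistry:
    def __init__(self):
        self.lock = threading.Lock()
        self.providers = {}                  # file identifier -> provider

    def get_provider(self, key, make_provider):
        # Only one party at a time may find or create the provider for a file.
        with self.lock:
            if key not in self.providers:
                self.providers[key] = make_provider(key)
            return self.providers[key]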

I became particularly concerned with the behaviour of the system around file deletion. On Unix systems, it is fairly well understood that one can “unlink” an existing file and keep accessing it, as long as a file descriptor has been retained to access that file. Opening a file with the same name as the unlinked file under such circumstances will create a new file, provided that the appropriate options are indicated, or otherwise raise a non-existent file error, and yet the old file will still exist somewhere. Any new file with the same name can be unlinked and retained similarly, and so on, building up a catalogue of old files that ultimately will be removed when the active file descriptors are closed.

I thought I might have to introduce general mechanisms to preserve these Unix semantics, but the way the ext2 filesystem works largely encodes them to some extent in its own abstractions. In fact, various questions that I had about Unix filesystem semantics and how libext2fs might behave were answered through the development of various test programs, some being normal programs accessing files in my GNU/Linux environment, others being programs that would exercise libext2fs in that environment. Having some confidence that libext2fs would do the expected thing leads me to believe that I can rely on it at least for some of the desired semantics of the eventual system.

The only thing I really needed to consider was how the request to remove a file when that file was still open would affect the “provider” abstraction permitting continued access to the file contents. Here, I decided to support a kind of deferred removal: if a program requested the removal of a file, the provider and the file itself would be marked for removal upon the final closure of the file, but the provider for the file would no longer be available for new usage, and the file would be unlinked; programs already accessing the file would continue to operate, but programs opening a file of the same name would obtain a new file and a new provider.

The key to this working satisfactorily is that libext2fs will assign a new inode identifier when opening a new file, whereas an unlinked file retains its inode identifier. Since providers are indexed by inode identifier, and since libext2fs translates the path of a file to the inode identifier associated with the file in its directory entry, attempts to access a recreated file will always yield the new inode identifier and thus the new file provider.
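
Continuing the sketch, deferred removal keyed by inode identifier might look as follows (illustrative names only, not the real code):

class Provider:
    def __init__(self, inode):
        self.inode = inode
        self.sessions = 0
        self.removal_pending = False

class ProviderRegistry:
    def __init__(self):
        self.providers = {}                  # inode identifier -> Provider

    def open(self, inode):
        provider = self.providers.setdefault(inode, Provider(inode))
        provider.sessions += 1
        return provider

    def unlink(self, inode):
        # The directory entry disappears, but existing sessions keep working.
        provider = self.providers.get(inode)
        if provider is None:
            return
        if provider.sessions == 0:
            del self.providers[inode]        # nobody is using it: remove at once
        else:
            provider.removal_pending = True

    def close(self, provider):
        provider.sessions -= 1
        if provider.sessions == 0 and provider.removal_pending:
            # Final closure: drop the provider and, with it, the old file.
            del self.providers[provider.inode]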

Pipes, Listings and Notifications

In the previous implementation of this filesystem functionality, I had explored some other aspects of accessing a filesystem. One of these was the ability to obtain directory listings, usually exposed in Unix operating systems by the opendir and readdir functions. The previous implementation sought to expose such listings as files, this in an attempt to leverage the paging mechanisms already built, but the way that libext2fs provides such listing information is not particularly compatible with the random-access file model: instead, it provides something more like an iterator that involves the repeated invocation of a callback function, successively supplying each directory entry for the callback function to process.

For this new implementation, I decided to expose directory listings via pipes, with a server thread accessing the filesystem and, in that callback function, writing directory entries to one end of a pipe, and with a client thread reading from the other end of the pipe. Of course, this meant that I needed to have an implementation of pipes! In my previous efforts, I did implement pipes as a kind of investigation, and one can certainly make this very complicated indeed, but I deliberately kept this very simple in this current round of development, merely having a couple of memory regions, one being used by the reader and one being used by the writer, with each party transferring the regions to the other (and blocking) if they find themselves respectively running out of content or running out of space.

One necessary element in making pipes work is that of coordinating the reading and writing parties involved. If we restrict ourselves to a pipe that will not expand (or even not expand indefinitely) to accommodate more data, at some point a writer may fill the pipe and may then need to block, waiting for more space to become available again. Meanwhile, a reader may find itself encountering an empty pipe, perhaps after having read all available data, and it may need to block and wait for more content to become available again. Readers and writers both need a way of waiting efficiently and requesting a notification for when they might start to interact with the pipe again.

To support such efficient blocking, I introduced a notifier abstraction for use in programs that could be instantiated and a reference to such an instance (in the form of a capability) presented in a subscription request to the pipe endpoint. Upon invoking the wait operation on a notifier, the notifier will cause the program (or a thread within a program) to wait for the delivery of a notification from the pipe, this being efficient because the thread will then sleep, only to awaken if a message is sent to it. Here is how pipes make use of notifiers to support blocking reads and writes:

Communication via pipes employing notifications

The use of notifications when programs communicate via a pipe.
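
Reduced to its essentials (and again closer to the Python prototype than to the L4Re code, with a condition variable standing in for the notifier and a single shared buffer standing in for the exchanged memory regions), the blocking behaviour looks like this:

import threading
from collections import deque

class Pipe:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = deque()
        self.notifier = threading.Condition()    # stands in for the notifier object

    def write(self, byte):
        with self.notifier:
            while len(self.buffer) >= self.capacity:
                self.notifier.wait()              # pipe full: block until notified
            self.buffer.append(byte)
            self.notifier.notify_all()            # wake a waiting reader

    def read(self):
        with self.notifier:
            while not self.buffer:
                self.notifier.wait()              # pipe empty: block until notified
            byte = self.buffer.popleft()
            self.notifier.notify_all()            # wake a waiting writer
            return byte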

A certain amount of plumbing is required behind the scenes to support notifications. Since programs accessing files will have their own sessions, there needs to be a coordinating object representing each file itself, this being able to propagate notification events to the users of the file concerned. Fortunately, I introduced the notion of a “provider” object in my architecture that can act in such a capacity. When an event occurs, the provider will send a notification to each of the relevant notifier endpoints, also providing some indication of the kind of event occurring. Previously, I had employed L4Re’s IRQ (interrupt request) objects as a means of delivering notifications to programs, but these appear to be very limited and do not allow additional information to be conveyed, as far as I can tell.

One objective I had with a client-side notifier was to support waiting for events from multiple files or streams collectively, instead of requiring a program to have threads that wait for events from each file individually, thus attempting to support the functionality provided by Unix functions like select and poll. Such functionality relies on additional information indicating the kind of event that has occurred. The need to wait for events from numerous sources also inverts the roles of client and server, with a notifier effectively acting like a server but residing in a client program, waiting for messages from its clients, these typically residing in the filesystem server framework.

Testing and Layering

Previously, I found that it was all very well developing functionality, but only through a commitment to testing it would I discover its flaws. When having to develop functionality at a number of levels in a system at the same time, testing generally starts off in a fairly limited fashion. Initially, I reintroduced a “block” server that merely provides access to a contiguous block of data, this effectively simulating storage device access that will hopefully be written at some point, and although genuine filesystem support utilises this block server, it is reassuring to be able to know whether it is behaving correctly. Meanwhile, for programs to access servers, they must send requests to those servers, assisted by a client library that provides support for such interprocess communication at a fairly low level. Thus, initial testing focused on using this low-level support to access the block server and verify that it provides access to the expected data.

On top of the lowest-level library functionality is a more usable level of “client” functions that automates the housekeeping needing to be done so that programs may expect an experience familiar to that provided by traditional C library functionality. Again, testing of file operations at that level helped to assess whether library and server functionality was behaving in line with expectations. With some confidence, the previously-developed ext2 filesystem functionality was reintroduced and updated. By layering the ext2 filesystem server on top of the block server, the testing activity is actually elevated to another level: libext2fs absolutely depends on properly functioning access to the block device; otherwise, it will not be able to perform even the simplest operations on files.

When acquainting myself with libext2fs, I developed a convenience library called libe2access that encapsulates some of the higher-level operations, and I made a tool called e2access that is able to populate a filesystem image from a normal program. This tool, somewhat reminiscent of the mtools suite that was popular at one time to allow normal users to access floppy disks on a system, is actually a fairly useful thing to have, and I remain surprised that there isn’t anything like it in common use. In any case, e2access allows me to populate images for use in L4Re, but I then thought that an equivalent to it would also be useful in L4Re for testing purposes. Consequently, a tool called fsaccess was created, but unlike e2access it does not use libe2access or libext2fs directly: instead, it uses the “client” filesystem library, exercising filesystem access via the IPC system and filesystem server architecture.

Ultimately, testing will be done completely normally using C library functions, these wrapping the “client” library. At that point, there will be no distinction between programs running within L4Re and within Unix. To an extent, L4Re already supports normal Unix-like programs using C library functions, this being particularly helpful when developing all this functionality, but of course it doesn’t support “proper” filesystems or Unix-like functionality in a particularly broad way, with various common C library or POSIX functions being stubs that do nothing. Of course, all this effort started out precisely to remedy these shortcomings.

Paging, Loading and Running Programs

Beyond explicitly performed file access, the next level of mutually-reinforcing testing and development came about through the simple desire to have a more predictable testing environment. In wanting to be able to perform tests sequentially, I needed control over the initiation of programs and to be able to rely on their completion before initiating successive programs. This may well be possible within L4Re’s Lua-based scripting environment, but I generally find the details to be rather thin on the ground. Besides, the problem provided some motivation to explore and understand the way that programs are launched in the environment.

There is some summary-level information about how programs (or tasks) are started in L4Re – for example, pages 41 onwards of “Memory, IPC, and L4Re” – but not much in the way of substantial documentation otherwise. Digging into the L4Re libraries yielded a confusing array of classes and apparent interactions which presumably make sense to anyone who is already very familiar with the specific approach being taken, as well as the general techniques being applied, but it seems difficult for outsiders to distinguish between the specifics and the generalities.

Nevertheless, some ideas were gained from looking at the code for various L4Re components including Moe (the root task), Ned (the init program), the loader and utilities libraries, and the oddly-named l4re_kernel component, this actually providing the l4re program which itself hosts actual programs by providing the memory management functionality necessary for those programs to work. In fact, we will eventually be looking at a solution that replicates that l4re program.

A substantial amount of investigation and testing took place to explore the topic. There were a number of steps required to initialise a new program:

  1. Open the program executable file and obtain details of the different program segments and the program’s start address, this requiring some knowledge of ELF binaries.
  2. Initialise a stack for the program containing the arguments to be presented to it, plus details of the program’s environment. The environment is of particular concern.
  3. Create a task for the program together with a thread to begin execution at the start address, setting the stack pointer to the appropriate place in memory where the stack should be made available.
  4. Initialise a control block for the thread.
  5. Start the thread. This should immediately generate a page fault because the memory at the start address is not yet available within the task.
  6. Service page faults for the program, providing pages for the program code – thus resolving that initial page fault – as well as for the stack and other regions of memory.

Naturally, each of these steps entails a lot more work than is readily apparent. Particularly the last step is something of an understatement in terms of what is required: the mechanism by which demand paging of the program is to be achieved.

L4Re provides some support for inspecting ELF binaries in its utilities library, but I found the ELF specification to be very useful in determining the exact purposes of various program header fields. For more practical guidance, the OSDev wiki page about ELF provides an explanation of the program loading process, along with how the different program segments are to be applied in the initialisation of a new program or process. With this information to hand, together with similar descriptions in the context of L4Re, it became possible to envisage how the address space of a new program might be set up, determining which various parts of the program file might be installed and where they might be found. I wrote some test programs, making some use of the structures in the utilities library, but wrote my own functions to extract the segment details from an ELF binary.
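
As an illustration of the first step, the following sketch (assuming a little-endian 64-bit ELF file, and written in Python purely for brevity) reads the program headers and collects the loadable segments; the field offsets are those given in the ELF specification:

import struct

PT_LOAD = 1

def read_load_segments(path):
    with open(path, "rb") as f:
        header = f.read(64)                                  # ELF64 file header
        e_entry, e_phoff = struct.unpack_from("<QQ", header, 0x18)
        e_phentsize, e_phnum = struct.unpack_from("<HH", header, 0x36)

        segments = []
        for i in range(e_phnum):
            f.seek(e_phoff + i * e_phentsize)
            p_type, p_flags, p_offset, p_vaddr, p_paddr, p_filesz, p_memsz, p_align = \
                struct.unpack("<IIQQQQQQ", f.read(56))       # one program header
            if p_type == PT_LOAD:
                segments.append((p_vaddr, p_offset, p_filesz, p_memsz, p_flags))
        return e_entry, segments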

I found a couple of helpful resources describing the initialisation of the program stack: “Linux x86 Program Start Up” and “How statically linked programs run on Linux”. These mainly demystify the code that is run when a program starts up, setting up a program before the user’s main function is called, giving a degree of guidance about the work required to set up the stack so that such code may perform as expected. I was, of course, also able to study what the various existing L4Re components were doing in this respect, although I found the stack abstractions used to be idiomatic C/C++ bordering on esoteric. Nevertheless, the exercise involves obtaining some memory that can eventually be handed over to the new program, populating that memory, and then making it available to the new program, either immediately or on request.

Although I had already accumulated plenty of experience passing object capabilities around in L4Re, as well as having managed to map memory between tasks by sending the appropriate message items, the exact methods of setting up another task with memory and capabilities had remained mysterious to me, and so began another round of experimentation. What I wanted to do was to take a fairly easy route to begin with: create a task, populate some memory regions containing the program code and stack, transfer these things to the new task (using the l4_task_map function), and then start the thread to run the program, just to see what happened. Transferring capabilities was fairly easily achieved, and the L4Re libraries and frameworks do employ the equivalent of l4_task_map in places like the Remote_app_model class found in libloader, albeit obfuscated by the use of the corresponding C++ abstractions.

Frustratingly, this simple approach did not seem to work for the memory, and I could find only very few cases of anyone trying to use l4_task_map (or its equivalent C++ incantations) to transfer memory. Despite the memory apparently being transferred to the new task, the thread would immediately cause a page fault. Eventually, a page fault is what we want, but that would only occur because no memory would be made available initially, precisely because we would be implementing a demand paging solution. In the case of using l4_task_map to set up program memory, there should be no new “demand” for pages of such memory, this demand having been satisfied in advance. Nevertheless, I decided to try and get a page fault handler to supply flexpages to resolve these faults, also without success.

Having checked and double-checked my efforts, an enquiry on the l4-hackers list yielded the observation that the memory I had reserved and populated had not been configured as “executable”, for use by code in running programs. And indeed, since I had relied on the plain posix_memalign function to allocate that memory, it wasn’t set up for such usage. So, I changed my memory allocation strategy to permit the allocation of appropriately executable memory, and fortunately the problem was solved. Further small steps were then taken. I sought to introduce a region mapper that would attempt to satisfy requests for memory regions occurring in the new program, these occurring because a program starting up in L4Re will perform some setting up activities of its own. These new memory regions would be recognised by the page fault handler, with flexpages supplied to resolve page faults involving those regions. Eventually, it became possible to run a simple, statically-linked program in its own task.

Supporting program loading with an external page fault handler

When loading and running a new program, an external page fault handler makes sure that accesses to memory are supported by memory regions that may be populated with file content.

Up to this point, the page fault handler had been external to the new task and had been supplying memory pages from its own memory regions. Requests for data from the program file were being satisfied by accessing the appropriate region of the file, this bringing in the data using the file’s paging mechanism, and then supplying a flexpage for that part of memory to the program running in the new task. This particular approach compels the task containing the page fault handler to have a memory region dedicated to the file. However, the more elegant solution involves having a page fault handler communicating directly with the file’s pager component which will itself supply flexpages to map the requested memory pages into the new task. And to be done most elegantly, the page fault handler needs to be resident in the same task as the actual program.

Putting the page fault handler and the actual program in the same task demanded some improvements in the way I was setting up tasks and threads, providing capabilities to them, and so on. Separate stacks need to be provided for the handler and the program, and these will run in different threads. Moving the page fault handler into the new task is all very well, but we still need to be able to handle page faults that the “internal” handler might cause, so this requires us to retain an “external” handler. So, the configuration of the handler and program are slightly different.

Another tricky aspect of this arrangement is how the program is configured to send its page faults to the handler running alongside it – the internal handler – instead of the one servicing the handler itself. This requires an IPC gate to be created for the internal handler, presented to it via its configuration, and then the handler will bind to this IPC gate when it starts up. The program may then start up using a reference to this IPC gate capability as its “pager” or page fault handler. You would be forgiven for thinking that all of this can be quite difficult to orchestrate correctly!

Configuring the communication between program and page fault handler

An IPC gate must be created and presented to the page fault handler for it to bind to before it is presented to the program as its “pager”.

Although I had previously been sending flexpages in messages to satisfy map requests, the other side of such transactions had not been investigated. Senders of map requests will specify a “receive window” to localise the placement of flexpages returned from such requests, this being an intrinsic part of the flexpage concept. Here, some aspects of the IPC system became more prominent and I needed to adjust the code generated by my interface description language tool which had mostly ignored the use of message buffer registers, employing them only to control the reception of object capabilities.

More testing was required to ensure that I was successfully able to request the mapping of memory in a particular region and that the supplied memory did indeed get mapped into the appropriate place. With that established, I was then able to modify the handler deployed to the task. Since the flexpage returned by the dataspace (or resource) providing access to file content effectively maps the memory into the receiving task, the page fault handler does not need to explicitly return a valid flexpage: the mapping has already been done. The semantics here were not readily apparent, but this approach appears to work correctly.

The use of an internal page fault handler with a new program

An internal page fault handler satisfies accesses to memory from the program running in the same task, providing it with access to memory regions that may be populated with file content.

One other detail that proved to be important was that of mapping file content to memory regions so that they would not overlap somehow and prevent the correct region from being used to satisfy page faults. Consider the following regions of the test executable file described by the readelf utility (with the -l option):

  Type           Offset             VirtAddr           PhysAddr
                 FileSiz            MemSiz              Flags  Align
  LOAD           0x0000000000000000 0x0000000001000000 0x0000000001000000
                 0x00000000000281a6 0x00000000000281a6  R E    0x1000
  LOAD           0x0000000000028360 0x0000000001029360 0x0000000001029360
                 0x0000000000002058 0x0000000000008068  RW     0x1000

Here, we need to put the first region providing the program code at a virtual address of 0x1000000, having a size of at least 0x281a6, populated with exactly that amount of content from the file. Meanwhile, we need to put the second region at address 0x1029360, having a size of 0x8068, but only filled with 0x2058 bytes of data. Both regions need to be aligned to addresses that are multiples of 0x1000, but their contents must be available at the stated locations. Such considerations brought up two apparently necessary enhancements to the provision of file content: the masking of content so that undefined areas of each region are populated with zero bytes, this being important in the case of the partially filled data region; the ability to support writes to a region without those writes being propagated to the original file.

The alignment details help to avoid the overlapping of regions, and the matter of populating the regions can be managed in a variety of ways. I found that since file content was already being padded at the extent of a file, I could introduce a variation of the page mapper already used to manage the population of memory pages that would perform such padding at the limits of regions defined within files. For read-only file regions, such a “masked” page mapper would issue a single read-only page containing only zero bytes for any part of a file completely beyond the limits of such regions, thus avoiding the allocation of lots of identical pages. For writable regions that are not to be committed to the actual files, a “copied” page mapper abstraction was introduced, this providing copy-on-write functionality where write accesses cause new memory pages to be allocated and used to retain the modified data.
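
Schematically (and again only as a sketch of the behaviour, not the real page mapper code), the two enhancements amount to the following:

PAGE_SIZE = 4096

def masked_page(file_data, offset, limit):
    # Serve a page for the given offset, zero-filled beyond the region limit.
    if offset >= limit:
        return bytes(PAGE_SIZE)                    # one shared all-zero page suffices
    chunk = file_data[offset:offset + PAGE_SIZE]
    return chunk + bytes(PAGE_SIZE - len(chunk))   # pad the tail with zero bytes

class CopiedPageMapper:
    # Copy-on-write: writes go to private pages instead of the original file.
    def __init__(self, file_data, limit):
        self.file_data = file_data
        self.limit = limit
        self.private = {}                          # page offset -> modified page

    def read(self, offset):
        if offset in self.private:
            return self.private[offset]
        return masked_page(self.file_data, offset, self.limit)

    def write(self, offset, data):
        page = bytearray(self.read(offset))
        page[:len(data)] = data                    # modify a private copy only
        self.private[offset] = bytes(page)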

Some packaging up of the functionality into library routines and abstractions was undertaken, although as things stand more of that still needs to be done. I haven’t even looked into support for dynamic library loading, nor am I handling any need to extend the program stack when that is necessary, amongst other things, and I also need to make the process of creating tasks as simple as a function call and probably also expose the process via IPC in order to have a kind of process server. I still need to get back to addressing the lack of convenient support for the sequential testing of functionality.

But I hope that much of the hard work has now already been done. Then again, I often find myself climbing one particular part of this mountain, thinking that the next part of the ascent will be easier, only to find myself confronted with another long and demanding stretch that brings me only marginally closer to the top! This article is part of a broader consolidation process, along with writing some documentation, and this process will continue with the packaging of this work for the historical record if nothing else.

Conclusions and Reflections

All of this has very much been a learning exercise covering everything from the nuts and bolts of L4Re, with its functions and abstractions, through the design of a component architecture to support familiar, intuitive but hard-to-define filesystem functionality, this requiring a deeper understanding of Unix filesystem behaviour, all the while considering issues of concurrency and resource management that are not necessarily trivial. With so much going on at so many levels, progress can be slow and frustrating. I see that similar topics and exercises are pursued in some university courses, and I am sure that these courses produce highly educated people who are well equipped to go out into the broader world, developing systems like these using far less effort than I seem to be applying.

That leads to the usual question of whether such systems are worth developing when one can just “use Linux” or adopt something already under development and aimed at a particular audience. As I note above, maybe people are routinely developing such systems for proprietary use and don’t see any merit in doing the same thing openly. The problem with such attitudes is that experience with the development of such systems is then not broadly cultivated, the associated expertise and the corresponding benefits of developing and deploying such systems are not proliferated, and the average user of technology only gets benefits from such systems in a limited sense, if they even encounter them at all, and then only for a limited period of time, most likely, before the products incorporating such technologies wear out or become obsolete.

In other words, it is all very well developing proprietary systems and celebrating achievements made decades ago, but having reviewed decades of computing history, it is evident to me that achievements that are not shared will need to be replicated over and over again. That such replication is not cutting-edge development or, to use the odious term prevalent in academia, “novel” is not an indictment of those seeking to replicate past glories: it is an indictment of the priorities of those who commercialised them on every prior occasion. As mundane as the efforts described in this article may be, I would hope that by describing them and the often frustrating journey involved in pursuing them, people may be motivated to explore the development of such systems and that techniques that would otherwise be kept as commercial secrets or solutions to assessment exercises might hopefully be brought to a broader audience.

Sunday, 26 June 2022

FSFE Information stand at Veganmania MQ 2022


FSFE information stall on Veganmania MQ 2022

From 3 to 6 June 2022, the Veganmania street festival took place at the Museumsquartier in Vienna. Despite not having happened for two years due to the Corona pandemic, it has over the years developed into the biggest vegan street event in Europe, with tens of thousands of visitors every day. Of course there were plenty of food stands with all kinds of climate- and animal-friendly delicious meals, but the festival also had many stands selling other things. In addition, many NGO tents were there too, informing about important issues and their work.

As has been tradition for many years, the local volunteers group manned an FSFE information stand from Friday noon until Monday night. It was exhausting because only two volunteers manned the stand. But we both stayed there the whole time, and the interest of so many people confirmed once more how well we have optimized our assortment of information material without losing the ability to bring everything at once using just a bicycle.

The front of our stall was covered with a big FSFE banner, while the sides were used for posters explaining the four freedoms and GnuPG email encryption. (We very soon need to replace our old posters with more durable, water-resistant paper, since the old ones have gotten rather worn down and don’t look very sleek any more with all the tape pieces holding them together.) In addition we use a small poster stand we built ourselves from just two wooden plates and a hinge, made of leftover material from a DIY centre. Unfortunately, this time we didn’t have any wall behind us where we would have been allowed to put up any posters or banners.

Our usual leaflet stack has also proven to be very handy. Since most people talking to us are not yet familiar with free software, the most important piece is probably our quick overview of 10 different free software distributions. It is just printed in black and white in a copy shop, but on thick, striking orange paper. This way of production is rather important because it is very easy and not too costly to quickly print out more if we need to. It also allows us to adapt it often to new developments because we don’t have a big stack that might become outdated. Experience also shows that the generous layout, with enough border space to write ad-hoc or very personalised links onto the matte paper, comes in handy in almost all conversations. The thick paper it is printed on also gives it a much more valuable touch.

Less tech-savvy people find a good first introduction in our local version of the Freedom leaflet, which is basically RMS’s book Free Software, Free Society distilled into a leaflet. It combines this basic conceptual introduction, using tools as a comparison to software, with some practical internet links to things like privacy-friendly search engines and a searchable free software catalogue.

Our Free Your Android leaflet and the one on GnuPG email encryption attract much interest too. And of course many people like taking the “There is no cloud … just other people’s computers” stickers and some other creative stickers with them. Our beautiful leaflet on free games attracts people too, but this time someone reported back after the event that our link led him to some kind of Japanese porn page. After some back and forth, we discovered he had typed a capital O instead of a zero in the link: play0ad.com. We have used this folder for years, and he was the first to report that the link didn’t work for him. Unfortunately, we still have many of those games leaflets. I suspect in future we should point out that the link contains a zero and not a capital letter.

We are also considering putting together a more detailed walk-through for installing free software on a computer, and we often encouraged people to check out different distributions by visiting distrotest.net.

Normally we don’t even have time to get something to eat, because we usually talk to people until after all the other stands have closed up. But because we do not have a tent, on two evenings we needed to protect our material from the storm in the night. So we closed up an hour early (about 9pm) and could still get some delicious snacks.

It was a very busy and productive information stand with lots of constructive talks, and we are looking forward to the next information stall at the Veganmania in August on the Danube Island. Hopefully we will have managed to renew our posters and information material by then.

Recent Readings of Ada & Zangemann

In June, I did several readings of "Ada & Zangemann – A Tale of Software, Skateboards, and Raspberry Ice Cream" in German.

The first one was on 2 June at the public library in Cologne Porz, which also acts as a makerspace. There I read the book to visitors of the public library, which resulted in quite a diverse audience in terms of age and technical background.

At this reading, I also met for the first time the people from O'Reilly with whom I had worked over the last months: my editor Ariane Hesse and Corina Pahrmann, who is doing the marketing for the book. So I had the opportunity to thank the two of them in person for all the great work they did editing and marketing the book.

Here is the nice view while preparing for the reading next to the library at the Rhine.

The book "Ada & Zangemann" outside the public library in Cologne Porz before a reading.

On 9 June, I did a reading at the re:publica 22 conference in Berlin. The reading took place in the fablab bus, a mobile hackerspace meant to reach young people in areas that do not have a local hackerspace. Shortly before the start of the reading, Volker Wissing, the German Federal Minister for Digital Affairs, visited the bus. He had a quick chat with the children, and we briefly talked with him about the book, tinkering, and the self-determined use of software. As he was interested in the book, I gave him one of the copies I had with me.

After the reading there was a longer discussion with the children and adults about ideas for another book on the topic.

Volker Wissing, German Federal Minister for Digital Affairs and Matthias Kirschner with the book "Ada&Zangemann" before the reading at re:publica in the fablab bus

Finally, on 24 June, Germany's federal Digital Day, the city of Offenburg invited me to do a reading of the book. Initially the plan was to read the book to 2 school classes. But over time, more and more teachers contacted the organisers and expressed interest in participating in the reading with their classes as well. In the end, the reading took place in the largest cinema room in the city of Offenburg, with over 150 third-graders from different schools in Offenburg, and Chief Mayor Marco Steffens gave the introduction.

As with the other readings I have done so far, the part I enjoy the most is the questions and comments from the children. Although I also enjoy the discussions with grown-ups, it is so encouraging to see how children think about the topics of the book and what ideas and plans they have. In the end, the organisers had to stop the discussion because the children had to go back to school. I was also amazed at how attentively the children followed the story during the 35 minutes of the reading before we had the discussion.

The room before the reading, which can fit 365 people, and which offered enough room to have some space between the different classes.

The largest cinema room in Offenburg before the reading of "Ada&Zangemann" to over 150 3rd graders from Offenburg schools.

Some stickers and bookmarks I brought with me in front of the room:

Ada and Zangemann stickers, FSFE Free Software stickers, and Ada&Zangemann bookmarks on a desk in the cinema entrance area before the reading of the book in Offenburg

Chief Mayor Marco Steffens was doing an introduction before my reading:

Chief Mayor Marco Steffens and Matthias Kirschner in the front of the largest cinema room in Offenburg before the reading of "Ada&Zangemann" in front of over 150 3rd graders from Offenburg schools.

Myself at the reading, while showing one of my favourite illustrations by Sandra Brandstätter:

Matthias Kirschner reading "Ada&Zangemann" to over 150 3rd graders from Offenburg in the largest cinema room in Offenburg

After the reading I joined the people from the local hackerspace Section 77 at their booth for the Digital Day and had further interesting discussions with them and other people attending the event.

I am already looking forward to future readings of the book and discussions with people, young and old, about the topics of the book.

Sunday, 19 June 2022

Reproducible Builds – Telling of a Debugging Story

Reproducibility is an important tool to empower users. Why would a user care about that? Let me elaborate.

For a piece of software to be reproducible means that everyone with access to the software’s source code is able to build the binary form of it (e.g. the executable that gets distributed). What’s the big deal? Isn’t that true for any project with accessible source code? Not at all. Reproducibility means that the resulting binary EXACTLY matches what gets distributed. Each and every bit and byte of the binary is exactly the same, no matter on which machine the software gets built.

The benefit of this is that, on top of being able to verify that the source code doesn’t contain any spyware or unwanted functionality, the user is now also able to verify that the distributable they got, e.g. from an app store, has no malicious code added to it. If, for example, the Google Play Store injected spyware into an application submitted by a developer, the resulting binary distributed via the store would differ from the binary built by the user.

Why is this important? Well, Google already requires developers to submit their signing keys, so Google can modify software releases after the fact. Now, reproducibility becomes a tool to verify that Google did not tamper with the binaries.
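
To make that verification step concrete: with a reproducible build, checking an artifact boils down to comparing cryptographic hashes. Here is a minimal sketch in Java (the file name and published checksum are placeholders, not real values):

import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

public class VerifyArtifact {
    public static void main(String[] args) throws Exception {
        // locally built artifact (placeholder name)
        Path artifact = Path.of("pgpainless-core-X.Y.Z.jar");
        // checksum as published by the project (placeholder value)
        String published = "0123abcd...";

        // compute the SHA-256 digest of the local build
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(Files.readAllBytes(artifact));
        String local = HexFormat.of().formatHex(digest);

        System.out.println(local.equalsIgnoreCase(published)
                ? "Checksums match: the local build reproduces the published binary"
                : "Checksum mismatch: the binaries differ");
    }
}

If the build is reproducible, anyone can run this comparison and get a match; any mismatch points either at a non-reproducible build or at a tampered distributable.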

I try to make the PGPainless build reproducible as well. A few months ago I added some lines to the build script which were supposed to make the project reproducible by using static file modification dates as well as a deterministic file order in the JAR archive.

    // Reproducible Builds
    tasks.withType(AbstractArchiveTask) {
        preserveFileTimestamps = false
        reproducibleFileOrder = true
    }

It took a bit more tinkering back then to get it to work though, as I was using a Properties file written to disk at build time to access the library’s version at runtime, and it turns out that the default writer for Properties files includes the current time and date in a comment line. This messed up reproducibility, as that file would now be different each time the project got built. I eventually managed to fix that by writing the file myself using a custom Writer. When I tested my build script back then, both my laptop and my desktop PC were able to build the exact same JAR archive. I thought I was done with reproducibility.
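
For illustration, here is a minimal sketch of the idea (not the exact code used in PGPainless): instead of calling Properties.store(), which always emits a comment line with the current date, the key/value pairs are written out manually in a fixed order, so the file contents depend only on the data.

import java.io.IOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.TreeMap;

public class DeterministicPropertiesWriter {

    // Write key=value pairs without the date comment that Properties.store() adds.
    public static void write(Path file, Map<String, String> entries) throws IOException {
        // TreeMap gives a stable, sorted key order, independent of insertion order.
        Map<String, String> sorted = new TreeMap<>(entries);
        try (Writer out = Files.newBufferedWriter(file, StandardCharsets.ISO_8859_1)) {
            for (Map.Entry<String, String> entry : sorted.entrySet()) {
                out.write(entry.getKey() + "=" + entry.getValue() + "\n");
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // hypothetical usage: record the library version for lookup at runtime
        write(Path.of("version.properties"), Map.of("version", "1.3.2"));
    }
}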

Today I drafted another release for PGPainless. I noticed that my table of reproducible build hashes for each release was missing the checksums for some recent releases. I quickly checked out those releases, computed the checksums and updated the table. Then I randomly chose the 1.2.2 release and decided to check if the checksum published to Maven Central still matched my local checksum. And to my surprise it didn’t! Was this a malicious act by Maven Central?

Release 1.2.2 was created while I was on my journey through Europe, so I had used my laptop to draft the release. So the first thing I did was grab the laptop, check out the release’s sources in git and build the binaries. Et voilà, I got checksums matching those on Maven Central. So it wasn’t an attack, but for some reason my laptop was producing different binaries than my main machine.

I transferred the “faulty” binaries over to my main machine to compare them in more detail. First I tried Meld, which is a nice graphical diff tool for text files. It completely froze though, as apparently it is not so great for comparing binary files.

Next I decompressed both binaries and compared the resulting folders. Meld did not report any differences, the directories matched perfectly. What the heck?

Next I tried diff 1.jar 2.jar which very helpfully displayed the message “Binary files 1.jar and 2.jar are different”. Thanks for nothing. After some more research, I found out that you could use the flag --text to make diff spit out more details. However, the output was not really helpful either, as the binary files were producing lots of broken output in the command line.

I did some research and found that there were special diff tools for JAR files. Checking out one project called jardiff looked promising initially, but eventually it reported that the files were identical. Hm…

Then I opened both files in ghex to inspect their bytes in hexadecimal. By chance I spotted some differences near the end of the files.

The same spot in the other JAR file looked almost identical, except that the A4 was replaced by a B4. Strange. I managed to find a command which located and displayed all the places in the JAR files with mismatching bytes:

$ cmp -l 1.jar 2.jar | gawk '{printf "%08X %02X %02X\n", $1, strtonum(0$2), strtonum(0$3)}'
00057B80 ED FD
00057BB2 ED FD
00057BEF A4 B4
00057C3C ED FD
00057C83 A4 B4
00057CDE A4 B4
00057D3F A4 B4
00057DA1 A4 B4
...

Weird, in many places ED got changed to FD and A4 got changed into B4 in what looked like some sort of index near the end of the JAR file. At this point I was sure that my answers would unfortunately lie within the ZIP standard. Why ZIP? From what I understand, JAR files are mostly ZIP files. Change the file ending from .jar to .zip and any standard ZIP tool will be able to extract your JAR file. There are probably nuances, but if there are, they don’t matter for the sake of this post.

The first thing I did was to check which versions of zip were running on both of my machines. To my surprise they matched, and since I wasn’t even sure whether JAR files are generated using the standard zip tool, this was a dead end for me. Searching the internet some more eventually led me to this site describing the file structure of PKZIP files. I originally wasn’t sure if PKZIP was what I was looking for, but I had seen the characters PK when investigating the hex code before, so I gave the site a try.

Somewhere I read: “The signature of the local file header. This is always '\x50\x4b\x03\x04'.” Aha! I just had to search for the octets 50 4B 03 04 in my file! They should be in proximity to the bytes in question, so I just had to read backwards until I found them. Aaaand: 50 4B 01 02. Damn. This wasn’t it. But 01 02 looks suspiciously non-random; maybe I had overlooked something? Let’s continue reading the website. Aha! “The Central directory file header. This is always '\x50\x4b\x01\x02'.” The section even described its format in a nice table. Now I just had to manually count the octets to determine what exactly differed between the two JAR files.

It turns out that the octets 00 00 A4 81 I had observed changing in the file were labelled as “external file attributes; host-system dependent”. Again, not very self-explanatory, but something I could throw into a search engine.

A post on StackOverflow suggested that this had to do with file permissions. Apparently ZIP files (and by extension also JAR files) use the external attributes field to store the read and write permissions of the files inside the archive. Now the question became: “How can I set those to static values?”
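
The manual octet counting can also be automated. The following sketch (my own illustration, not part of the original debugging session) scans a JAR/ZIP file for central directory headers and prints each entry's name together with the Unix mode encoded in the upper 16 bits of the external file attributes field. It ignores ZIP64 and assumes a well-formed archive:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ExternalAttributes {
    public static void main(String[] args) throws Exception {
        byte[] zip = Files.readAllBytes(Path.of(args[0]));
        ByteBuffer buf = ByteBuffer.wrap(zip).order(ByteOrder.LITTLE_ENDIAN);

        for (int i = 0; i + 46 <= zip.length; i++) {
            // central directory file header signature: 50 4B 01 02 (little-endian 0x02014b50)
            if (buf.getInt(i) != 0x02014b50) {
                continue;
            }
            int nameLength = Short.toUnsignedInt(buf.getShort(i + 28));
            int externalAttrs = buf.getInt(i + 38);
            String name = new String(zip, i + 46, nameLength, StandardCharsets.UTF_8);
            // on Unix hosts, the upper 16 bits carry the file mode, e.g. 100644
            int unixMode = externalAttrs >>> 16;
            System.out.printf("%-50s %o%n", name, unixMode);
        }
    }
}

Run against two archives that differ like mine did, a sketch like this would show entries with mode 100644 in one and 100664 in the other, which is exactly the A4/B4 difference visible in the hex dump.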

After another hour of searching the internet with permutations of the terms jar, archive, file permissions, gradle, external attributes and zip, I finally stumbled across a bug report in another software project that described the exact same issue I had: differing JAR files on different machines. In their case, their CI built the JAR file in a Docker container and set different file permissions than the locally built file, hence a differing JAR archive.

In the end I found a bug report on the Gradle issue tracker which exactly described my issue and even presented a solution: dirMode and fileMode could be used to statically set the permissions for files and directories in the JAR archive. One of the comments in that issue reads:

We should update the documentation.

The bug report was from 3 years ago…

Yes, this would have spared me 3 hours of debugging 😉
But then I probably would also not have taken this little dive into the JAR/ZIP format, so in the end I’m not mad.

Finally, my build script now contains the following lines to ensure reproducibility, no matter the file permissions:

    // Reproducible Builds
    tasks.withType(AbstractArchiveTask) {
        preserveFileTimestamps = false
        reproducibleFileOrder = true

        dirMode = 0755
        fileMode = 0644
    }

Finally my laptop and my desktop machine produce the same binaries again. Hopefully this post can help others fix their reproducibility issues.

Happy Hacking!

Saturday, 18 June 2022

The KDE Qt5 Patch Collection has been rebased on top of Qt 5.15.5

 

Commit: https://invent.kde.org/qt/qt/qt5/-/commit/2ab84b12b09a6c642d7c16de392d85bbcd49bb6a

 

Commercial release announcement: https://www.qt.io/blog/commercial-lts-qt-5.15.5-released 


OpenSource release announcement: https://lists.qt-project.org/pipermail/development/2022-June/042659.html

 

I want to personally extend my gratitude to the Commercial users of Qt for beta testing Qt 5.15.5 for the rest of us.

 

The Commercial Qt 5.15.5 release introduced some bugs that were later fixed. Thanks to that, our Patch Collection has been able to incorporate the reverts for those bugs [1] [2] [3] [4], and Free Software users will never be affected by them!




Thursday, 02 June 2022

Europe Trip Journal – Entry 30: Thank You for Traveling with Deutsche Bahn

It is time to go home. This morning I woke up at 9:20 and decided to get some breakfast. Afterwards I returned to my room and was greeted by my room mate H. who told me that the cleaning personnel wanted to do their job and that checkout was supposed to be done at 10:00. It was already 10:20, so I hurried and quickly collected all my belongings.

Then I went to the reception to check out and afterwards to the community area to spend my remaining hour until the train would leave. H. also came down at some point, and later we said farewell. Then it was time for me to go to the station.

On my way I was sunk in thoughts about political events. As a European, it is my opinion that tragic events such as the Uvalde shooting are the result of ludicrous gun ownership regulations. Americans often state personal safety as the reason why they would need to have a gun. However, if everyone around you has a gun, anyone could shoot you at any second! I personally believe that this is part of the reason why American cops are so quick to open fire. They know that it is very likely that the other person has a gun and therefore presents a threat.

So the obvious solution is fewer weapons. However, how does this solution apply to Russia’s war against Ukraine? Isn’t it a double standard to call for fewer weapons in the USA while at the same time demanding the delivery of weapons to Ukraine?

I’d argue that those are totally different situations. The American people are not at war with one another. Putin’s Russia, however, ruthlessly assaulted the Ukrainian people, and as Europeans we need to support them. And in my opinion the best way to do so is by enabling Ukraine to defend itself; therefore we need to deliver heavy weaponry. In Germany many people have the mantra of “Never again war”. We as Germans must never again go to war. However, this statement is not precise enough: Germany must never again participate in an offensive war. The proposition that Ukraine should surrender parts of its country to Russia as part of some compromise (as disgustingly phrased by the Emma magazine) in order to settle the war is arrogant and short-sighted. Alice Schwarzer should know better about victim blaming.

Those are hard pills to swallow, but the fact that Putin has not stopped his attack after nearly the whole world told him to stop shows that there is no other way to stop him than a sufficiently armed Ukraine and the message that the West will not leave our friends alone. I arrived at this conclusion and at the train station at roughly the same time.

My ICE was waiting at the platform and boarding had already begun. The DB logo on the train made it strangely clear that my trip was over. I got to my seat and soon the train started rolling. 3 hours or so later, the conductor announced Mannheim, the stop where I was supposed to switch to another ICE to Münster. Up to this point everything had gone smoothly and the train was on time. Then it stopped – and the stop was not Mannheim.

The conductor struggled to tell the passengers in German, English and French that this was not our scheduled stop and that they should please keep the doors closed. This was just an unscheduled, unplanned halt, and we would soon continue. So we waited. At some point the train started moving again, only to stop once more a few metres later. My 20-minute buffer for switching trains was melting away.

Finally the train stopped in Mannheim, with 7 minutes to switch, so I quickly left the train and started looking for the platform I was supposed to go to. The train was scheduled to depart from platform 4. Arriving there, I was greeted by the digital sign reading that the train was instead stopping at platform 2A. Okay. I went down the underpass again and over to platform 2.

Track 2 was still occupied by an RE train. A loudspeaker voice announced that the train was incoming and was once again expected to stop at platform 2A. While the A-section was free, I doubted that the whole length of the train would fit there. Suddenly the sign changed once again, announcing that the train would now stop at platform 4 after all. What chaos! So I ran back, as I could already see the train rolling into the station.

Finally I got in and quickly found an unreserved seat. *phew*. And then the conductor announced that there was an issue with the signaling at the station, which meant that the train would depart with a delay. In the end they diverted the train, resulting in a total of 30 minutes of delay. You gotta love the German railway.

After 3 more hours and some announcements from an annoyed train driver, we came close to Münster. Outside my window it had begun to rain, and there was a little thunderstorm going on. This was the first time during my trip that I experienced bad weather. It had only rained briefly while I was taking a shower in Madrid, and in Marseille there had been literally a few droplets, so facing the thunderstorm was another strange signal that my trip was over.

Then the train stopped and I got off. 20 minutes later I entered the bus that would bring me home. I bought one of those new fancy 9€ tickets and the bus took me home. This was the end of my journey. I felt a bit like Bilbo Baggins, coming back to the Shire, carrying a bag and lots of stories to tell.

All in all my trip took 31 days in which I traveled 5161km by train, divided up into 20 separate train rides which took a total of 43h of travel time. I visited a total of 4 foreign countries and the farthest distance to home was 1546km.

During my journey I learned that I’m not too bad at being a foreigner. It’s easier for me to get around in a city where nobody knows me than I had previously thought. To me, this first journey was a complete success. In hindsight, I made some mistakes along the way; for example, I didn’t really explore Nantes at all. I probably should have spent some more time researching cities before going there, and maybe spending more time per stop is better than rushing from station to station. However, that is okay with me.

All in all this trip has taught me that I like traveling and that traveling is easier than I thought. Surely I will do more trips in the future, maybe even together with a friend.

Wednesday, 01 June 2022

KDE Gear 22.08 release schedule finalized

This is the release schedule the release team agreed on

https://community.kde.org/Schedules/KDE_Gear_22.08_Schedule

Dependency freeze is in five weeks (July 7) and feature freeze one week after that.

 

Get your stuff ready!

Tuesday, 31 May 2022

Europe Trip Journal – Entry 29: Prepare for Reentry

In a comment under a past blog post, Paul Boddie had recommended visiting the Musée de l’Air et de l’Espace to see some life-sized rockets. Huge thanks for this excellent recommendation!

Today I got up a bit earlier than yesterday, meaning I had to actually pay for breakfast again ;). Unfortunately the coffee was already running out, so I only got a single cup, which had to do. The milk was also empty, so no cereal this time. Well, there was yogurt as a worthy replacement.

Afterwards I took the metro to the train station Charles de Gaulle Étoile. From there I was supposed to take line B, which would bring me within walking distance of the museum. The metro system in Paris works with gates which only open if you have a valid ticket. Sometimes you also need to present the ticket when exiting the station, but normally the exit gates just open by motion detection.

To get to the platform for line B, I had to once again cross another gate. I presented the metro ticket I had used to get to the train station and the gate opened. Nice. Apparently this ride would count as one single trip.

The ride on the B train was rather short. Only about 10-15 minutes later I exited the train at my destination. I went to the exit and there was another gate. I inserted my ticket, but the gate did not open. I tried again, but the gate only sounded an alarm. Another try, this time with the ticket flipped around, but no success either. The display read something along the lines of “non valable”, invalid.

I looked around to see if there was personnel somewhere. Next to the gate stood a woman in uniform, but she explained that she worked for the airport and could not help me. I walked back to the station building and tried to get in to talk to a person at the counter, however to get there I would need to pass another gate. Fantastic.

Being stuck in a deadlock, I walked back to the gate I wanted to pass. There was a box with a button and a telephone symbol. I pressed the button, and after some very loud ringing a person answered. I explained that the gate would not let me pass. After I described the type of ticket, the person told me that my ticket was not valid outside of Paris. Oopsie. The man agreed to send an agent to my position. If nobody arrived within 5 minutes, I should call again.

Some minutes later, nobody had shown up and I was fed up with waiting. While I had been standing next to the gate, multiple people had either jumped over it or simply followed closely behind others passing through. So finally I resolved my issue on my own in some way (the details are not important) and continued on my way to the museum.

After crossing a highway bridge, I could suddenly see some white tips reaching over some distant buildings. Exciting! I finally reached the building and eventually found the entrance. I decided to be patient and first visit the “l’Air” part of the museum. They actually had a lot of models and even real planes there. One building focused on the pioneers of aviation and displayed some of the first planes humanity ever built.

Another building housed WWI military machines. Yet another hall contained helicopters. Finally one hall was dedicated to space exploration. Here they displayed a plethora of satellites hanging from the ceiling, lots of sounding and ballistic rockets and capsules. Awesome!

Unfortunately my images from inside the museum turned out kind of blurry. Also, I heard some museums don’t like visitors taking pictures, so let this be the only image from the inside. Let me compensate for that by telling you: if you have the chance to go to Paris, check out the Musée de l’Air et de l’Espace! It’s worth it!

Finally I made my way to the yard outside, where they displayed a dozen or so airplanes and – two full sized Ariane rockets! I had hoped that being able to walk up to one of these behemoths would allow me to fully grasp their scale. However, I have learned that my brain just isn’t capable of making sense of those dimensions.

Speaking of tourists taking touristy pictures…

Taking this image was problematic, as I had to position my phone at quite some distance from the vehicle in order to get it fully in frame, but at the same time my phone’s camera only had a ten-second timer, so I had to sprint to get into position in time.

After drooling over the rockets for a while, I got lunch at the museum’s restaurant and then checked out the other exhibition halls. One of them housed two Concorde supersonic planes. You could enter them and look around inside. What a marvel of flawed technology.

After watching some more war planes and inspecting some posters of military recruitment propaganda, I ended the day by taking the rocket selfie above and then left for the hostel once again. This time I bought a ticket before boarding the train and half an hour later I was at the hostel. There I sat down in the community area to write the last blog post, as well as parts of this one.

Later I returned to get a pint of beer to let my last night in Paris come to a close. Now it’s time to go to bed. Tomorrow I will return to Germany.

Europe Trip Journal – Entry 28: The Above and the Beneath

When I visited the roof top lounge of my hostel the day before, I had seen a place I had not yet been to and that a tourist in Paris probably should pay a visit to: The hill Montmartre with the Basilique du Sacré Cœur on top.

Therefore, the next day after getting up, I went down to the hostel’s breakfast area to get ready for the day. I was a bit late today; it was already 9:50, so the lady at the reception gave me a free breakfast stamp. I still got the full breakfast and did not have to hurry at all, so for me it was a pure win. After breakfast I left and went to the nearest metro station. A few line changes later I got off at the station Anvers, which is very close to the Sacré Cœur. I could actually already see it from there.

The basilica is a popular attraction for tourists, as from its elevated position visitors have a spectacular panoramic view over all of Paris. I climbed the stairs that lead to the top of the hill and was greeted by street vendors trying to sell water bottles, small metal Eiffel Towers and other touristy gimmicks. Some of them also sold heart-shaped love locks which couples could attach to the fences around the church.

Paris truly is a MASSIVELY romantic city

You can visit the Sacré Cœur. It is possible both to go inside (it’s free) and to climb the 300 or so steps to the top. First I went inside the church. A churchman quickly signalled me to take off my hat. I was a bit taken aback by this demand of respect, but of course, being the guest, I followed his request.

In retrospect I realize that during my trip I have taken lots of photos of churches and cathedrals, even though I’m not a religious person. I believe that many problems of society are directly or indirectly caused by religious fanaticism, ideology and authoritarianism, hence I’m not a big fan. My philosophy is that if you need someone else – like a book – to tell you how to be a good human, you are not a good human. However, you cannot NOT be impressed by the massive scale of cathedrals built hundreds of years ago. These architectural masterpieces stand as testament to the fact that religion can also elevate humanity, even if that may only be a side effect of exercising power.

Next I got in line to visit the rooftop. There was a sign saying it would take 292 or so steps to get to the top, so I decided to count. Two thirds of the way up, debating in my head whether a downward step should count as -1, 0 or 1 (what do you think?), I lost track and gave up on counting. At the top was a panoramic gallery which provided a wide view over the city.

From up here, I could clearly make out where train tracks were cutting a swathe through the jungle of buildings that made up the city. Somehow the metaphor of a “Gleiserne Wunde aus Stahl” (roughly, a “wound of steel rails”) kept floating around in my mind.

On the way down, I could add another piece to my collection of photos of tourists taking touristy photos.

Out of the basilica again, I wandered a bit through the streets of Montmartre, just letting the experience wash over me. There were lots of artists offering to draw pencil portraits of people. Every one of them had a collection of drawings on display, and most of them looked the same. A bit lost in thoughts about whether you can separate an artist from their work or not, I stumbled across a small restaurant that offered pizza with fondue cheese. Intrigued, I decided to give it a try, as I was a bit hungry too.

In the end I unfortunately cannot recommend it, as the pizza was actually two halves of a baguette with some toppings and a layer of half-molten fondue cheese on top. Given that it was quite cold that day and I was sitting in the shade, it did not take long for my meal to get cold. I’m not sure what the true definition of pizza is, but all I can say is that pizza let me down for a second time during my trip.

After this disappointment I decided to head to the next metro station. Metro stations in Paris look funky, by the way. Not all entrances look the same, but some are designed to mimic plants in nature. In my opinion they look rather spooky and remind me of Tim Burton movies.

I decided to explore the metro network of Paris for a change. On my way from the train station to the hostel I had come across a station that had been stuck in my head for some time. Station Arts et Métiers has quite some steampunk vibes. To get there, I had to switch lines once, but then I exited at the station.

This must have been Jules Verne’s favorite metro station! There were signs pointing to the exit labelled “Musée des Arts et Métiers”. Oh, interesting, a museum dedicated to crafts and technology. I followed the signs upstairs and found the museum. Unfortunately it is closed on Mondays. A bit disappointed, I decided to maybe check it out the next day.

It got really chilly, so I took the metro back to my hostel. There I remembered that someone had recommended that I pay a visit to the Mozilla office in Paris. I searched for it on OpenStreetMap and it was actually super close to my place. So I left the hostel again, and 10 minutes later I stood in front of the building. From the outside there was not a single sign that this was Mozilla’s office. I tried one door, but it was locked. In another part of the building I found a door that opened onto a hall with a small reception desk and some guards.

Asking whether I could visit Mozilla turned out to be a bit complicated, as the guards could only speak very little English and I only very little French. Luckily there was an electrician who could translate. A bit of confusion later, one of the guards offered to escort me to the office. Apparently Mozilla does not get regular visitors, as the guard did not know where the office was either. It turned out he spoke German however, so at least I could now explain my endeavour a bit better.

After not finding any signs of Mozilla in the first half of the building, we went to the door that I had tried before and the guard let me in. We took the elevator up and voilà, there were Mozilla signs on the walls. However, unfortunately nobody answered our ringing (it was probably already after closing time), and there was a sign stating that no non-essential visitors were allowed during the pandemic. So we left the building again and I thanked the guards for their efforts.

The Mozilla wiki said that you could also message Mozilla staff in an IRC channel; however, they recently transitioned to Matrix and apparently have not yet updated the wiki page. I briefly tried to search for a chat room related to the Paris office, but my Matrix server kept timing out. Oh, you brave, shiny, new and terribly inefficient technology, you keep amazing me every time 😉

Back in the hostel I watched some videos and tried to kill some time. My plan was to visit the Eiffel Tower at night to check out the famous light installation. So when the sun started to go down, I once again left for the metro. Arriving at the Eiffel Tower, I searched for a grocery store to get some affordable beers. My first choice was already closed, while the next closest store was in the process of closing. I quickly hopped in, grabbed two beers without a second thought and went to the checkout. Then I walked back to the Eiffel Tower.

During my first visit I had only checked out the summit and then left by crossing the Seine, so I had not seen the large grass area which people used to sit on. This time I wanted to sit down there to get the full experience. I found a nice spot with a hedge to lean against and sat down.

On the other side of the hedge were 2 violinists playing along to some pop songs from a loudspeaker. I wish they had not. At least one of them was probably very new to playing the violin and was constantly at least 2 full notes off. It sounded horrible, but I also enjoyed listening to them for some strange reason. It was also interesting to see that some people seemed to unironically enjoy their playing.

And then the Eiffel Tower started glowing. It was a warm, orange-ish light and looked quite pretty against the darkening sky.

As you can see on my grainy face, my camera did its best to lighten up the photo, so just imagine the sky being more of a dark blue :D.

I had heard stories of it being illegal to photograph the light installation of the Eiffel Tower due to copyright issues, but apparently this is only true for professionals. So I guess, to be on the safe side of things, I need to make some kind of very inappropriate joke now to disqualify myself from that category.

Police in Paris have finally caught the elusive mime known for masturbating in public and harassing tourists.
In a statement, Police Chief claims “he came quietly”.

I’m sorry. Joke from upjoke.com

And so I sat there, amongst a crowd of other tourists, drinking random French beer, listening to off-key violins and watching the Eiffel Tower. This was my second-to-last night in Paris. Only now did I realize that this adventure would soon be over. In some sense I was looking forward to getting home again, to being able to sleep in my own bed, not having to worry about sleepless nights due to loud snoring. But this will surely not be my last travel adventure, so I’m not sad that this trip is coming to an end.

At some point the light show on the tower changed to a twinkling sparkle, accompanied by an outcry of awe from the crowd. I enjoyed the spectacle for a few more minutes and then went home to the hostel again. And so another day in Paris came to an end.

Monday, 30 May 2022

Europe Trip Journal – Entry 27: Oh Champs-Élysées

Yesterday I had some breakfast in my hostel and after that I took the metro to the Châtelet station which is close to the Seine. When I was in Paris near the start of my trip (which is almost 4 weeks ago at this point) I had walked down the riverside path of the Seine and therefore had missed some points of interest that lay behind the quay wall.

One of these things is the famous Louvre. I did not plan on visiting the inside, as I’m not really into looking at paintings, so I settled on seeing the building from the outside. The campus (I’m not actually sure if it all belongs to the Louvre) is huge. You are encircled by large facades and stared down at by statues of famous people on the balconies above. Then, walking through an archway, you enter a large courtyard with the glass pyramids in the middle.

I still have an idea for some future art project, in case there are artistic people among those following my journey so far: taking pictures of tourists taking pictures of themselves. At the Louvre’s glass pyramids they have placed some granite blocks for tourists to stand on and take quirky images of themselves holding the pyramid by its tip. As a consequence, every tourist takes the exact same image.

Guess what their photo looks like

Behind the pyramids is a park with lots of statues. Some of them depict rather questionable motifs, like this one of a woman who is clearly uncomfortable with being photographed.

Poor woman must have slipped just before the sculptor immortalized her misery

Imagine modelling for a sculptor and this is the result…

The park also had another water basin where children could play with toy sailing boats. It was rather cold that day; I guess it was the coldest day of my journey so far. You can probably tell by the people in this image wearing jackets.

After the park came a big square with an obelisk in the centre. As far as I know, this obelisk was a present from an Egyptian ruler to a French one.

They even still have packing information on the bottom

Leading from the obelisk down to the Arc de Triomphe is the Avenue des Champs-Élysées. Given the extent to which this street is romanticized in culture, I would have expected it to be spectacular. However, it was a bit underwhelming, as it took a few street signs for me to even notice that I was in fact walking down the Champs-Élysées. To me it looked part shopping promenade, part park lane.

When I reached the Arc de Triomphe, I admit I may have broken some traffic rules. The arc stands in the middle of a large roundabout with multiple lanes. Apparently there are underpasses for pedestrians to reach the centre without having to cross the street. However, I did not know that, so I just crossed the traffic lanes when I got the opportunity. Only when I crossed the traffic again on the other side of the arc did I notice that what I had thought was a metro station was in fact an entrance to the underpass. ¯\_(ツ)_/¯

At this point it got a bit windy, so I decided to head back to the hostel. I found an actual metro station not far away, and it took about 15 minutes or so to get me close to the hostel. I must say that I like the metro. It’s a shame that Münster doesn’t have a metro system. Being able to just walk to a station and have the next train arrive in less than 10 minutes is quite nice.

Back at the hostel I relaxed a bit in my room, then went shopping for some snacks and later went to the roof terrace. There I got some work done on my laptop. I considered getting a drink, but the prices were exorbitant, so I quickly discarded that plan.

And that already concludes yesterday. Originally I had planned to visit some ESA centres in Paris, but it turns out they only contain offices and are not open to visitors :/. Someone recommended that I visit the Mozilla office though, so I might try to check that out in the next few days.

Saturday, 28 May 2022

Europe Trip Journal – Entry 24 – 26: OpenPGP Email Summit

It’s been a while since the last blog post and a lot has happened. I will try to catch up, but I may not remember all the details. Nevertheless, here we go:

On Thursday morning I got up earlier to get some breakfast at the hostel this time. Last time I had missed it, but not this time. For a handful of euros the hostel was offering breakfast on an all you can eat basis. They had croissants, filtered coffee, orange and apple juice, yogurt and spreads like butter, jam and honey. I enjoyed it.

I briefly met A. again, who wished me a good journey, and then I left the hostel to catch the train to Geneva. The ride through the Alps was very impressive. I was looking out of the window most of the time, as the mountains appeared to grow higher and higher. Absurdly high. Every now and then the train stopped at a small village, and some people left the train while others got on. And then the train once more meandered through the green mountains of the French Alps.

During the trip I suddenly remembered that Switzerland is not part of the EU, so the roaming rules that apply to all EU countries and allow EU citizens to use mobile internet and telephony abroad without having to worry about astronomical bills from their provider would not apply. I quickly did some research on how bad it would be.

7cts/10KB. What. The. Fuck. I quickly disabled roaming and mobile internet. Guess I’ll be dependent on the availability of Wifi for the next days.

After arriving in Geneva, I had to walk through customs, but they did not check my luggage. It would have been annoying to unpack everything. My hotel (I hadn’t found a hostel, so I had opted for an Ibis budget hotel) was on the outskirts of the city, so I had to walk for about an hour, but I found it without problems. Still, the location of the hotel was a bit strange, as I had to pass the currently deserted Palexpo exhibition halls and wide, empty parking lots just to get to the building, which looked a bit like the staircase of a parking deck. But I had my own room, so hey 🙂 Oh, and they gave me a “free” ticket for public transport during my stay, so that’s nice.

After freshening up, I took the bus to the other end of Geneva to the offices of Proton (formerly ProtonMail). They hosted the 6th OpenPGP Email Summit, which I was going to attend. When I got to the building, I had to call the office upstairs, and a member of the Proton team came down to fetch me. On Thursday we only had an informal meeting of the participants who had already arrived. The real discussions would take place on Friday and Saturday, although when I entered the office people already had discussions going.

After the meeting we went to a small bar to get some drinks and a small dinner. This being my first OpenPGP meetup (apart from the Sequoia meeting I was invited to some time ago), it was nice getting to know many of the people I already knew from the internet in person.

When people started heading back to their hotels later that evening, I figured this was a good idea for me as well. When everybody was gone, I noticed that I actually didn’t know how to get to my hotel. Organic Maps unfortunately only has offline public transport support for metros and trams, so I searched for a tram route which would at least take me a bit closer to my destination. Unfortunately it turned out that the tram was cancelled when I got to the station. I walked back to the last bus stop I had seen and asked the driver of the next bus which line would go to the airport. I’m a bit proud of myself, because the driver could not speak English, so I had to try in French, and apparently it worked out. “Vingt-trois” was the answer.

Now I only had to find out from where the 23 would depart. I asked a lady at the bus stop and she told me which bus to take and where to switch. So at 00:20 I was back at the hotel.

The next morning I got up at 8:00 and went to the Proton offices without having breakfast. Luckily it turned out they had croissants and coffee there. We had 2 presentations, and afterwards everybody gathered potential topics to discuss. Then we voted on all the candidates and picked the most popular ones for one-hour sessions. There were 3 tracks with 3 sessions each.

The sessions were very productive. I learned some crazy facts about things that enterprise OpenPGP providers would have to deal with and got a good insight into the daily challenges of client developers and the future of the protocol. It will be exciting 🙂

After 8 long, exhausting hours it was time for dinner. We met at an Italian restaurant and had some very nice food, beer and wine. I really enjoyed socializing with all these like-minded folks. At some point it was time to get back to the hotel again, but this time I asked someone from the gathering to quickly let me use their phone to find a route.

The night was not very restful. I had eaten too much, so my stomach was complaining a bit, and it took some time until it finally allowed me to sleep. The next morning I overslept by a few minutes and then had to pack my stuff, as I had to check out again.

After successful check-out I took the bus to the Proton offices again and we had another day of sessions. In the end we collected some actionable items and assigned people to work on those tasks. And then I said farewell and left for my train.

During my stay in Geneva I didn’t take many photos, mainly because most of the time I was with other people. There was a gentlemen’s agreement not to take photos of others, so that’s the reason why this post does not have any pictures.

My train to Paris was supposed to depart at 18:30. At the station I had to walk through customs again, and they picked the person in front of me for a detailed check. I was let through though. When I got to the platform, the sign said the train was delayed by 25 minutes. Fine. Then suddenly the sign read that the train was delayed indefinitely. Not good. In the end the train was about 70 minutes late, so it was 23:30 when I got off in Paris. 2 metro rides later I got to my hostel and checked in.

I want to say thank you to Proton for hosting the event, and to everyone who attended the summit and contributed to making it such a nice experience 🙂

Now I have to get some sleep. Good night 🙂

Wednesday, 25 May 2022

Europe Trip Journal – Entry 23: A Hike and a Hat

Last night I was constantly reassured that I was not alone by the constant snoring of my roommates. Even with earplugs in I could not block out their nonverbal consolation. At some point I was even worried about the health of the person sleeping in the bed below me, as the rhythm of their breathing was almost two times faster than mine. That couldn’t be healthy!

This was the reason why I only got up at 11:00 the next morning. I took a shower, reusing my old shirt as a replacement for a towel, and then went down to get some breakfast. Unfortunately the hostel only offered breakfast until 10:30, so I had to opt for a cappuccino and a brownie instead. They still tasted nice, so I did not mind too much.

After finishing the brownie (I had downed the cappuccino at a pace that surprised even me), I went back to my bed for an hour or so, but then it was finally time to go out. I had not yet seen much of Lyon, so it was time to change that.

The bridge was shaking when a jogger passed by – also, is that the Eiffel Tower in the distance???

On Wikivoyage I read that there was a historic part of the city dating back to the year 1400 or so, so I thought this would be a good place to start. I located one of the historic quarters and started walking there. When I arrived though, I was quite disappointed. Nothing here looked like it could be from the Middle Ages. So I ditched that plan and simply chose the most interesting-looking street to follow.

On a hill I could make out a church-like building towering over the city. Surely being up there would grant a really nice view over Lyon! The path upwards was steep and exhausting, so when I finally reached the top after following some serpentines and climbing some stairs, I needed a short pause. In a restaurant I bought a vegetarian salad and a coke and then sat down outside in the sun.

View over the city

Luckily I had taken all the wrong turns on my way up, so the way down was pretty straightforward. I could even take a shortcut, following some very long stairs down, which even had a street name attached to them.

After passing some more impressive churches and huge buildings, I found myself in a part of the city which was strangely modern, yet old-fashioned. I’m struggling a bit to find the right words to describe the feeling I had while walking through the streets and squares here, but let me try nonetheless.

This part of the city looked like something from the middle of the 20th century. The houses were white, and there was a large, open square with a splendid fountain in the middle. I suddenly had the strange feeling of living in a world that stood for ideals. This is what a person in the golden age of the 1920s must have felt like, filled with optimism for a brave new world of reason and peace. A world of progress, which values achievement and advancement over despotism and conservatism, science over religion. I don’t know why I felt that way, or what made me feel so, but nevertheless I did.

Wandering through a passageway, I came across a store that sold French hats. Since this was expected to be my last stay in France on this tour, it was now or never, so I entered the shop and ended up buying a hat. Surely, wearing this hat, I would perfectly blend in with the local population. Later I learned that my hat was from Italy :P. I normally don’t enjoy buying clothing, and a hat is something I would never really have identified myself with before. Still, for some reason this hat made me very happy, as it perfectly captured and somehow embodied what I felt in that moment.

Look at my amazing hat!

Back at the hostel, I decided not to get dinner at a restaurant today, but to instead use the hostel’s guest kitchen to cook my own meal. On OpenStreetMap I located a convenience store only about 5 minutes away. Proudly sporting my new hat, I bought some pasta which I could portion into a paper bag to avoid leftovers, as well as a bottle of beer and an avocado. I also needed some cheese for the pasta, but did not want to get a whole block, since that would be too much for me. So I got creative and bought some easy-to-portion Babybel cheese.

Back at the hostel, the pasta was quickly done. Unfortunately the Babybel turned out to be hard to grate, so I just cut it up and added it to the pasta. Meanwhile I got into a conversation with a Frenchman called A., with whom I talked about Astérix, the history of France and Germany, economics, as well as the war in Ukraine. We had dinner together, and he later wished me a good journey.

Tuesday, 24 May 2022

Europe Trip Journal – Entry 22: Drowsy and Annoyed

Today was a rather slow day. I got up at 8:20 to get breakfast at the hostel. This time they had bread \o/. I was the only one eating there, everybody else (there was a group of elderly people staying at the place too) must have gotten theirs already. At some point C. came down to get her breakfast too. She however decided to eat in the garden.

After I finished my meal, I went back to my room to lie down on the bed for an hour or so, before it was time to pack my bag. After checking out at 11:00, I said farewell to C. and headed to the metro station. Halfway there I noticed that I had forgotten my leftover beer bottles in the hostel’s fridge, but I decided not to go back. May someone else enjoy a drink on my behalf 🙂

I had 3.5 hours left before my train, so how best to spend that time? In the metro station I studied the network map and decided that I wanted to check out the port near the city centre, which I had not visited yet.

When I walked up the stairs at the port station (I unfortunately forgot its name), the smell of fish immediately hit my nose. Et voilà, directly next to the metro entrance was a big stand displaying dead fish.

More interesting than the fish was a pharmacy sign I spotted on the other side of the bay. Having no high-speed internet left on my phone (come on, they say they throttle the connection down to lower speeds, but in reality they suffocate the phone’s connectivity! I would not mind not being able to watch videos while off WiFi, but at least text-only pages should still work! Right now, not even translator pages would load without timing out. Someone needs to sue! Rant over), I was unable to look up the French word for coughing, so I just winged it and asked the pharmacist if she could speak English. Luckily she could, so I was able to get some medicine. Hopefully my cough will be gone by the end of the week.

With this out of the way, I wandered around without a plan for some time and eventually decided to go to the local Burger King in hope of some WiFi. At this point I was annoyed by the waiting time and just wished that time would run faster. After finishing a vegan Whopper and an iced tea there were still about 2 hours left, but I had had enough of the port area, so I took the metro to the train station nonetheless.

When I entered the station, my hand searched for my ticket, but I could not find it. I searched my other pocket, but there was no trace of the ticket. Damn. Annoyed, I was about to buy another ticket when I saw one lying on the access control gates. Was that mine? I took it and placed it on the sensor, and sure enough, the gate opened. Kind French people 🙂

My wait at the train station was unspectacular and just as annoying. I sat down on a bench and listened to music. I was really tired, so I almost fell asleep again and again. Time still seemed to crawl, so it felt like an eternity until the schedule display finally showed the platform on which my train was waiting. The train ride was the same deal. I just wanted to get to Lyon, so I listened to music and tried not to fall asleep.

In Lyon I took the metro. I had to switch 2 times, the third leg being just one more station. The last metro train was waiting at a station that was quite aggressively tilted. So far, in fact, that there was a toothed rack between the tracks, so apparently this was a cog railway! I had never taken one before. A few minutes later, the train departed and rolled into the deep black of the tunnel. After a rather steep descent it eventually levelled out. Then it felt like the train was accelerating aggressively, as I was pushed towards the back. However, there was no engine howl or anything; the sound of the engine stayed the same. Finally I realized that the train was on a super steep ascent. And then it stopped at the station. After getting off the train, I had to take a picture of the station:

Note, that the camera is held perfectly horizontally

The hostel was just a few metres away, so I could finally check in and get some more rest in my bed. What an annoying day.

In the evening I did some work in the community area of the hostel. There is a new revision of the Stateless OpenPGP Protocol specification which adds support for password-protected keys. I got to work implementing these changes in sop-java, which is the generic interface library used by pgpainless-cli and pgpeasy.

Another guest sat down next to me with a jute bag displaying a “there is no cloud” logo. He pulled out a ThinkPad covered in stickers, just like mine. I remarked that those were some nice stickers, and we got to talk for a short while. He asked me if I was attending MixIT, an IT conference which was apparently happening in Lyon right now. I said no, and he told me the conference was held exclusively in French anyway. Still, it would have been nice to combine my stay here with a visit to a hacker conference 😀

Now it’s time for me to go to bed. Hopefully the medicine I got will work and I won’t keep everybody in my room awake.

Monday, 23 May 2022

Akademy 2022 Call for Participation is open

The Call for Participation for Akademy is officially open!

...and it closes relatively soon: Sunday, the 12th of June 2022!

You can find more information and submit your talk abstract here: https://akademy.kde.org/2022/cfp

If you have any questions or would like to speak to the organizers, please contact akademy-team@kde.org

Europe Trip Journal – Entry 21: Ferris Wheels and Train Tickets

This morning I got up at 8:00, which is an hour earlier than during the rest of the trip – and let's not talk about my usual sleep schedule. The reason was that breakfast was served from 7:00 to 9:00 in the hostel. J. and I arrived at 8:15 or so, and to my disappointment there was no bread left. As a replacement they served Zwieback, onto which I applied a thick layer of jam. After breakfast it was time to say farewell to J., as he had to catch his train.

My plans for the day consisted of going to the beach. That's it. I had located a strip of sand on OpenStreetMap earlier and used a water fountain as a waypoint for Organic Maps. The app suggested taking the metro, so I walked out of the hostel in the direction of the station. On the way I crossed a park with fitness equipment. How nice is that? People here can go for a run and have free access to public sports equipment!

The metro station was massive. I turned a corner, expecting the usual stairs down into a rabbit hole where you need to mind your head to prevent injury, but no. This station was a whole different deal. Four long flights of stairs led down to what looked like a bunker entrance, several tens of meters below floor level. Across from the stairs I was taking was another set of stairs leading up again, and also some escalators.

Again, the image doesn't do justice to the massive scale of the station

Down in the station there was another set of stairs that finally led down to the platform. I found it fascinating that the metro in Marseille has rubber tires! I guess that is so it can provide those high acceleration values?

I had to switch metro lines once to get to my destination, and from there I had to walk for about 20 minutes to reach the beach. The spot I had chosen was a gravel beach, but the individual pebbles were all rounded smooth and actually felt nice to walk and sit on. I relaxed a bit, after which it was time to check out the water.

It was cold.

After only 2 minutes or so I had enough of swimming and went back onto the beach to lie down and re-heat using the power of the sun. A few hours later I grabbed my stuff and walked down the beach, as I had seen a Ferris wheel in the distance.

The guy in the ticket booth was asleep and it took me several attempts to wake him up. Finally a spirited “Bonjour?” through the small opening in the plexiglass woke him, so I could buy my ticket. The view was not as spectacular as, for example, the ride with the cable car in Barcelona, but I got a good overview nonetheless.

After two revolutions of the Ferris wheel I decided to head back to the hostel, so I had to walk all the way back to the metro station and take the same route back. After yesterday's miserable attempt at buying groceries on a Sunday, today I had another try. I found a shop which unfortunately was very overpriced, but I still got some beers, yogurts, chips and a chocolate bar.

Then it was time for dinner, followed by an hour of work. I added some tests to my pull request against Bouncy Castle, which I had reported on in an earlier post, and then spontaneously decided to attend an OpenPGP meeting which will take place at the end of the week in Geneva. For that reason I had to cancel my plans to go to Italy next and instead book train tickets and hostels for a slightly different route. I guess I will visit Italy another time :). Oh boy, Geneva is expensive!

And now I am sitting here, alone in the dining hall, writing this post. This is the only place in the hostel with the tri-factor of okay-ish WiFi coverage, a wall plug and comfortable seating. Unfortunately my coughs are back and they are super annoying. Thanks to C. for lending me her mug so I can sip some tea, which hopefully helps a bit. I didn't manage to get to a pharmacy today, but I will try again tomorrow.

Europe Trip Journal – Entry 19 – 20: Tranquility

My second day in Nimes was unspectacular. After eventually getting up, I went into the city to get some breakfast. I settled on a salad in a restaurant close to the Arena. When I had finished the salad, I wandered around the city.

You could visit the Arena, but I was a bit confused by mixed messages. Some places stated that entry was free, yet the sign at the entrance prompted me to buy a ticket for 10€. There were also guided tours for a higher price, so in the end I decided not to visit the Arena from the inside. Part of my reasoning was also that the place was currently undergoing some maintenance, so not all parts could be visited.

A few streets further on, I reached a big temple. Unfortunately it was closed at the time, so I could only take some pictures from the outside.

It was pretty hot with the sun standing high in the sky, so I soon took refuge in a supermarket. Here I bought some sweets, a liter of milk, and a bottle of blue Fanta, which I had never tried before. I also wanted to get some breakfast for the next day, so I decided to get some cereal. However, I would not want to take an opened pack of cereal with me later, so I was glad to find a “cereal station” where I could fill my desired small amount of cereal into a paper bag.

Back at the hostel I was exhausted. It was only around 14:00 or so, maybe earlier, but I was already drenched in sweat. I decided to lie in bed and wait until the bulk of the heat was gone. I ended up lying there for 4 hours or so, so it was early evening when I was ready to go for another walk.

I had eaten too many of the sweets I had bought earlier, so I had a sugar-induced headache. It was time for a real meal to dilute the sugar concentration. I stopped at a restaurant next to the one where I had gotten the Camembert fondue the day before and ordered a vegetarian burger. It came with fries and a small salad, and I also got a local beer from Nimes with it. Nice.

Finally I went to bed after a day that felt a bit wasted, but hey, I’m on vacation!

The next morning I got up and went down to the kitchen to get some cereal for breakfast. When I later left the hostel, I forgot to take the half liter of leftover milk with me, so in case you are staying at the hostel near the train station in Nimes, feel free to take my milk (it's labeled “Paul” ;)).

My train had 15 minutes of delay, so I had to wait a bit at the train station. The ride was uneventful and quick (only about an hour). Arriving in Marseille, as usual my first destination was the hostel. This time the place I had booked was farther away from the station than ever before: I had to walk about 5 km through the midday heat. I rarely met any locals; I guess they were clever enough to avoid the heat.

Marseille is mountainous and my hostel was located at the top of a hill. After an hour and a half I finally reached the place, but it would only open in about another hour. I found a bench, sat down and relaxed a bit. From the outside the hostel looked a bit like a haunted mansion. I couldn't make out any signs of life. Had I booked a closed place by mistake?

Will I ever escape this haunted house?

Eventually some people appeared on the premises, so I got up and, after searching for the entrance for a while (I had to ring for them to open a gate), went to the reception. The inside of the building was not what I had expected. It looked like a big mansion, with an atrium (I'm not sure if that's what it's called), lots of intricate decorations and a mosaic floor.

Quo Vadis!

What a fascinating place. Outside was a big garden with some tables and a very nice view over the city. I heard that they also offered sleeping in tents.

Earlier that day I had noticed that I was running out of clean clothes, so it was time for laundry day again. I asked at the reception if it was possible to do laundry here, and they said that there was a machine in the garden. I must have made a confused face, as the receptionist quickly followed up with “I will show you”. She guided me to a small shed in the garden where there were two washing machines. I put my stuff into one of them and then asked where to dry the clothes afterwards, to which she replied that there were clotheslines behind the shed.

In my room I met J., a traveler from the US. We exchanged a few words, but he seemed a bit tired, so I decided not to bother him too much.

As I was a bit hungry, I decided to buy some groceries next. Some chips and some yogurts would be nice, along with two or three bananas. On OpenStreetMap I located a Carrefour some 1.5 km away, so I grabbed a small bag and started walking. It was only when I stood before closed doors that I remembered it was Sunday. On the way back I saw a sign for a pizza place, so I wasn't out of options.

Back at the hostel my laundry was soon finished, so I got it out of the machine and went behind the shed, where there was a small maze of clotheslines. One was free, so I put my things on a little plastic table and started hanging them up. Usually I dislike doing laundry, as I deem it a very mundane, tedious task, but today it was different.

The sun was slowly starting to set and was sending its warm, friendly beams my way. Birds were singing and the tall grass was tickling my legs. A pleasant warm summer breeze carried the distant, muffled sounds of the city, which was slowing down after the day. It was a summer evening in France. Suddenly everything exciting and everything wrong that was going on in the world faded into the background. For a short moment priorities shifted and the world seemed to center around me and my laundry. It was a perfectly peaceful moment of tranquility and I enjoyed it.

Later I decided to get a pizza from the place I had seen earlier. I got a tomato-mozzarella pizza with olives. The place had no tables, as it primarily focused on delivery, but luckily I was able to order in situ. After only 10 minutes or so my pizza was ready and I walked back to the hostel.

In the garden I met C., a woman who turned out to be from Germany as well. Since everyone at the hostel was about to get dinner, I decided to join C. and J-L., a French guy with a Star-Trek’y first name, at their table. C. explained that she wanted to move to Marseille at some point and was currently looking for potential places. Soon J. also joined us. C. noted that J. and I looked like brothers, and we soon found out that there were some staggering similarities between us. Like me, J. had already been traveling for 3 weeks, and he also wanted to continue further south-east before ending his journey at the end of the month. He had also studied computer science, like me, and was also currently making a living as a software developer, although he was doing front-end web development.

We spent the evening talking about all sorts of stuff in a vivid mix of French, English and German, and it was so nice to forget that we were from different countries and just be people. J. had learned French solely through an app, which is quite impressive given how fluently he was able to communicate. Although I had learned French for 3 years in school, I had forgotten most of it, so I had some trouble following the French parts. It steadily improved though, as I remembered more and more of the vocabulary.

Then it was time to go to bed. I wasn't feeling febrile anymore and my sore throat had mostly stopped hurting, but now in the evenings I had a strong urge to cough. Since I didn't want to wake up my roommates I tried to suppress it, but a super uncomfortable itch made it hard for me to breathe without coughing. It was so bad that it shook my whole body sporadically. In order not to wake the others I spent some time on the toilet, until eventually I was able to fall asleep in my bed without any more coughs. I'm not sure if this is a regular cold or what it is, but I will ask for some medicine at a local pharmacy today.

Saturday, 14 May 2022

The KDE Qt5 Patch Collection has been rebased on top of Qt 5.15.4

Commit: https://invent.kde.org/qt/qt/qt5/-/commit/5c85338da3c272587c0ec804c7565db57729fd48

 

Commercial release announcement: https://www.qt.io/blog/commercial-lts-qt-5.15.4-released 


OpenSource release announcement: https://lists.qt-project.org/pipermail/development/2022-May/042437.html

 

I want to personally extend my gratitude to the Commercial users of Qt for beta testing Qt 5.15.4 for the rest of us.

 

The Commercial Qt 5.15.4 release introduced some bugs that have since been fixed. Thanks to that, our Patch Collection has been able to incorporate the reverts for the two bugs that affected Android and Windows, so Free Software users will never be affected by them!



Thursday, 12 May 2022

DevOps inspiration from Toyota Production System and Lean considered harmful

Note: This text was originally the synopsis for a much longer article which I intended to write as the follow-up to a lightning talk I gave on the subject at my workplace. Acknowledging that I probably won't get time to write the long version, I think this synopsis can stand pretty well on its own as a statement of intent.

DevOps and DevOps-related practices have become a huge thing in the software industry. Elements of this, such as Continuous Integration and Continuous Deployment and the focus on monitoring production systems and metrics, have resulted in large improvements in the handling of large-scale deployments. In particular, the act of deploying to production, in traditional systems often an error-prone process riddled with cataclysmic pitfalls and requiring huge amounts of overtime, is reduced to the trivial pushing of a button which can easily be done during normal office hours.

While the success of DevOps largely rests on technological improvements (containerization, orchestration systems, ease of scaling with cloud technologies) as well as process improvements originating in the Agile methodologies as they have developed since 2001 (with concepts such as pair programming, Test Driven Development and a general focus on automation), much of the literature on DevOps contains a strongly “ideological”, at times evangelizing, promotion of the underlying philosophies of Lean production and management systems. One very conspicuous feature of this ideology is the canonization of Japanese management methods in general, and the Toyota Production System (TPS) in particular, as an epitome of thoughtful and benign innovation, empowering workers by incorporating their suggestions, achieving world-class production quality while simultaneously showing the maximum respect for each and every one of the humans involved.

This method (the TPS) was, the story goes, introduced in Western manufacturing and later in management, where its basic principles – improvement circles (kaizen), value stream mapping, Kanban, etc. – have streamlined basic business processes, improved productivity and reduced costs. Now, the narrative continues, DevOps will apply these same Lean lessons in the software industry, and we can expect similarly vast improvements in productivity.

It is problematic, however, to try to “learn from Toyota” and from Lean Manufacturing without examining in detail how these work in practice, not least how they affect the people actually working in those systems. The authors behind some of the more popular DevOps introductions – The DevOps Handbook and the novels “The Phoenix Project” and “The Unicorn Project” – do not seem to have actually studied the implications of working under the TPS for Toyota’s Japanese employees in great detail, if at all, and seem to have all of their knowledge of the system from American management literature such as James Womack et al’s “The Machine that Changed the World”, basing their own Lean philosophies entirely on Toyota’s own public relations-oriented descriptions of their system.

This is problematic, since it overlooks the distinction between Toyota’s corporate representation of the intention of their production system – and the actual reality felt by automobile workers on the shop floors. Darius Mehri, who worked at Toyota as a computer simulation engineer for three years, has pointed out that the Western management movements inspired by Toyota have failed to understand a very fundamental distinction in Japanese culture and communication: The distinction between tatemae (that which you are supposed to feel or do) and honne (that which you really feel and do). Mehri posits that all Western proponents of The Toyota Way fail to realize that what they are describing is really the tatemae, what management will tell you and what workers will tell you in a formal context when their words might come back and harm them – while the honne is much grittier, much darker and much more cynical.

In effect, proponents of Lean manufacturing and management styles have imported a kind of double-speak in a Japanese variant, similar to the all too well-known difference between corporate communications and what workers will confide in private. By doing so, they have inherited the fundamental lie that the priorities of the TPS are respect for each individual employee, partnership between management and workers, and involvement of each and every employee in the continuous improvement of the workplace; while its true priorities are a maximization of profit through the imposition of frenetic work speeds and very long working hours, discarding workers afflicted by the inevitable accidents and work-related diseases – and an “innovation” mainly driven by imitation of other manufacturers.

The truth about the very Toyota Production System that inspired the Lean movement is, leaving the tatemae aside and looking at the honne, that these factories are driven unusually ruthlessly, with little or no regard for the human costs for the workers on the shop floor. Meetings, security briefings and announcements are routinely made after or before actual working hours, when workers are on their own time. Assembly lines are run at extreme speed in order to increase productivity, resulting in serious accidents, chronic work-related diseases as well as production defects. Even so, production targets are set unrealistically high, and the shop crews are not allowed to go home before they are met, often resulting in several hours of daily overtime. The “improvement circles” do exist and workers are indeed asked to contribute, but the end goal is always to increase production and increase line speed, never to create more humane working conditions on the shop floor. Such improvements are (if at all) introduced more grudgingly, e.g. as a consequence of labor shortages and worker dropout.

Lean, by lauding the TPS and uncritically buying its tatemae, is introducing a similar honne of its own: It is, in reality, not revolutionizing productivity, and for all its fair words does not promote the respect of each worker as an individual. On the contrary, the relentless focus on constant “improvements” and constant demand that each employee rationalizes their work as much as possible has caused it to become known as “management by stress”. It may indeed focus on metrics and may indeed choose metrics to demonstrate its own success – while achieving results that range from average/no change to absolutely dismal.

Proponents of DevOps should stop presenting Toyota as any kind of ideal way of working – a nightmarish grind, with workers forced to do ten- or eleven-hour shifts, ignoring accidents, working beside old and worn-out machinery in outrageously dangerous conditions, is quite literally not where we want to go. And the “ideal Toyota” with its “improvement kata” and “mutual respect” never existed except as the tatemae to the cynical honne of shop-floor reality. By importing the tatemae as though it were Truth itself, the Lean movement has imported its double-speak – Lean or “management by stress” transitions can be very unpleasant indeed for employees, and while everything is shrouded in talk of partnership and mutual respect, the underlying motivation will often be money-saving through layoffs – the honne to the Lean management bullshit’s tatemae.

That is to say: Perpetuating the lie about Toyota as a humane, innovative and respectful workplace is positively harmful to the employees and processes afflicted by the proposed improvements, as the double-speak involved will inevitably rub off. The Toyota tatemae was not, after all, designed to be practised literally. Accepting it at face value will only set us up for further double-speak in our own practice.

While the software industry can and should continue to evolve based on the philosophy enshrined in the Agile Manifesto and the improved work processes introduced by DevOps, we should eschew the mendacious narrative of Happy Toyota and reject the Lean philosophies that it founded.

REFERENCES

Heather Barney and Sheila Nataraj Kirby: Toyota Production System/Lean Manufacturing in “Organizational Improvement and Accountability: Lessons for Education from Other Sectors”, RAND Corporation 2004 (online: https://www.jstor.org/stable/10.7249/mg136wfhf.9).

Ian Hampson: Lean Production and the Toyota Production System – Or, the Case of the Forgotten Production Concepts, Economic and Industrial Democracy, 1999 (SAGE, London, Thousand Oaks and New Delhi), Vol. 20: 369-391 (online: https://library.fes.de/libalt/journals/swetsfulltext/6224179.pdf).

Jeffry S. Babb, Jacob Nørbjerg, David J. Yates, Leslie J. Waguespack: The Empire Strikes Back: The End of Agile as we Know it?, paper given at The 40th Information Systems Research Seminar in Scandinavia: IRIS 2017 – Halden, Norway, August 6-9, 2017 (online: https://research-api.cbs.dk/ws/portalfiles/portal/58521158/IRIS_2017_critical_170501_submission.pdf)

Darius Mehri: The Darker Side of Lean: An Insider’s Perspective on the Realities of the Toyota Production System, Academy of Management Perspectives 20, 2, 2006 (online: https://www.jstor.org/stable/4166230)

Stuart D. Green: The Dark Side of Lean Construction: Exploitation and Ideology, proceedings IGLC-7, 1999, 21-32 (online: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.323&rep=rep1&type=pdf)

Satoshi Kamata: Japan in the Passing Lane: An Insider’s Account of Life in a Japanese Auto Factory, Pantheon Books, New York (1982)

Gregory A. Howell and Glenn Ballard: Bringing Light to the Dark Side of Lean Construction: A Response to Stuart Green, proceedings IGLC-7, 1999, 33-38 (online: https://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=203907F7926472DB31BBE75D290A826B?doi=10.1.1.418.4301&rep=rep1&type=pdf)

Will Johnson: Lean Production – inside the real war on public education, Jacobin Magazine, December 2012 (online: https://www.jacobinmag.com/2012/09/lean-production-whats-really-hurting-public-education/)

Mike Parker: Management-By-Stress, Catalyst Magazine 1, 2, 2017 (online: https://catalyst-journal.com/2017/11/management-by-stress-parker)

Gene Kim, Jez Humble, Patrick Debois and John Willis: The DevOps Handbook, IT Revolution Press, Portland (OR) 2016.

Gene Kim, Kevin Behr and George Spafford: The Phoenix Project, IT Revolution Press, Portland (OR) 2015.

Gene Kim: The Unicorn Project, IT Revolution Press, Portland (OR) 2019.

Phil Ledbetter: Why Do So Many Lean Efforts Fail?, https://www.industryweek.com/operations/continuous-improvement/article/21144299/why-do-so-many-lean-efforts-fail, 20/9-2020.

Enid Mumford: “Sociotechnical Design: An Unfulfilled Promise or a Future Opportunity”, https://link.springer.com/content/pdf/10.1007/978-0-387-35505-4_3.pdf

Sunday, 08 May 2022

On forms of apparent progress

Over the years, I have had a few things to say about technological change, churn, and the appearance of progress, a few of them touching on the evolution and development of the Python programming language. Some of my articles have probably seemed a bit outspoken, perhaps even unfair. It was somewhat reassuring, then, to encounter the reflections of a longstanding author of Python books and his use of rather stronger language than I think I ever used. It was also particularly reassuring because I apparently complain about things in far too general a way, not giving specific examples of phenomena for anything actionable to be done about them. So let us see whether we can emerge from the other end of this article in better shape than we are at this point in it.

Now, the longstanding author in question is none other than Mark Lutz whose books “Programming Python” and “Learning Python” must surely have been bestsellers for their publisher over the years. As someone who has, for many years, been teaching Python to a broad audience of newcomers to the language and to programming in general, his views overlap with mine about how Python has become increasingly incoherent and overly complicated, as its creators or stewards pursue some kind of agenda of supposed improvement without properly taking into account the needs of the broadest reaches of its user community. Instead, as with numerous Free Software projects, an inscrutable “vision” is used to impose change based on aesthetics and contemporary fashions, unrooted in functional need, by self-appointed authorities who often lack an awareness or understanding of historical precedent or genuine user need.

Such assertions are perhaps less kind to Python’s own developers than they should be. Those choosing to shoehorn new features into Python arguably have more sense of precedent than, say, the average desktop environment developer imitating Apple in what could uncharitably be described as an ongoing veiled audition for a job in Cupertino. Nevertheless, I feel that language developers would be rather more conservative if they only considered what teaching their language to newcomers entails or what effect their changes have on the people who have written code in their language. Am I being unfair? Let us read what Mr Lutz has to say on the matter:

The real problem with Python, of course, is that its evolution is driven by narcissism, not user feedback. That inevitably makes your programs beholden to ever-shifting whims and ever-hungry egos. Such dependencies might have been laughable in the past. In the age of Facebook, unfortunately, this paradigm permeates Python, open source, and the computer field. In fact, it extends well beyond all three; narcissism is a sign of our times.

You won’t find a shortage of similar sentiments on his running commentary of Python releases. Let us, then, take a look at some experiences and try to review such assertions. Maybe I am not being so unreasonable (or impractical) in my criticism after all!

Out in the Field

In a recent job, of which more might be written another time, Python was introduced to people more familiar with languages such as R (which comes across as a terrible language, but again, another time perhaps). It didn’t help that as part of that introduction, they were exposed to things like this:

    def method(self, arg: Dict[Something, SomethingElse]):
        return arg.items()

When newcomers are already having to digest new syntax, new concepts (classes and objects!), and why there is a “self” parameter, unnecessary ornamentation such as the type annotations included in the above only increases the cognitive burden. It also doesn’t help to then say, “Oh, the type declarations are optional and Python doesn’t really check them, anyway!” What is the student supposed to do with that information? Many years ago now, Java was mocked for confronting its newcomers with boilerplate like this classic:

    public static void main(String[] args)

But exposing things that the student is then directed to ignore is simply doing precisely the same thing for which Java was criticised. Of course, in Python, the above method could simply have been written as follows:

    def method(self, arg):
        return arg.items()

Indeed, for the above method to be valid in the broadest sense, the only constraint on the nature of the “arg” parameter is that it offer an attribute called “items” that can be called with no arguments. By prescriptively imposing a limitation on “arg” as was done above, insisting that it be a dictionary, the method becomes less general and less usable. Moreover, the nature of Python itself is neglected or mischaracterised: the student might believe that only a certain type would be acceptable, just as one might suggest that the author of that code also fails to see that a range of different, conformant kinds of objects could be used with the method. Such practices discourage or conceal polymorphism and generic functionality at a point when the beginner’s mind should be opened to them.
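
To make that point concrete, here is a minimal sketch (the Inventory class and its names are mine, purely for illustration): any object offering a conformant “items” method works with the plain version of the method, and since CPython does not enforce annotations at runtime, it would even be accepted by the annotated one.

    class Inventory:
        """Not a dict, but it offers the same items() protocol."""
        def __init__(self, entries):
            self._entries = list(entries)

        def items(self):
            # Like dict.items(): yields (key, value) pairs.
            return iter(self._entries)

    class Handler:
        def method(self, arg):
            return arg.items()

    handler = Handler()
    print(list(handler.method({"a": 1})))                # a plain dict works
    print(list(handler.method(Inventory([("b", 2)]))))   # so does any conformant object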

As Mr Lutz puts such things in the context of a different feature introduced in Python 3.5:

To put that another way: unless you’re willing to try explaining a new feature to people learning the language, you just shouldn’t do it.

The tragedy is that Python in its essential form is a fairly intuitive and readable language. But as he also says in the specific context of type annotations:

Thrashing (and rudeness) aside, the larger problem with this proposal’s extensions is that many programmers will code them—either in imitation of other languages they already know, or in a misguided effort to prove that they are more clever than others. It happens. Over time, type declarations will start appearing commonly in examples on the web, and in Python’s own standard library. This has the effect of making a supposedly optional feature a mandatory topic for every Python programmer.

And I can certainly say from observation that in various professional cultures, including academia where my own recent observations were made, there is a persistent phenomenon where people demonstrate “best practice” to show that they as a software development practitioner (or, indeed, a practitioner of anything else related to the career in question) are aware of the latest developments, are able to communicate them to their less well-informed colleagues, and are presumably the ones who should be foremost in anyone’s consideration for any future hiring round or promotion. Unfortunately, this enthusiasm is not always tempered by considered reflection, either on the nature of the supposed innovation itself, or on the consequences its proliferation will have.

Perversely, such enthusiasm, provoked by the continual hustle for funding, positions, publications and reputation, risks causing a trail of broken programs, and yet at the same time, much is made of the need for software development to be done “properly” in academia, that people do research that is reproducible and whose computational elements are repeatable. It doesn’t help that those ambitions must also be squared with other apparent needs such as offering tools and services to others. And the need to offer such things in a robust and secure fashion sometimes has to coexist with the need to offer them in a convenient form, where appropriate. Taking all of these things into consideration is quite the headache.

A Positive Legacy

Amusingly, some have come to realise that Python’s best hope for reproducible research is precisely the thing that Python’s core developers have abandoned – Python 2.7 – and precisely because they have abandoned it. In an article about reproducing old, published results, albeit of a rather less than scientific nature, Nicholas Rougier sought to bring an old program back to life, aiming to find a way of obtaining or recovering the program’s sources, constructing an executable form of the program, and deploying and running that program on a suitable system. To run his old program, written for the Apple IIe microcomputer in Applesoft BASIC, required the use of emulators and, for complete authenticity, modern hardware expansions to transfer the software to floppy disks to run on an original Apple IIe machine.

And yet, the ability to revive and deploy a program developed 32 years earlier was possible thanks to the Apple machine’s status as a mature, well-understood platform with an enthusiastic community developing new projects and products. These initiatives were only able to offer such extensive support for a range of different “retrocomputing” activities because the platform has for a long time effectively been “frozen”. Contrasting such a static target with rapidly evolving modern programming languages and environments, Rougier concluded that “an advanced programming language that is guaranteed not to evolve anymore” would actually be a benefit for reproducible science, that few people use many of the new features of Python 3, and that Python 2.7 could equally be such a “highly fertile ground for development” that the proprietary Applesoft BASIC had proven to be for a whole community of developers and users.

Naturally, no language designer ever wants to be told that their work is finished. Lutz asserts that “a bloated system that is in a perpetual state of change will eventually be of more interest to its changers than its prospective users”, which is provocative but also rings true. CPython (the implementation of Python in the C programming language) has always had various technical deficiencies – the lack of proper multithreading, for instance – but its developers who also happen to be the language designers seem to prefer tweaking the language instead. Other languages have gained in popularity at Python’s expense by seeking to address such deficiencies and to meet the frustrated expectations of Python developers. Or as Lutz notes:

While Python developers were busy playing in their sandbox, web browsers mandated JavaScript, Android mandated Java, and iOS became so proprietary and closed that it holds almost no interest to generalist developers.

In parts of academia familiar with Python, languages like Rust and Julia are now name-dropped, although I doubt that many of those doing the name-dropping realise what they are in for if they decide to write everything in Rust. Meanwhile, Python 2 code is still used, against a backdrop of insistent but often ignored requests from systems administrators for people to migrate code to Python 3 so that newer operating system distributions can be deployed. In other sectors, such migration is meant to be factored into the cost of doing business, but in places like academia where software maintenance generally doesn’t get funding, no amount of shaming or passive-aggressive coercion is magically going to get many programs updated at all.

Of course, we could advocate that everybody simply run their old software in virtual machines or containers, just as was possible with that Applesoft BASIC program from over thirty years ago. Indeed, containerisation is the hot thing in places like academia just as it undoubtedly is elsewhere. But unlike the Apple II community who will hopefully stick with what they know, I have my doubts that all those technological lubricants marketed under the buzzword “containers!” will still be delivering the desired performance decades from now. As people jump from yesterday’s hot solution to today’s and on to tomorrow’s (Docker, with or without root, to Singularity/Apptainer, and on to whatever else we have somehow deserved), just the confusion around the tooling will be enough to make the whole exercise something of an ordeal.

A Detour to the Past

Over the last couple of years, I have been increasingly interested in looking back over the course of the last few decades, back to the time when I was first introduced to microcomputers, and even back beyond that to the age of mainframes when IBM reigned supreme and the likes of ICL sought to defend their niche and to remain competitive, or even relevant, as the industry shifted beneath them. Obviously, I was not in a position to fully digest the state of the industry as a schoolchild fascinated with the idea that a computer could seemingly take over a television set and show text and graphics on the screen, and I was certainly not “taking” all the necessary computing publications to build up a sophisticated overview, either.

But these days, many publications from decades past – magazines, newspapers, academic and corporate journals – are available from sites like the Internet Archive, and it becomes possible to sample the sentiments and mood of the times, frustrations about the state of then-current, affordable technology, expectations of products to come, and so on. Those of us who grew up in the microcomputing era saw an obvious progression in computing technologies: faster processors, more memory, better graphics, more and faster storage, more sophisticated user interfaces, increased reliability, better development tools, and so on. Technologies such as Unix were “the future”, labelled as impending to the point of often being ridiculed as too expensive, too demanding or too complicated, perhaps never to see the limelight after all. People were just impatient: we got there in the end.

While all of that was going on, other trends were afoot at the lowest levels of computing. Computer instruction set architectures had become more complicated as the capabilities they offered had expanded. Although such complexity, broadly categorised using labels such as CISC, had been seen as necessary or at least desirable to be able to offer system implementers a set of convenient tools to more readily accomplish their work, the burden of delivering such complexity risked making products unreliable, costly and late. For example, the National Semiconductor 32016 processor, seeking to muscle in on the territory of Digital Equipment Corporation and its VAX line of computers, suffered delays in getting to market and performance deficiencies that impaired its competitiveness.

Although capable and in some respects elegant, it turned out that these kinds of processing architectures were not necessarily delivering what was actually important, either in terms of raw performance for end-users or in terms of convenience for developers. Realisations were had that some of the complexity was superfluous, that programmers did not use certain instructions often or at all, and that a flawed understanding of programmers’ needs had led to the retention of functionality that did not need to be inscribed in silicon with all the associated costs and risks that this would entail. Instead, simpler, more orthogonal architectures could be delivered that offered instructions that programmers or, crucially, their compilers would actually use. The idea of RISC was thereby born.

As the concept of RISC took off, pursued by the likes of IBM, UCB and Sun, Stanford University and MIPS, Acorn (and subsequently ARM), HP, and even Digital, Intel and Motorola, amongst others, the concept of the workstation became more fully realised. It may have been claimed by some commentator or other that “the personal computer killed the workstation” or words to that effect, but in fact, the personal computer effectively became the workstation during the course of the 1990s and early years of the twenty-first century, albeit somewhat delayed by Microsoft’s sluggish delivery of appropriately sophisticated operating systems throughout its largely captive audience.

For a few people in the 1980s, the workstation vision was the dream: the realisation of their expectations for what a computer should do. Although expectations should always be updated to take new circumstances and developments into account, it is increasingly difficult to see the same rate of progress in this century’s decades that we saw in the final decades of the last century, at least in terms of general usability, stability and the emergence of new and useful computational capabilities. Some might well argue that graphics and video processing or networked computing have progressed immeasurably, these certainly having delivered benefits for visualisation, gaming, communications and the provision of online infrastructure, but in other regards, we seem stuck with something very similar to what we had twenty years ago but with increasingly disillusioned developers and disempowered users.

What we might take away from this historical diversion is that sometimes a focus on the essentials, on simplicity, and on the features that genuinely matter makes more of a difference than just pressing ahead with increasingly esoteric and baroque functionality that benefits few and yet brings its own set of risks and costs. And we should recognise that progress is largely acknowledged only when it delivers perceptible benefits. In terms of delivering a computer language and environment, this may necessarily entail emphasising the stability and simplicity of the language, focusing instead on remedying the deficiencies of the underlying language technology to give users the kind of progress they might actually welcome.

A Dark Currency

Mark Lutz had intended to stop commenting on newer versions of Python, reflecting on the forces at work that make Python what it now is:

In the end, the convolution of Python was not a technical story. It was a sociological story, and a human story. If you create a work with qualities positive enough to make it popular, but also encourage it to be changed for a reward paid entirely in the dark currency of ego points, time will inevitably erode the very qualities which made the work popular originally. There’s no known name for this law, but it happens nonetheless, and not just in Python. In fact, it’s the very definition of open source, whose paradigm of reckless change now permeates the computing world.

I also don’t know of a name for such a law of human behaviour, and yet I have surely mentioned such behavioural phenomena previously myself: the need to hustle, demonstrate expertise, audition for some potential job offer, demonstrate generosity through volunteering. In some respects, the cultivation of “open source” as a pragmatic way of writing software collaboratively, marginalising Free Software principles and encouraging some kind of individualistic gift culture coupled to permissive licensing, is responsible for certain traits of what Python has become. But although a work that is intrinsically Free Software in nature may facilitate chaotic, haphazard, antisocial, selfish, and many other negative characteristics in the evolution of that work, it is the social and economic environment around the work that actually promotes those characteristics.

When reflecting on the past, particularly during periods when capabilities were being built up, we can start to appreciate the values that might have been more appreciated at that time than they are now. Python originated at a time when computers in widespread use were becoming capable enough to offer such a higher-level language, one that could offer increased convenience over various systems programming languages whilst building on top of the foundations established by those languages. With considerable effort having been invested in such foundations, a mindset seemed to persist, at least in places, that such foundations might be enduring and be good for a long time.

An interesting example of such attitudes arose at a lower level with the development of the Alpha instruction set architecture. Digital, having responded ineffectively to its competitive threats, embraced the RISC philosophy and eventually delivered a processor range that could be used to support its existing product line-up, emphasising performance and longevity through a “15- to 25-year design horizon” that attempted to foresee the requirements of future systems. Sadly, Digital made some poor strategic decisions, some arguably due to Microsoft’s increasing influence over the company’s strategy, and after a parade of acquisitions, Alpha fell under the control of HP who sacrificed it, along with its own RISC architecture, to commit to Intel’s dead-end Itanium architecture. I suppose this illustrates that the chaos of “open source” is not the only hazard threatening stability and design for longevity.

Such long or distant horizons demand that newer developments remain respectful to the endeavours that have made them possible. Such existing and ongoing endeavours may have their flaws, but recognising and improving those flaws is more constructive and arguably more productive than tearing everything down and demanding that everything be redone to accommodate an apparently new way of thinking. Sadly, we see a lot of the latter these days, but it goes beyond a lack of respect for precedent and achievement, reflecting broader tendencies in our increasingly stressed societies. One such tendency is that of destructive competition, the elimination of competitors, and the pursuit of monopoly. We might be used to seeing such things in the corporate sphere – the likes of Microsoft wanting to be the only ones who provide the software for your computer, no matter where you buy it – but people have a habit of imitating what they see, especially when the economic model for our societies increasingly promotes the hustle for work and the need to carve out a lucrative niche.

So, we now see pervasive attitudes such as the pursuit of the zero-sum game. Where the deficiencies of a technology lead its users to pursue alternatives, defensiveness in the form of utterances such as “no need to invent another language” arises. Never mind that the custodians of the deficient technology – in this case, Python, of course – happily and regularly offer promotional consideration to a company who openly tout their own language for mobile development. Somehow, the primacy of the Python language is a matter for its users to bear, whereas another rule applies amongst its custodians. That is another familiar characteristic of human behaviour, particularly where power and influence accumulates.

And so, we now see hostility towards anything being perceived as competition, even if it is merely an independent endeavour undertaken by someone wishing to satisfy their own needs. We see intolerance for other solutions, but we also see a number of other toxic behaviours on display: alpha-dogging, personality worship and the cultivation of celebrity. We see chest-puffing displays of butchness about Important Matters like “security”. And, of course, the attitude to what went before is the kind of approach that involves boiling the oceans so that it may be populated by precisely the right kind of fish. None of this builds on or complements what is already there, nor does it deliver a better experience for the end-user. No wonder people say that they are jealous of colleagues who are retiring.

All these things make it unappealing to share software or even ideas with others. Fortunately, if one does not care about making a splash, one can just get on with things that are personally interesting and ignore all the “negativity from ignorant, opinionated blowhards”. Although in today’s hustle culture, this means also foregoing the necessary attention that might prompt anyone to discover your efforts and pay you to do such work. On the actual topic that has furnished us with so many links to toxic behaviour, and on the matter of the venue where such behaviour is routine, I doubt that I would want my own language-related efforts announced in such a venue.

Then again, I seem to recall that I stopped participating in that particular venue after one discussion had a participant distorting public health observations by the likes of Hans Rosling to despicably indulge in poverty denial. Once again, broader social, economic and political influences weigh heavily on our industry and communities, with people exporting their own parochial or ignorant views globally, and in the process corrupting and undermining other people’s societies, oblivious to the misery it has already caused in their own. Against this backdrop, simple narcissism is perhaps something of a lesser concern.

At the End of the Tunnel

I suppose I promised some actionable observations at the start of the article, so what might they be?

Respect Users and Investments

First of all, software developers should be respectful towards the users of their software. Such users lend validation to that software, encourage others to use it, and they potentially make it possible for the developers to work on it for a living. Their use involves an investment that, if written off by the developers, is costly for everyone concerned.

And no, the users’ demands for that investment to be protected cannot be disregarded as “entitlement”, even if they paid nothing to acquire the software, at least if the developers are happy to enjoy all the other benefits of the software’s proliferation. As is often said, power and influence bring responsibility. Just as democratically elected politicians have a responsibility towards everyone they represent, regardless of whether those people voted for them or not, software developers have a duty of care towards all of their users, even if it is merely to step out of the way and to let the users take the software in its own direction without seeking to frustrate them as we saw when Python 2 was cast aside.

Respond to User Needs Constructively

Developers should also be responsive to genuine user needs. If you believe all the folklore about the “open source” way, it should have been precisely people’s own genuine needs that persuaded them to initiate their own projects in the first place. It is entirely possible that a project may start with one kind of emphasis and demand one kind of skills only to evolve towards another emphasis or to require other skills. With Python, much of the groundwork was laid in the 1990s, building an interpreter and formulating a capable language. But beyond that initial groundwork, the more pressing challenges lay outside the language design domain and went beyond the implementation of a simple interpreter.

Improved performance and concurrency, both increasingly expected by users, required the application of other skills that might not have been present in the project. And yet, the elaboration of the language continued, with the developers susceptible to persuasion by outsiders engaging in “alpha-dogging” or even insiders with an inferiority complex, being made to feel that the language was not complete or even adequate since it lacked features from the pet languages of those outsiders or of the popular language of the day. Development communities should welcome initiatives to improve their projects in ways that actually benefit the users, and they should resist the urge to muscle in on such initiatives by seeking to demonstrate that they have the necessary solutions when their track record would indicate otherwise. (Or worse still, by reframing user needs in terms of their own narrow agenda as if to say, “Here is what you are really asking for.” Another familiar trait of the “visionary” desktop developer.)

Respect Other Solutions

Developers and commentators more generally should accept and respect the existence of other technologies and solutions. Just because they have their own favourite solution does not de-legitimise something they have just been made aware of. Maybe it is simply not meant for them. After all, not everything that happens in this reality is part of a performance exclusively for any one person’s benefit, despite what some people appear to think. And the existence of other projects doing much the same thing is not necessarily “wasted effort”: another concept introduced from some cult of economics or other.

It is entirely possible to provide similar functionality in different ways, and the underlying implementations may lend those different projects different characteristics – portability, adaptability, and so on – even if the user sees largely the same result on their screen. Maybe we do want to encourage different efforts even for fundamental technologies or infrastructure, not because anyone likes to “waste effort”, but because it gives the systems we build a level of redundancy and resilience. And maybe some people just work better with certain other people. We should let them, as opposed to forcing them to fit in with tiresome, exploitative and time-wasting development cultures, to suffer rudeness and general abuse, simply to go along with an exercise that props up some form of corporate programme of minimal investment in the chosen solution of industry and various pundits.

Develop for the Long Term and for Stability

Developers should make things that are durable so that they may be usable for many years to come. Or they should at least expect that people may want to use them years or even decades from now. Just because something is old does not mean it is bad. Much of what we use today is based on technology that is old, with much of that technology effectively coming of age decades ago. We should be able to enjoy the increased performance of our computers, not have it consumed by inefficient software that drives the hardware and other software into obsolescence. Technological fads come and go (and come back again): people in the 1990s probably thought that virtual reality would be pervasive by now, but experience should permit us to reflect and to recognise that some things were (and maybe always will be) bad ideas and that we shouldn’t throw everything overboard to pander to them, only to regret doing so later.

We live in a world where rapid and uncomfortable change has been normalised, but where consumerism has been promoted as the remedy. Perhaps some old way of doing something mundane doesn’t work any more – buying something, interacting with public agencies, fulfilling obligations, even casting votes in some kinds of elections – perhaps because someone has decided that money can be saved (and, of course, soon wasted elsewhere) if it can be done “digitally” from now on. To keep up, you just need a smartphone, or a newer smartphone, with an “app”, or the new “app”, and a subscription to a service, and another one. And so on. All of that “works” for people as long as they have the necessary interest, skills, time, and money to spend.

But as the last few years have shown, it doesn’t take much to disrupt these unsatisfactory and fragile arrangements. Nobody advocating fancy “digital” solutions evidently considered that people would not already have everything they need to access their amazing creations. And when, as they say, neither love nor money can get you the gadgets you need, it doesn’t even matter how well-off you are: suddenly you get a downgrade in experience to a level that, as a happy consumer, you probably didn’t even know still existed, even if it is still the reality for whole sections of our societies. We have all seen how narrow the margins are between everything apparently being “just fine” and there being an all-consuming crisis, both on a global level and, for many, on a personal level, too.

Recognise Responsibilities to Others

Change can be a positive thing if it carries everyone along and delivers actual progress. Meanwhile, there are those who embrace disruption as a form of change, claiming it to be a form of progress, too, but that form of change is destructive, harmful and exclusionary. It should not be a surprise that prominent advocates of a certain political movement advocate such disruptive change: for them, it doesn’t matter how many people suffer by the ruinous change they have inflicted on everyone as long as they are the ones to benefit; everyone else can wait fifty years or so to see some kind of consolation for the things taken from them, apparently.

As we deliver technology to others, we should not be the ones deepening any misery already experienced by imposing needless and costly change. We should be letting people catch up with the state of technology and allowing them to be comfortable with it. We should invest in long-term solutions that address people’s needs, and we should refuse to be shamed into playing the games of opportunists and profiteers who ridicule anything old or familiar in favour of what they happen to be promoting today. We should demand that people’s investments in hardware and software be protected, that they are not effectively coerced into constantly buying new things and seeing their living standards diminished in other ways, with such consumption burdening our planet’s ecosystem and resources.

Just as we all experience that others have power over us, so we might recognise the power we have over other people. And just as we might expect others to consider our interests, so we might consider the interests of those who have to put up with our decisions. Maybe, in the end, all I am doing is asking for people to show some consideration for the experiences of other people, that their lives not be made any harder than they might already be. Is that really too much to ask? Is that so hard to understand?

Friday, 29 April 2022

Poppler finally has support for embedding fonts in PDF files!

Why would you want to embed fonts in PDF files, you are probably asking yourself?

Short answer: It fixes issues when adding text to the PDF files.

Long answer:

Poppler has had the feature of being able to fill in forms, create annotations and more recently add Digital Signatures to existing PDF files.

This works relatively well if you limit yourself to entering 'basic' ASCII characters, but once you go to more 'complex' characters, things don't really work. From the outside it seems like it should be relatively simple to fix, but things related to PDF are never as simple as they may seem.

In PDF each bit of text is associated with a Font object. That Font generally only supports one kind of text encoding and at most 'only' 65535 characters (65535 may seem like a lot, but once you start taking into account non-Latin-based languages, you quickly 'run out' of characters).

What Poppler used to do in the past was just save the text in the PDF file and say "This text is written in Helvetica font", without even really caring to specify what 'Helvetica font' meant, and then let the PDF viewer (remember that when we save the PDF file, it will not only be rendered by Poppler again, but potentially by Adobe Reader, Chrome, Firefox, etc.) try to figure out what to do with that information, which as said usually didn't go very well for the more 'complex' characters.

What we do now is, for each character of new text that we add to the file, make sure to embed a font for it. So if you're writing something like 'holaħŋ↓' we may end up adding a few fonts to the PDF file, and then instead of saying 'This is the text and it's in Helvetica, good luck', we will say something like 'This text is characters 4, 67, 83 and 98 of embedded Font X, characters 4 and 99 of embedded Font X2 and character 16574 of embedded Font X3'. This way, when the file is opened by a PDF viewer, it is 'very easy' for it to do the right thing and show what we wanted.
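To make that mapping idea a bit more concrete, here is a minimal sketch in Python (not Poppler's actual C++ code) of how new text can be split into runs that are each covered by one embedded font. The font names and character sets are made up for illustration, and the actual font embedding step is left out.

def split_into_font_runs(text, embedded_fonts):
    # embedded_fonts maps a font name to the set of characters it covers.
    # Returns a list of (font_name, [characters]) runs; a character that no
    # embedded font covers yet would trigger embedding an additional font.
    runs = []
    for ch in text:
        font = next((name for name, chars in embedded_fonts.items()
                     if ch in chars), None)
        if font is None:
            font = "NewEmbeddedFont"  # placeholder for "embed another font here"
        if runs and runs[-1][0] == font:
            runs[-1][1].append(ch)
        else:
            runs.append((font, [ch]))
    return runs

# Example: the Latin characters are covered by Font X, '↓' by Font X2, '漢' by Font X3.
fonts = {"Font X": set("hola"), "Font X2": {"↓"}, "Font X3": {"漢"}}
print(split_into_font_runs("hola↓漢", fonts))
# [('Font X', ['h', 'o', 'l', 'a']), ('Font X2', ['↓']), ('Font X3', ['漢'])]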

Enough of technical talk! Now some screenshots to show how this has been fixed for Text Annotations, Forms and Signatures :)

Writing "hello↓漢you" to a form

Before: [screenshot]

Now: [screenshot]

Signing a PDF file with my name being "Albeŋŧ As漢tals Ciđ"

Before: [screenshot]

Now: [screenshot]

Writing hola↓漢字 in a Text Annotation

Before: [screenshot]

Now: [screenshot]

Monday, 25 April 2022

Docker2Caddy - An automatic Reverse Proxy for Docker containers

So you have a number of Docker containers running web services which you would like to expose to the outside? Well, you probably will at least have considered a reverse proxy already. Doing this manually for one, two or even five containers may be feasible, but everything above that will be a PITA for sure. At the FSFE we ran into the same issue with our own distributed container infrastructure and crafted a neat solution that I would like to present to you in the next few minutes.

The result is Docker2Caddy that provides a workflow in which you can spin up new containers anytime (e.g. via a CI) and the reverse proxy will just do the rest for you magically.

The assumptions

Let’s assume you want to go with reverse proxies to make your web services accessible via ports 80 and 443. There are other possibilities, and in more complex environments there may already be integrated solutions, but for this article we’ll stay in a rather simple environment spun up with docker-compose1.

Let’s also assume you care about security and go with a rootless installation of Docker. So the daemon will run as an unprivileged user. That’s possible but much more complex than the default rootful installation2. Because of this, a few other solutions will not work; we’ll get to that later.

Finally, each container shall have at least one separate domain assigned to it, for which you obviously want to have a valid certificate, e.g. by Let’s Encrypt.

In the examples below, we have two containers running, each running a webserver listening to port 8080. The first container shall be available via first.com, the second via second.net. The latter shall also be available via www.second.net.

The problems

In the described scenario, there are a number of problems in automating the configuration of the reverse proxy so that a domain is directed to the correct container, ranging from container discovery to IPv6 routing to handling offline containers.

The reverse proxy has to be able to discover the currently running containers and ideally monitor for changes regularly so that a newly created container with a new domain is reachable within a short time without manual intervention.

Before Docker2Caddy we have used nginx-proxy combined with acme-companion (formerly known as docker-letsencrypt-nginx-proxy-companion). These are Docker containers that query all containers connected to the bridge Docker network. For this to work, the containers have to run with environment variables indicating the desired domains and local ports that shall be proxied.

In a rootless Docker setup this finally reaches its limits, although discovery still works. But even before that we did not like the fact that we had to connect containers to the bridge network upon creation and therefore lost a bit more isolation (which is dubious in Docker anyway).

Now, with rootless, IPv6 was the turning point. Even in rootful Docker setups, IPv6 – a 20+ year old, well-defined standard protocol – is a pain in the butt. But with rootless, the FSFE System Hackers team did not manage to get IPv6 working in containers to the degree that we needed. While IPv6 traffic reached the nginx-proxy, it was then treated as IPv4 traffic with the internal Docker IP address. That bites you eventually if you limit requests based on IP addresses, e.g. for signups or payments. All traffic via IPv6 will be treated as coming from the same internal IPv4 address, therefore triggering the limits regularly.

The easiest solution therefore is to use a reverse proxy running on the host system, not as a Docker container with its severe limitations. While the first intuition led us to nginx, we decided to go with Caddy. The main advantages we saw are that a virtual host in Caddy is very simple to configure and that TLS certificates are generated and maintained automatically without extra dependencies like certbot.

In this setup, containers would need to open their webserver port to the host. This “public” port has to be unique per host, but the internal port can stay the same, e.g. port 1234 could be mapped to port 8080 inside the container. In Caddy you would then configure the domain first.com to forward to localhost:1234. A more or less identical second example container could then expose the port 5678 to the host, again listen on 8080 internally, and Caddy would redirect second.net and www.second.net to localhost:5678.

But how does Caddy know about the currently running containers and the ports via which they want to receive traffic? And how can we handle containers that are unavailable, for instance because they crashed or have been deleted for good? Docker2Caddy to the rescue!

The solution

I already concluded that Caddy is a suitable reverse proxy for the outlined use case. But in order to be care-free, the configuration has to be generated automatically. For this to work, I wrote a rather simple Python application called Docker2Caddy that is kept running in the background via a systemd service and writes proper logs that are also rotated nicely.

This is how it works internally: it queries the Docker daemon (at a configurable interval) for running containers. For each container it looks for specific labels (also configurable), by default proxy.host, proxy.host_alias and proxy.port. If one or multiple containers are found – in our case two – one Caddy configuration file per container is created, based on a freely configurable Jinja2 template. If the configuration changed, e.g. because of a new host, Caddy will be reloaded and will create a TLS certificate if needed.

But what happens if a container is unavailable? In Docker2Caddy you can configure a grace period. Until it is reached, the Caddy configuration for the container in question is not removed but can forward to a local or remote error page. Only afterwards is the configuration removed and Caddy reloaded.
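To illustrate the loop described above, here is a simplified sketch in Python using the Docker SDK and Jinja2. The proxy.host, proxy.host_alias and proxy.port labels follow the defaults mentioned above; everything else (configuration directory, template, poll interval, reload command) is an assumption made for this sketch, not the actual Docker2Caddy code.

import subprocess
import time
from pathlib import Path

import docker            # Docker SDK for Python
from jinja2 import Template

CONF_DIR = Path("/etc/caddy/conf.d")  # assumed Caddy include directory
POLL_INTERVAL = 60                    # seconds; configurable in the real tool

# Minimal vhost template; the real template is freely configurable.
VHOST = Template(
    "{{ host }}{% if alias %}, {{ alias }}{% endif %} {\n"
    "    reverse_proxy localhost:{{ port }}\n"
    "}\n"
)

def render_vhosts():
    # Ask the Docker daemon for running containers and render one vhost
    # per container that carries the expected labels.
    client = docker.from_env()
    rendered = {}
    for container in client.containers.list():
        labels = container.labels
        if "proxy.host" not in labels or "proxy.port" not in labels:
            continue
        rendered[labels["proxy.host"]] = VHOST.render(
            host=labels["proxy.host"],
            alias=labels.get("proxy.host_alias"),
            port=labels["proxy.port"],
        )
    return rendered

def sync_and_reload():
    changed = False
    for host, config in render_vhosts().items():
        path = CONF_DIR / f"{host}.conf"
        if not path.exists() or path.read_text() != config:
            path.write_text(config)
            changed = True
    # Removing stale configs after the grace period is omitted in this sketch.
    if changed:
        subprocess.run(["systemctl", "reload", "caddy"], check=True)

if __name__ == "__main__":
    while True:
        sync_and_reload()
        time.sleep(POLL_INTERVAL)

The real application additionally takes care of logging, the grace period for unavailable containers and the removal of stale configuration files.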

So, what makes Docker2Caddy special? I am biased but see a number of points:

  1. Simplicity: fundamentally it’s a Python script of 188 lines of pure code.
  2. Configurability: despite its simplicity, it’s easy to configure for various needs thanks to the templates and the support for rootless Docker setups.
  3. Adaptability: it should be rather simple to make Docker2Caddy also work for Podman, or even use different reverse proxies. Feel free to extend it before I do it myself someday ;)
  4. Performance: while I did not perform before/after benchmarks, Caddy is blazingly fast and will surely perform better on the host than in a limited Docker container.

If you’re facing the same challenges in your setup, please feel free to try it out. Installation is quite simple and there’s even a minimal Ansible playbook. If you have feedback, I appreciate reading it via comments on Mastodon (see below), email, or, if you have an FSFE account, as a new issue or patch at the main repo.


  1. This is what a very minimal Docker service in the FSFE infrastructure looks like. For Docker2Caddy, only the docker-compose.yml file with its labels is relevant. ↩︎

  2. If you’re interested in setting this up via Ansible, I can recommend the ansible-docker-rootless role which we integrated in our full-blown playbook for the container servers. ↩︎

Friday, 08 April 2022

Short history of the "What is Free Software (Open Source)?" video

In February 2020, I was giving a talk titled "The core values of software freedom" at FOSDEM's largest auditorium (video recording). It was great to talk to such a large audience and have all those great discussions afterwards. Shortly afterwards, in March 2020, I gave the same talk at FOSS Backstage (video recording), especially enjoying the Q&A afterwards. Unfortunately, the pandemic then hit Europe, and it was my last in-person conference for that year. So in the following months I sorely missed having in-person discussions with people I know and with new people I could have met at conferences.

Nevertheless, I had great online discussions about the topic of the talk, which encouraged me to think about how we could condense the message of the talk further to reach more people with it -- maybe with a short video similar to our "Public Money? Public Code!" video. When one person, who had already sent me kudos for my FOSDEM talk, heard about it, he offered to make a larger donation to cover the costs for such a video.

Alexander Lehmann, who also created the FSFE's "Public Money? Public Code!" video, was available with his team to work on the implementation of the video. It was a great pleasure to work with them on the video and find a way to condense a 30-minute talk into a short video.

We published this short "The Core Values of Software Freedom" video during the FSFE's 20-year anniversary, which also marked the introduction of the FSFE's self-hosted PeerTube instance.

Afterwards, we received feedback that people like the video but would prefer an even shorter one, and they suggested shortening the existing video and making a few adjustments. After many people in the FSFE core team agreed to this, we again worked with Alexander Lehmann on the adjustments.

This week we published it: a video explaining the essential four rights to use, study, share, and improve software. Rights that help to support other fundamental freedoms like freedom of speech, press, and privacy. And all of that in less than 3 minutes, so the video can easily be shared whenever you want to quickly explain the topic to others.

Please share the video with friends, colleagues, and the public on different channels, embed it on your website with the provided code snippets, let us know how you like it by commenting, and if you see it on platforms which you are using, by rating the video there.

By doing this, you increase the chance that the video will be seen and recommended to people who have never heard of software freedom before. Help them learn what Free Software is, in less than 3 minutes!

Thank you.

Thursday, 07 April 2022

Shakedown cruise on the Baltic Sea

Just in time for a new cruising season to start, the story of our 2021 Baltic shakedown cruise is now online.

In Swedish archipelago Sailing in the Baltic

This was a 666NM trip that we did on our new-to-us Amigo 40 cruising boat in August-September 2021. Apart from engine trouble in the beginning, this was a very enjoyable little adventure on the coasts of Sweden and Bornholm.

Trip route

The trip even earned us the first prize in the cruising log contest of our sailing club:

Fartenseglerpreise

Read the story now.

Thursday, 31 March 2022

What’s in a pronoun?

Today is Transgender Day of Visibility, and I am nonbinary Transgender.

I recently told people that I now prefer they/them, or any other gender neutral pronoun, such as spivak (e). But he/she is OK, depending on context too. Since I still go by my masculine name, most people use masculine pronouns.

In German there is no standardized neuter pronoun; I go by er/sie but strongly prefer er. By contrast Finnish has no binary gendered pronouns, the only one is hän, a gender neutral pronoun. When I started learning Finnish I had difficulty translating that pronoun. The use of hän does not misgender anyone, whether they are nonbinary or not. German has ternary grammatical gender: er (masculine), sie (feminine) and es (neuter). The neuter pronoun is not commonly used to refer to a person, only to things, and to persons in some cases.

Sometimes I am a woman, sometimes a man, often both at the same time, so both pronouns are correct for me; strictly speaking, using one of those pronouns is not misgendering in my case. Still I prefer neutral pronouns, as those do not misgender anyone.

I mostly present masculine, but I do have a feminine voice, so that I can pass as a woman on the phone. I consider my voice more androgynous, or in the upper part of the male range, which overlaps with the lower female range. When I first passed as a woman that way I was unaware of being Transgender and told people my masculine name; they were confused, because they were expecting a person with a masculine name not to be a woman. At that time, I began cosplaying female characters, wearing red lipstick and nail polish.

Later, after more people I know came out as trans or nonbinary, I knew that I was nonbinary too and began exploring the use of different names, using different pronouns for each name. I was a grammar geek long before, and I liked the scene in Star Trek where Riker tried to avoid personal pronouns.

Tobias Alexandra Platen (he/she/they) or short Alex (they/them only)

Wednesday, 16 March 2022

Dutch digital identity system crisis

Nederlandse versie

The Dutch digital identity verification system DigiD has announced the phasing out of SMS as a second factor. This means citizens are required to install a smartphone app in order to use digital services from the government, municipalities, the health sector and others. These applications only work on iOS and Android phones, with reliance on third party services.

Plenty of members of our community choose not to use a device that is tied to vendor-specific services. There is a threat our community will practically be locked out of the digital infrastructure the government has set up for us to use. Official alternatives are to ask a friend with the app for help or go back to snail mail and physical meetings.

This is an urgent matter with a big impact, so if you share my concern, please make your voice heard to policymakers.

The Digital Affairs commission will meet on the 22nd of March to discuss digital government, which includes the topic of identity systems. I’ve written to members of this commission to call attention to this issue and share the views of our community.

In the summer of 2021 I received a letter from my municipality that it was time to renew my driving license. The letter mentioned two ways of getting a renewal: either physically visit city hall or use the experimental digital process that has been around since 2018. More information on this digital process can be found on the dedicated Dutch webpage.

 

The first time I heard of this experiment was a couple of years ago at my local photographer, who was one of the first photographers to take part in this trial. Certified photographers act as the main point of contact in the process by ensuring a good photograph and identifying the citizen making the request. My local photographer was excited to take part in this experiment to help ease the process for customers. I too was excited because this seemed like a well thought-out process that would reduce the number of contacts and visits to get a driving license renewal.

So now, a couple of years later, it was time for my renewal and I was about to experience how far our digital governmental services have come. I started the process by going to rijbewijsaanvragen.rdw.nl and I was immediately redirected to a DigiD prompt. DigiD is the login solution the Dutch government develops and uses. More information is available on Wikipedia and on the official website. Years ago DigiD was just using a username and password for verification, a single factor. Then SMS authentication was added as a possible second factor for improved security. Later a dedicated app was created for using your smartphone as a second factor, relying on the security features of the operating system. More recently a capability called check-id was added, letting the app read the NFC chip of identity cards and use that as the basis for authentication. More information on the identity card login method is available on the website.

When trying to start the digital request, this time the DigiD prompt didn’t show the SMS authentication option I would normally use. I could choose between the DigiD app and the option to read the NFC chip from the identity card. I was baffled and assumed I had perhaps made a mistake. Carefully retracing my steps I retried, but again I was faced with the same prompt.

 

 

Doing some more research, I found out that SMS was not considered safe enough for this application, and so this project was set up to at least require an installed DigiD App as a second factor, or the use of an NFC readout of your ID-card.

I actually didn’t want to install the DigiD Android app, despite having a Nokia 8.1 smartphone with Google One Android on it. My previous phone was a Fairphone 2 with Fairphone Open OS, the Google-free Android version by Fairphone. Having experienced Google-free Android, I’ve become aware of how much apps rely on Google libraries and services to function. It had taken me quite some experimentation to move my app usage over to apps that did not rely on Google services. As I was considering another Google-free Android phone as my next phone, I didn’t want to commit myself to using an app that relied on Google to function, which the DigiD App does. I was also looking towards a Linux phone like the Pinephone with Mobian, which would move me even further away from the Android app ecosystem.

I looked on the DigiD website for suggestions for this situation. The official recommendation is to ask somebody else with the DigiD app for help. I couldn’t believe what I was reading. My government was now the single strongest force pulling me into the vendor-tied smartphone ecosystem I resent. I had already read about the plan to phase out SMS (Dutch article by Tweakers.net) and how the government is fuelling the Google and Apple duopoly (Dutch article, Archive.is), but being faced with it in real life made it so much more real and urgent. Already in 2018, when I was using the Fairphone, I emailed DigiD asking if the DigiD app could be provided outside of the Google Play store, but got the answer that that was not possible.

In contrast, the situation in Germany is quite the opposite. AusweisApp2 is the German identification app, which is available in F-Droid, Debian and many other Free Software repositories. All of this is made possible because the source code is provided under a Free Software license (EUPL v1.2). This allowed the community to make the application available on many different platforms. The AusweisApp2 uses the chip in the identity card or passport as the basis for identity, so the app merely has to facilitate communication with online services. Compared to apps like DigiD that act as a digital identity directly, only having to relay information reduces the security requirements. And without the reliance on vendor-specific crypto libraries it is easier to open up the code for transparency and collaboration, as the Germans have done.

I decided I would stand by my principle of not installing the app and see what I could achieve. Worst case, I would have to go back to the physical process I had used the last time I got my driving license. So I reached out to the RDW team responsible for this digital process, which was still called an experiment despite having been in use for a couple of years already. I explained my situation, mentioned that I was not willing to ask anybody for help because I didn’t want to be relying on others for my digital services, and I asked about alternatives. I got a formal reply repeating what I had already read online: it was not possible without the app.

In the meantime there was also a desktop application available to read out the NFC chip of an identity card. This app is only available through the Windows 10 app store. With all my computers running Debian or Ubuntu, that was not an option for me, quite apart from the fact that I didn’t have an ID-card with an NFC chip in it to actually identify with. So unless the government starts releasing the applications for different operating systems, I don’t see this as a solution for me either.

 

Not having a solution that I could use by myself without relying on Google, I resorted to the traditional physical process. I went to my local photographer to get my picture taken, the same one who had told me about the digital process a few years earlier. He asked me if I wanted to use the digital process after I mentioned my picture was for my driving license renewal. I replied that I didn’t want to make use of it because I didn’t want to install the app. And so I got my pictures in analog format, rather than having them sent digitally to the correct agency. Later I went to city hall to hand over my photograph and sign the papers requesting the renewal. A couple of days later I went back to city hall to pick up my new driving license, and that was that.

Compared to the digital process it took me one more trip to city hall to file the request, and it took some more paperwork. For a single case this wasn’t so bad and it was something to overcome. But with SMS planned to be phased out in 2022 the impact would be much greater. Most online public services require a second factor of authentication now, and more and more services are becoming digital. Tax registration is one of the services that still allows authentication without a second factor, but for how long? Dealing with public services without the DigiD app will become increasingly difficult, and that is why we need a solution that meets the ‘vendor-neutral’ and ‘open’ principles that our government itself is calling for.

The Dutch DigiD app acts as the source of identity and thus relies on the frameworks by Apple and Android to guarantee a trustworthy identity. To ever achieve a Free Software app in the Netherlands we should not rely on the locked-down operating systems and libraries of vendors to provide security guarantees. Like in Germany, relying on an identification chip in hardware can provide the trust a government needs without introducing this reliance. Another solution might be the IRMA app which relies partly on online connectivity for its security. IRMA has an active community in the Netherlands consisting of public bodies like municipalities and several companies needing a secure and accessible means of authentication. Regardless of the technical solution we end up with, it is important that it is vendor-neutral, free software, based on open standards and open for community contributions like operating system support. In 2020 Waag together with other organizations has already pushed for these values in the #goedID campaign.

It worries me that our government so far seems inconsiderate of our stance. The information on the website seems to imply that if you don’t have a Google Android or Apple smartphone you lack digital skills and fall into the same category as the elderly. Our community is quite the contrary. Exactly because we are so skilled and knowledgeable, we avoid corporate dependence where we can. We need to make our voices heard and let the government know that we expect them to step up their game. In the last couple of years our community in the Netherlands has shown the willingness and ability to cooperate, for example by contributing to open source applications like the Covid tracing and QR-code apps and by making them available on F-Droid. So let’s keep that spirit of collaboration, call out the government on the crisis they have created, and demand a solution that meets our values.
