Thoughts of the FSFE Community

Wednesday, 15 November 2017

Linking hackerspaces with OpenDHT and Ring - fsfe | 19:57, Wednesday, 15 November 2017

Francois and Nemen at the FIXME hackerspace (Lausanne) weekly meeting are experimenting with the Ring peer-to-peer softphone:

Francois is using a Raspberry Pi and PiCam to develop a telepresence network for hackerspaces (the big screens in the middle of the photo).

The original version of the telepresence solution uses WebRTC. Ring's OpenDHT potentially offers more privacy and resilience.

KVM-virtualization on ARM using the “virt” machine type

Daniel's FSFE blog | 17:20, Wednesday, 15 November 2017


A while ago, I described how to run KVM-based virtual machines on libre, low-end virtualization hosts on Debian Jessie [1]. For emulating the ARM board, I used the vexpress-a15 machine type, which complicates things as it requires specifying compatible DTBs. Recently, I stumbled upon Peter’s article [2], which describes how to use the generic “virt” machine type instead of vexpress-a15. This promises advantages such as the ability to use PCI devices, and makes the process of creating VMs much easier.

It was also reported to me that my instructions caused trouble on Debian Stretch (virt-manager generates incompatible configs when choosing the vexpress-a15 target). So I spent some time figuring out how to run VMs with the “virt” machine type using virt-manager (Peter’s article only described the manual way using command-line calls). The process involved several traps, so I decided to write up this article. It gives a brief overview of how to create a VM using virt-manager on an ARMv7 virtualization host such as the Cubietruck or the upcoming EOMA68-A20 computing card.


All data and information provided in this article is for informational purposes only. The author makes no representations as to accuracy, completeness, currentness, suitability, or validity of any information on this article and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its display or use. All information is provided on an as-is basis.

In no event will the author be liable for any loss or damage, including without limitation indirect or consequential loss or damage, or any loss or damage whatsoever arising from loss of data or profits arising out of, or in connection with, the use of this article.

Tested VMs

I managed to successfully create and boot up the following VMs on a Devuan (Jessie) system:

  • Debian Jessie Installer
  • Debian Stretch Installer
  • Debian Unstable Installer
  • Fedora 27 (see [3] for instructions on how to obtain the necessary files)
  • Arch Linux, using the latest ARMv7 files available at [4]
  • LEDE

I was able to reproduce the steps for the Debian guests on a Debian Stretch system as well (I did not try with the other guests).

Requirements / Base installation

This article assumes you have set up a working KVM virtualization host on ARM. If you don’t have one, please work through my previous article [1].

Getting the necessary files

Depending on the system you want to run in your Guest, you typically need an image of the kernel and the initrd. For the Debian-unstable installer, you could get the files like this:

wget -O vmlinuz-debian-unstable-installer


wget -O initrd-debian-unstable-installer.gz

Creating the Guest

Now, fire up virt-manager and start the wizard for creating a new Guest. In the first step, select “Import existing disk image” and keep the default settings, which should already use the Machine Type “virt”:

In the second step, choose a disk image (or create one) and enter the paths to the kernel and initrd that you downloaded previously. Leave the DTB path blank and put “console=ttyAMA0” as kernel arguments. Choose an appropriate OS type or just leave the default (though this may negatively impact the performance of your guest, or other things may happen, such as your virtual network card not being recognized by the installer):

Next, select memory and CPU settings as required by your Guest:

Finally, give the VM a proper name and select the “Customize configuration before install” option:

In the machine details, make sure the CPU model is “host-passthrough” (enter it manually if you can’t select it in the combo box):

In the boot options tab, make sure the parameter “console=ttyAMA0” is there (otherwise you will not get any output on the console). Depending on your guest, you might also need more parameters, such as for setting the rootfs:
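For reference, the relevant parts of the resulting libvirt domain XML look roughly like this. This is only a sketch reflecting the settings above; the paths, the domain name and the exact element layout are assumptions and will differ on your system:

```xml
<domain type='kvm'>
  <name>debian-unstable-armv7</name>
  <os>
    <!-- the generic "virt" machine type instead of vexpress-a15 -->
    <type arch='armv7l' machine='virt'>hvm</type>
    <kernel>/var/lib/libvirt/images/vmlinuz-debian-unstable-installer</kernel>
    <initrd>/var/lib/libvirt/images/initrd-debian-unstable-installer.gz</initrd>
    <!-- without console=ttyAMA0 you get no output on the console -->
    <cmdline>console=ttyAMA0</cmdline>
  </os>
  <!-- pass the host CPU through instead of emulating one -->
  <cpu mode='host-passthrough'/>
  <!-- disks, network, console devices, ... -->
</domain>
```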

Finally, click “begin installation” and you should see your VM boot up:

Post-Installation steps

Please note that after installing your guest, you must extract the kernel and initrd from the installed guest image (you want to boot the real system, not the installer) and change your VM configuration to use these files instead.

Eventually, I will provide instructions on how to do this for a few guest types. Meanwhile, you can find instructions for extracting the files from a Debian guest in Peter’s article [2].


Monday, 13 November 2017

Software freedom in the Cloud

English on Björn Schießle - I came for the code but stayed for the freedom | 22:00, Monday, 13 November 2017

Looking for Freedom

How to stay in control of the cloud? - Photo by lionel abrial on Unsplash

What does software freedom actually mean in a world where more and more software no longer runs on our own computers but in the cloud? I have been thinking about this topic for quite some time, and from time to time I run into discussions about it, for example a few days ago on Mastodon. Therefore I think it is time to write down my thoughts.

"Cloud" is a huge marketing term which can mean a lot of things. In the context of this article, the cloud means something quite similar to SaaS (software as a service). This article will use the two terms interchangeably, because these are also the terms the Free Software community uses to discuss this topic.

The original idea of software freedom

In the beginning, all software was free. In the 1980s, when computers became widely used and people started to make software proprietary in order to maximise their profit, Richard Stallman came up with an incredible hack. He used copyright to re-establish software freedom by defining these four essential freedoms:

  1. The freedom to run the software for every purpose
  2. The freedom to study how the program works and adapt it to your needs
  3. The freedom to distribute copies
  4. The freedom to distribute modified versions of the program

Every piece of software licensed in a way that grants the user these four freedoms is called Free Software. These are the basic rules to establish software freedom in the world of traditional computing, where the software runs on our own devices.

Today almost no company can exist without using at least some Free Software. This huge success was possible due to a pragmatic move by Richard Stallman, driven by a vision of how a freedom-respecting software world should look. His idea was the starting point for a movement which came up with a completely new set of software licenses and various Free Software operating systems. It enabled people to continue using computers in freedom.

SaaS and the cloud

Today we no longer have just one computer. Instead we have many devices such as smartphones, tablets, laptops, smartwatches, small home servers, IoT devices, and maybe still a desktop computer at the office. We want to access our data from all these devices and switch between them seamlessly while working. That’s one of the main reasons why software as a service (SaaS) and the cloud became popular: the software runs on a server, and all our devices connect to it. But of course this comes at a price. It means that we are relying more and more on someone else's computer instead of running the programs on our own. We lose control. This is not completely new; some of these solutions are quite old, others are rather new. Examples include mail servers, social networks, source code hosting platforms, file sharing services, platforms for collaborative work, and many more. Many of these services are built with Free Software, but the software only runs on the server of the service provider, so the freedom never reaches the user. The user stays helpless. We hand over our data to servers we don’t control. We have no idea what happens to our data, and for many services we have no way to get it back out. Even if we can export the data, we are often helpless, because without the software which runs the original service, we can’t perform the same operations on our own servers.

We can’t turn back the time

We can’t stop the development of such services. History tells us that we can’t stop technological progress, whether we like it or not. Telling people not to use it will not have any notable impact. Quite the opposite: we, the Free Software movement, would lose the reputation we have built over the last decades, and with it any influence. We would no longer be able to change things for the better. Think again about what Richard Stallman did about thirty years ago. He grew up in a world where software was free by default. When computers became a mass-market product, more and more manufacturers turned software into a proprietary product. Instead of developing the powerful idea of Free Software, Richard Stallman could have decided to no longer use these modern computers and asked people to follow him. But would many people have joined him? Would it have stopped the development? I don’t think so. We would still have all the computers as we know them today, but without Free Software.

That’s why I strongly believe that, as thirty years ago, we again need a constructive and forward-looking answer to the new challenges brought to us by the cloud and SaaS. We, the Free Software community, need to be the driving force steering this new way of computing onto a path that respects the users' freedom, just as Richard Stallman did back then by starting the Free Software movement. All this is done by people, so it’s people like us who can influence it.

Finding answers to these questions requires us to think in new directions. The software license is still the cornerstone: without the software being Free Software, everything else is void. But being Free Software is by no means enough to establish freedom in the world of the cloud.

What does this mean to software freedom?

Having a close look at cloud solutions, we realise that they usually contain two categories of software: software that runs on the server itself, and software served by the server but executed on the user's computer, typically JavaScript.

Following the principle of the well-established definition of software freedom, the software distributed to the user needs to be Free Software. I would call this the necessary precondition. But by just looking at the license of the JavaScript code, we are trying to solve today’s problems with the tools of the past, completely ignoring that in the world of SaaS your computer is no longer the primary device. Getting the source code of the JavaScript under a Free Software license is nice, but it is not enough to establish software freedom. The JavaScript is tightly connected to the software which runs on the server, so users can’t change it much without breaking the functionality of the service. Further, with each page reload the user again gets the original version of the JavaScript. This means that, with respect to the user's freedom, access to the JavaScript code alone is insufficient. Free JavaScript has mainly two benefits: first, the user can study the code and learn how it works, and second, maybe reuse parts of it in their own projects. But to establish real software freedom, a service needs to fulfil more criteria.

The user needs access to the whole software stack, both the software which runs on the server and the software which runs in the browser. Without the right to use, study, share, and improve the whole software stack, freedom will not be possible. That’s why the GNU AGPLv3 is incredibly important. Without going into too much detail, the big difference is how the license defines the meaning of “distribute”. This term is critical to the Free Software definition: it defines at which point the rights to use, study, share, and improve the software are transferred to a user. Typically that happens when the user gets a copy of the software. But in the world of SaaS, you no longer get a real copy of the software; you just use it over a network connection. The GNU AGPLv3 makes sure that this kind of usage already entitles you to get the source code. Only if both the software which runs on the server and the software which runs in the browser is Free Software can users start to consider exercising their freedom. Therefore my minimal definition of freedom-respecting services would be that the whole software stack is Free Software.

But I don’t think we should stop here. We need more in order to drive innovation forward in a freedom-respecting way. This is also important because various software projects are already working on it. Telling them that these extra steps are only “nice to have” but not really important sends the wrong message.

If the whole software stack is Free Software, we have achieved the minimum requirement that allows everyone to set up their own instance. But in order to avoid building many small islands, we need to enable the instances to communicate with each other, a feature called federation. We see this already in the area of freedom-respecting social networks and in the area of file sync and share. About a year ago I wrote an article arguing that this is a feature needed for next-generation code hosting platforms as well, and I’m happy to see that GitLab has started to look into exactly this. Only if many small instances can communicate with each other, completely transparently for the user so that it feels like one big service, does exercising your freedom to run your own server become really interesting. Think for a moment about the World Wide Web: if you browse the Internet, it feels like one gigantic universe, the web. It doesn’t matter whether the page you navigate to is located on the same server or on a different one, thousands of kilometres away.

If we reach the point where the technology is built and licensed in a way that lets people decide freely where to run a particular service, there is one missing piece: we need a way to migrate from one server to another. Let’s say you start using a service provided by someone, but at some point you want to move to a different provider or decide to run your own server. In this case you need a way to export your data from the first server and import it into the new one, ideally in a way which allows you to keep the connection to your friends and colleagues, in the case of a service which provides collaboration or social features. Initiatives like the User Data Manifesto have already thought about this and given some valuable answers.


How do we achieve practical software freedom in the world of the cloud? In my opinion, these are the cornerstones:

  1. Free Software: the whole software stack, meaning the software which runs on the server and in the user's browser, needs to be free. Only then can people exercise their freedom.

  2. Control: people need to stay in control of their data and need to be able to export and import it in order to move.

  3. Federation: being able to exercise your freedom to run your own instance of a service without creating small islands and losing the connection to your friends and colleagues.

This is my current state of thinking with respect to this subject. I’m happy to hear more opinions about it.

Tags: #FreeSoftware #cloud #saas #freedom

Introducing: forms

free software - Bits of Freedom | 17:30, Monday, 13 November 2017

Introducing: forms

In this post, I will introduce you to the FSFE's forms API, a way to send emails and manage sign-ups on web pages used in the FSFE community.

For our Public Money, Public Code campaign, as well as for two of our other initiatives which launched not too long ago (Save Code Share and REUSE), we needed a way to process form submissions from static web sites in a way which allowed for some templating and customisation for each web site.

This is not a new problem by any means, and there are existing solutions like formspree and Google Forms which allow you to submit a form to another website and get a generated email or similar with the results. Some of these are proprietary, others not.

We decided to expand upon the idea of formspree and create a utility which not only turns form submissions into emails, but also allows for storing submissions in JSON format (so we can easily process them afterwards) and for customising the mails sent. So we built forms.

The idea is simple: on a static website, you put a <form> whose action submits it to the forms endpoint. Our forms system then processes the request, acts according to the configuration for that particular website, and redirects the user back to the static website.

Some of the use cases where this can be employed are:

  • Signup for a newsletter or mailing list
  • Adding people to an open letter
  • Sending templated e-mails to politicians on behalf of others
  • Contact forms of various kinds

Each of these can be made to happen with or without confirmation of an email address. We typically require confirmed e-mail addresses, though, and this is the recommended practice. It means the person submitting a form will get an email asking them to click a link to confirm their submission. Only when they click that link will any action be taken.
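The confirm-by-link mechanism described above can be sketched in a few lines of Python. This is a hypothetical illustration of the idea, not the FSFE's actual implementation; the names `pending`, `submit` and `confirm_submission` are made up:

```python
import secrets

# Pending submissions, keyed by a random confirmation token.
pending = {}

def submit(form_data):
    """Store the submission and return the token that would be embedded
    in the confirmation URL mailed to the 'confirm' address."""
    token = secrets.token_urlsafe(16)
    pending[token] = form_data
    return token

def confirm_submission(token):
    """Act on a submission only when the emailed link is clicked.
    Returns the stored form data once, or None for an unknown token."""
    form_data = pending.pop(token, None)
    if form_data is None:
        return None  # unknown or already-used token
    # ...here the real service would send the final mail / store the data
    return form_data
```

Note that popping the token makes each confirmation link single-use: a second click on the same link does nothing.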

Here's how a form on a webpage could look:

<form method="POST" action="">  
  <input type="hidden" name="appid" value="totick2">
  Your name: <input type="text" name="name">
  Your e-mail: <input type="email" name="from" />
  Your message: <input type="text" name="msg" />
</form>

You will notice that aside from the text and email fields, which the visitor can fill in, there's also a hidden field called appid. This is the identifier the forms API uses to separate different forms, to know how to behave in each case, and which templates to use.

The configuration, if you want to have a look at it, is in applications.json. For a simple contact form, it can look like this:

  "contact": {
    "to": [ ],
    "include_vars": true,
    "redirect": "",
    "template": "contact-form"
  }

This does not have any confirmation of the sender's email, and simply says that upon submission, an email should be sent to the configured address using the template contact-form, including the extra variables (like name, email, etc.) which were included in the form. The submitter is then redirected to the configured URL, where there would presumably be some thank-you note.

The templates are a bit magical; they are defined in a two-step process. First you give the identifier of the template in applications.json. Then, in templates.json in the same directory, you define the actual template:

  "contact-form": {
    "plain": {
      "content": "{{ msg }}. Best regards, {{ name }}"
    },
    "required_vars": [
      "msg", "name"
    ]
  }

This simply says that we need the msg and name variables from the form, and we include them in the content of the email which is sent on submit. You can also specify an html fragment, which would then complement the plain part, and instead of content you can specify filename, so the template isn't included in the JSON but loaded from an external file.
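A template like the one above could be rendered with a small substitution function along these lines. This is my own sketch of the idea; the actual forms code may implement it differently:

```python
import re

def render(template, variables, required_vars=()):
    """Substitute {{ name }} placeholders, failing if a required
    variable was not submitted with the form."""
    missing = [v for v in required_vars if v not in variables]
    if missing:
        raise ValueError("missing required variables: " + ", ".join(missing))
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(variables.get(m.group(1), "")),
                  template)

print(render("{{ msg }}. Best regards, {{ name }}",
             {"msg": "Hello", "name": "Jonas"},
             required_vars=("msg", "name")))
# Hello. Best regards, Jonas
```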

Now, back to our web form. The form we created contained from, name and msg input fields. The latter two were created by us for this particular form, but from is part of a set of form values which control the behaviour of the forms API.

In this case, from is understood by the forms API to be the value of the From: field of the email it is supposed to send. This variable can be set either in the <form>, as an input field, hidden field, or similar, or in the application config.

If it appears in the application config, this takes precedence and anything submitted in the form with the same name will be ignored. These are the variables which can be included either in the <form> or in the application config:

  • from
  • to
  • reply-to
  • subject
  • content
  • template

Each of them does pretty much what it says: it defines the headers of the email sent, or the template or content to be used for the email.
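The precedence rule, application config over form input, could be modelled like this. The names `CONTROLLED` and `resolve` are invented for this sketch and are not part of the service's actual code:

```python
# Variables the forms API treats specially (see the list above).
CONTROLLED = ("from", "to", "reply-to", "subject", "content", "template")

def resolve(form_data, app_config):
    """Merge form input with the application config; for the controlled
    variables, a value in the config overrides whatever the form sent."""
    resolved = dict(form_data)
    for key in CONTROLLED:
        if key in app_config:
            resolved[key] = app_config[key]
    return resolved

# The form tries to set 'to', but the config wins:
print(resolve({"to": "attacker@example.org", "name": "Jonas"},
              {"to": "contact@example.org"}))
# {'to': 'contact@example.org', 'name': 'Jonas'}
```

Letting the config win is what prevents a visitor from hijacking the submission, e.g. by sneaking their own "to" field into the POST data.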

The application config itself can define a number of additional options, which control how the forms API functions. The most frequently used ones are given below (you can see the whole list and some examples in the README).

  • include_vars, which we also touched upon: if set to true, it makes extra variables from the form available to the template.
  • confirm, if set to true, means the form must contain an input field called confirm with a valid email address, and that this address will receive a confirmation mail with a link to click before the submission is acted upon.
  • redirect is the URL to which to redirect the user after submitting the form.
  • redirect-confirmed is the URL to which to redirect the user after clicking a confirmation link in an email.
  • confirmation-template is the template id for the confirmation mail.
  • confirmation-subject is the subject of the confirmation mail.

Let's look at a more complete example: the form which at one point was used to sign people up for a mailing list about our REUSE initiative.

<form method="post" action="">  
  <input type="hidden" name="appid" value="reuse-signup" />
  <input type="email" name="confirm" size="45" id="from" placeholder="Your email address" /><br />
  <input type="checkbox" name="optin" value="yes"> I agree to the <a href="/privacy">privacy</a> policy.
  <input type="submit" value="Send" /><br />
</form>

You can see here that we use the trick of naming the email field confirm in order to use the confirmation feature. Otherwise, we mostly include a checkbox called optin for the user to confirm they've agreed to the privacy policy.

The application config for this is:

  "reuse-signup": {
    "from": "",
    "to": [ "" ],
    "subject": "New signup to REUSE",
    "confirm": true,
    "include_vars": true,
    "redirect": "",
    "redirect-confirmed": "",
    "template": "reuse-signup",
    "store": "/store/reuse/signups.json",
    "confirmation-template": "reuse-signup-confirm",
    "confirmation-subject": "Just one step left! Confirm your email for REUSE!"
  }

What this says is that upon submission, we want to confirm the email address (confirm is true). So when someone submits this form, instead of acting on it right away, the system will redirect the user to a webpage (redirect) and send an email to the address entered into the web form. That email will have the subject Just one step left! Confirm your email for REUSE! (confirmation-subject) and the following content (which is defined in the templates):

Thank you for your interest in our work on making copyrights and licenses computer readable!

There's just one step left: you need to confirm we have the right email for you, so we know where to reach you to stay in touch about our work on this.

Click the link below and we'll be on our way!

  {{ confirmation_url }}

Thank you,

Jonas Öberg  
Executive Director

FSFE e.V. - keeping the power of technology in your hands.  
Your support enables our work, please join us today  

The confirmation_url will be replaced with the URL the submitter has to click.

When the submitter gets this mail and clicks on the link given, they will be redirected to another website (redirect-confirmed), and an email will be sent according to the specifications.

The content of this mail will be:

{{ confirm }};{{ optin }}

For instance:;yes  

Since it's an email to myself, I didn't bother to make it fancy. But we could easily have written a template such as:

Great news! {{ confirm }} just signed up to hear more from your awesome project!  

And that's it. But if you paid attention, you'll have noticed another defined variable which we haven't explained yet:

    "store": "/store/reuse/signups.json",

This isn't terribly useful unless you have access to the server where the API runs, but essentially, it makes sure that not only is an email sent to me upon confirmation, but the contents of that email and all the form variables are also stored away in a JSON structure, which we can then use to do something more automated with the information.
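Such a store option could work roughly like this. This is a sketch under my own assumptions (appending submissions to a JSON list on disk); the real service's storage format may differ:

```python
import json
import os
import tempfile

def store_submission(path, submission):
    """Append a submission dict to a JSON list kept at 'path'."""
    entries = []
    if os.path.exists(path):
        with open(path) as f:
            entries = json.load(f)
    entries.append(submission)
    with open(path, "w") as f:
        json.dump(entries, f, indent=2)

# Demo with a throwaway path instead of /store/reuse/signups.json:
path = os.path.join(tempfile.mkdtemp(), "signups.json")
store_submission(path, {"confirm": "user@example.org", "optin": "yes"})
```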

If you're interested in the code behind this, it's available in Git, where there is also a bunch of issues for improvements we've thought about. In the long term, it would be nice if:

  • the confirmation and templating options which grew over time were reviewed to make them clearer; right now, adding a template often involves messing with three files (the application config, the template config, and the template files themselves),
  • the storage of the signups were accessible without having direct access to the server running it.

But those are bigger tasks, and at least for now, the forms API does what it needs to do. If you want to use it too, the best way is to clone the Git repository, update the application and template configs, and send a pull request. We can help you review your configuration and merge it into master.

The 2% discussion - "Free Software" or "Open Source Software"

Matthias Kirschner's Web log - fsfe | 07:34, Monday, 13 November 2017

Scott Peterson from Red Hat this week published an article, "Open Source or Free Software". It touches on a very important misunderstanding: people still believe that the terms "Open Source Software" and "Free Software" refer to different software. They do not! Scott asked several interesting questions in his article, and I thought I should share my thoughts about them here and hopefully provoke some more responses on an important topic.

Is it a car?

The problem described in the article is that "Free Software" and "Open Source Software" are associated with certain values.

One of the questions was whether it would be useful to have a neutral term to describe this software. Yes, I think it would. The question I read between the lines is: is it possible to have a neutral term? ("Or is the attempt to separate the associated values a flawed goal?") Here I see a huge challenge, and I doubt it is possible. Almost all terms develop a connection with values over time.

In my talks I often use the example of a car. A lot of people say "car", but there are still many other terms which are used depending on the context, e.g. the brand name: people say "let's take my (Audi, BMW, Mercedes, Peugeot, Porsche, Tesla, Toyota, Volkswagen, ...)". This puts the emphasis on the manufacturer. Some people might call it "auto"; if you call it "automobile", "vehicle", "vessel", "craft", or "motor-car", you might have different values, or at least be perceived in a different way (maybe "old-school" or conservative), than someone who always calls it "car". The same goes for "computer on four wheels", which highlights yet other aspects and is also not neutral, as you most likely want to shift the focus to a certain aspect.

Which brings me to Scott's next question: "What if someone wants to refer to this type of software without specifying underlying values?" I doubt it will be possible to find such a term and keep it neutral over the longer term, especially if there is already an existing term. You have to explain to people why they should use the new term, and it is difficult to do that without associating the new term with a value opposite to that of the existing term ("so you don't agree that freedom / availability of source code is important?").

As a side note, Scott mentioned FOSS and FLOSS as possible neutral terms. This might work (and for some projects the FSFE also used those terms, as some organisations would otherwise not have participated). It might also mean that people who prefer "Free Software" or "Open Source Software" will both be unhappy with you. The problem I see with the combined terms FOSS and FLOSS is that they deepen the misunderstanding that Open Source Software and Free Software are different software. Why else would you have to combine them? (Would you say "car automobile vehicle"? And if you did, would you be seen as more neutral?)

My main question is: do we really need a neutral term? Why can't everybody just choose the term that is closer to their values? Whatever term someone else uses, we treat them with respect and without any prejudice. Instead of trying to find something neutral, shouldn't we work on making that happen?

Why is it a problem if one person uses "Free Software" and another "Open Source Software", if we agree that it is the same thing we are talking about? Do we see a problem if one person says car and the other vehicle? (It would be different if people could not agree on whether that thing is a BMW or a Volkswagen.)

I would be interested in your thoughts about this.

Besides that, a lot of these discussions happen at an expert level and sometimes assume that other people choose those terms deliberately, too. I want to challenge that assumption. I have met many people who use one of the terms, and after talking with them I realised that they are more in line with the values I would have associated with the other term. That is why I think it is important to keep in mind that you will most likely not know the values of your conversation partner just from them saying "Open Source" or "Free Software"; you need to invest more effort to understand the other person.

There are also many people in our community who use completely different terms, as they mainly speak in their native language, which is not English. They might say Vapaat ohjelmistot, Logiciels Libres, Software Libero, Ελεύθερο Λογισμικό, Fri software, Software Libre, Özgür Yazılım, Fri mjukvara, Software-i i Lirë, Свободные программы, 自由暨開源軟體, Software Livre, Freie Software, Offene Software, ... Some of these might have a slightly different meaning than the corresponding English term. What values do the people who use them have? And even if we found a neutral English term, would we ever find neutral words for people who do not speak English?

Let's also keep in mind that there are people discussing the underlying principles and values without using any of the terms "Open Source Software", "Free Software", "Libre Software", FOSS, or FLOSS. They rather discuss the principles directly: we need to make sure the software does not restrict what we can do with it, or for how long. We need to be able to understand what it does, or ask others to do so, without asking anyone else for permission. We should be able to give it to our business partners, put it on as many of our servers and products as we want, and scale our installations, all without restrictions. We need to make sure that we, or someone else, can always adapt the software to our new or changing needs.

They might not mention any of the terms even once, although Free Software is the solution to those concerns. They might discuss them under the labels of digital sovereignty, digital sustainability, vendor neutrality, agility, reliability, or other terms. I am sure that if a concept is successful, this will often happen -- and it is not a bad sign when it does. So we do not have to see it as a problem if someone else uses another term than we do ourselves, especially if they agree with us on most of our goals and values.

Finally, my biggest concern is people who (deliberately or by mistake) say something is Free Software or Open Source Software when the software simply is not. And let us not forget the more than 98% of people around the world who do not know that Free Software -- or whatever else you call it -- exists, or what exactly it means. For me, that is the part we have to concentrate our efforts on.

Thanks for reading and I am looking forward to your comments.

PS: On this topic I highly recommend Björn Schießle, 2012, "Free Software, Open Source, FOSS, FLOSS - same same but different", which we use heavily in the FSFE when people have questions about the topic (thanks to the FSFE's translation team, this article is now available in four languages). You might also be interested in Simon Phipps, 2017, "Free vs Open".

Saturday, 11 November 2017

Digital preservation of old crap

free software - Bits of Freedom | 13:17, Saturday, 11 November 2017

Digital preservation of old crap

I've collected a lot of crap over the years. Most of it in subdirectories of subdirectories. Of subdirectories of subdirectories. I recently made some useful discoveries in /home/jonas/own/_private/Arkiv/Ancient/Arkiv/ancient-archive/Salvage/misc/14. The stash of documents in this place originated in old floppy disks from my youth, which I salvaged at some point and placed into an archive directory. Which got placed in another archive directory. Which was ancient, so I placed it in an ancient directory, which was placed in an archive directory.

Over the years, I've made some attempts at sorting this out, and possibly around 7 years ago I even made a tool which would help me tag and index archived material. It didn't last long. But it in itself contains a handful of archived documents which I clearly felt were important at the time: notes from the FSFE General Assembly in Manchester in 2006, a Swedish translation of the Tibetan song Rangzen, and then this:

 Archive ID: 2d8f7304
Description: Kiosk computer image for Tekniska Museet

#  Filename    Filetype          Tags
1              application/zip   work[gu, fsfe]

This is the image file used for an exhibition at the Swedish National Museum of Science and Technology, for which I once helped create a Freedom Toaster. I doubt it has any historical value, but I couldn't manage to part with it. And this is how you end up with paths such as ./Arkiv/Ancient/Ancient/Programming/Gopher/GopherHistory/data/raw/

(That directory contains a copy of an 18 year-old Slashdot article talking about how SCO might start offering Linux support. The article was snarfed up, archived and included in my Gopher mirror of Slashdot at the time.)

Either way, back to the point of this posting: I'm looking for recommendations. What I would like to have is a tool which would allow me to organise my archive in some sensible way. I feel a need to be able to add tags (like my previous tool did), but I also feel I need to add more metadata and stories to it.
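As a sketch of the kind of tool I have in mind (the schema and function names here are entirely hypothetical, not a recommendation of any existing product), a few lines of Python with the standard-library sqlite3 module already cover tags plus free-form metadata and stories:

```python
import sqlite3

def open_archive_index(path=":memory:"):
    """Create (or open) a tiny archive index with tags and metadata."""
    db = sqlite3.connect(path)
    db.executescript("""
        CREATE TABLE IF NOT EXISTS items (
            id TEXT PRIMARY KEY,       -- e.g. "2d8f7304"
            description TEXT,
            story TEXT                 -- free-form provenance notes
        );
        CREATE TABLE IF NOT EXISTS tags (
            item_id TEXT REFERENCES items(id),
            tag TEXT
        );
    """)
    return db

def add_item(db, item_id, description, story, tags):
    db.execute("INSERT INTO items VALUES (?, ?, ?)",
               (item_id, description, story))
    db.executemany("INSERT INTO tags VALUES (?, ?)",
                   [(item_id, t) for t in tags])

def find_by_tag(db, tag):
    return [row[0] for row in db.execute(
        "SELECT i.description FROM items i JOIN tags t ON t.item_id = i.id "
        "WHERE t.tag = ?", (tag,))]

db = open_archive_index()
add_item(db, "2d8f7304", "Kiosk computer image for Tekniska Museet",
         "Freedom Toaster exhibition; salvaged from old backups.",
         ["work", "gu", "fsfe"])
print(find_by_tag(db, "fsfe"))
```

The "story" column is the part my old tool lacked: somewhere to record when a collection is from and how I ended up having it.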

The entire Gopher project, I would probably wrap into one big file and archive it as a collection. But I would want to add to this some information about what that collection actually contains, when it's from, and how I ended up having it.

Ideally in a way such that parts of the archive which are public, and which could be interesting for others, can easily and automatically be published in an inviting way.

Let me have your thoughts. Do I really need to look at tools such as Omeka or Collective Access or can I wing it, and avoid having to pay for an archivist?

Tuesday, 07 November 2017

Software Archaeology

David Boddie - Updates (Full Articles) | 23:32, Tuesday, 07 November 2017

Just over 21 years ago I took a summer job between university courses. Looking back at it now I find it surprising that I was doing contract work. These days I tend to think that I'm not really cut out for that kind of thing but, when you're young, you tend to think you can do anything. Maybe it's just a case of having enough confidence, even if that can get you into trouble sometimes.

The software itself was called Zig Zag - Ancient Greeks and was written for the Acorn RISC OS platform that, in 1996, was still widely used in schools. Acorn had dominated the education market since the introduction of the BBC Micro in the early 1980s but the perception of the PC, particularly in its Windows incarnation, as an "industry standard" continuously undermined Acorn's position with decision-makers in education. Although Acorn released the RiscPC in 1994 with better-than-ever PC compatibility, it wasn't enough to halt the decline of the platform and, despite a boost from the launch of a StrongARM CPU upgrade in 1996, the original lifespan of the platform ended in 1998.

The history of the platform isn't really very relevant, except that Acorn's relentless focus on the education market, while potentially lucrative for the company, made RISC OS software seem a bit uncool to aspiring students and graduates. Perhaps that might explain why I didn't seem to face much competition when I applied for a summer job writing an educational game.

Back to BASICs

This article isn't about the design of the software, or the process of making it, though maybe I should try and make an effort to dig through the sources a bit more. Indeed, because the game was written in a dialect of BBC BASIC called ARM BASIC which was the standard BASIC on the Archimedes series of computers, and fortunately wasn't obfuscated, it's still possible to look at it today. Today, the idea of writing a multi-component, multi-tasking educational experience in BASIC makes me slightly nervous. However, at that time in my life, I was very comfortable writing non-trivial BASIC programs and, although a project of this scope and complexity wasn't something I'd done before, it just seemed more of a challenge than anything else.

Apart from some time spent in the Logotron office at the beginning and end of the project, most of the work was done from home with floppy disks and documents being sent back and forth between Nicola Bradley, the coordinator, and myself. I would be told what should happen in each activity, implement it, send it back, and get feedback on what needed changing. Despite mostly happening remotely and offline, it all got done fairly quickly. It wasn't really any more efficient when I was in Cambridge working in the office.

Everything you need to make olive oil.

The other people involved in the project were also working remotely, so being in the office didn't mean that I would be working alongside them. I only met the artist, Howard Taylor, when Nicola and I went to discuss some work with him. I didn't meet Peter Oxley, the historian responsible for the themes and accuracy of the software, at all. In some ways, apart from the ongoing discussion with Nicola about each revision of the activities, it was Howard with whom I was working most closely. The graphics he created are very much of their time - for a screen of 640 by 256 pixels with 16 colours - but still charming today.

One of the limitations that we encountered was that the software needed to fit onto two 800K floppy disks. Given that all the artwork had been created and tested for each individual activity, and we couldn't do much about the code to implement the behaviour of the activities, that required some kind of compression. I wanted to use the Squash tool that Acorn supplied with their operating system but this apparently wasn't an option. Perhaps Acorn couldn't sublicense its distribution - it was based around the LZW algorithm which was presumably affected by patents in the UK. We ended up using a tool with a fairly vague, permissive license to compress the images and shipped the corresponding decompression code with the software. I believe that the algorithm used was Lempel-Ziv with Huffman coding, though I would have to disassemble the code to find out because it was only supplied in binary form.
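That combination -- a Lempel-Ziv dictionary pass followed by Huffman coding -- survives today as DEFLATE, which Python exposes through the standard zlib module. A quick round-trip (on made-up stand-in data, not the original sprites) shows why it mattered when everything had to fit on two 800K floppies:

```python
import zlib

# A crude stand-in for sprite data: long runs and repeated patterns
# compress extremely well under LZ77 + Huffman coding (DEFLATE).
raw = b"\x00" * 4000 + b"\x0f\x0f\x07\x07" * 1000

packed = zlib.compress(raw, 9)          # 9 = maximum compression level
assert zlib.decompress(packed) == raw   # lossless round-trip

print(len(raw), "->", len(packed), "bytes")
```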

As you can see above, I have a way of viewing these images today. As the author of the software, I had the original images but I wanted to view the ones that had been compressed for the release version of the software. This required the use of the RPCEmu emulator to execute a few system calls to get the original images out of the compressed data. However, once extracted, how can we view images stored in an old, proprietary file format?

Worlds Collide

Fortunately, I prepared the ground for handling images in this format a long time ago. My Spritefile Python module was created many years ago so that I could access images I wanted to keep from my teenage years. I've used it in other projects, too, so that I could view more complex files that used this format to store bitmapped images.

In keeping with my more recent activities, I wanted to see if I could create an application for Android that allows the user to browse the contents of any spritefiles they might still have. Since I'm stubborn and have my own toolchain for writing applications on Android, this meant writing it in a Python-like language, but that worked to my advantage since the Spritefile module is written in Python. It just meant that I would have to fix it up so that it fitted into the constraints imposed by the runtime environment on Android.

Blessed are the cheesemakers.

However, things are never quite that simple, though it has to be said that ensuring that the core algorithms ran on Android was a lot easier than getting the application's GUI to behave nicely, and certainly easier than getting the application to run when the user tries to open a file with the appropriate file extension. Free Software desktops are so much more advanced than Android in this regard, and even old RISC OS has better support for mapping between MIME types and file extensions!

I've put the source code for the Sprite Viewer application up in a Mercurial repository. Maybe I'll create a binary package for it at some point. Maybe someone else will find it useful, or perhaps it will bring back fond memories of 1990s educational computing.

Categories: Python, Android, Free Software

Monday, 06 November 2017

Background for future changes to membership in FSFE e.V.

Repentinus » English | 22:25, Monday, 06 November 2017

At the general assembly in October the Executive Council sought the members’ consent to simplify and streamline the route to membership in FSFE e.V. The members gave it, and as a consequence, the Executive Council will prepare a constitutional amendment to remove the institution of Fellowship Representatives at the next general assembly. If this constitutional amendment is accepted, active volunteers meeting a yet-to-be-decided threshold will be expected to directly apply for membership in the FSFE e.V. The Executive’s reasoning for moving in this direction can be found below.

For the reasons listed below, the Council believes that the institution of Fellowship Representatives has ceased to serve its original purpose (and may indeed have never served its intended purpose). In addition, it has become a tool for arbitrarily excluding active contributors from membership, and has thus become harmful to the future development of the organization. Wherefore, the Council believes that the institution of Fellowship Representatives should be removed and asks for the members’ consent in preparing a constitutional amendment to eliminate the institution and resolve the future status of Fellowship Representatives in office at the time of removal. The proposal would be presented to the General Assembly for adoption at the next ordinary meeting.

The Council believes the following:

1) The Fellowship Representatives were introduced for the purpose of giving FSFE’s sustaining donors (known as the Fellowship) a say in how FSFE is operated. This is almost unprecedented in the world of nonprofits, and our community would have been justly outraged if we had introduced similar representation for corporate donors.

2) The elections have identified a number of useful additions to the GA. Most of them can be described as volunteers who were already active in the FSFE before their election. The Council believes that by identifying and encouraging active contributors to become GA members, and by better documenting the procedure of becoming a member, the FSFE would have attracted the same people.

3) We should either agree on including volunteers whose contribution exceeds a certain threshold (core team membership? local/topical team coordinatorship? active local/topical team contributor for a year? – the threshold is entirely up for debate) as members, or we should decline to extend membership on the basis of volunteering. It is simply wrong to pit volunteers against each other in a contest where a mixture of other volunteers and a minuscule fraction of solely financial contributors decides which of our volunteers are most deserving of membership. This unfortunate mechanism has excluded at least one current GA member from membership for several years, and it has been used to discourage a few coordinators from applying for membership in the past.

4) Reaching consensus on removing the Fellowship seats is always going to be difficult because we will keep electing new Fellowship Representatives who will understandably be hostile to the idea of eliminating the post. The current members who have been able to observe past and current Fellowship Representatives and their involvement in our activities need to decide if the institution serves a useful role or not, and hence whether to remove it or not. The Council believes it does not, and will prepare a constitutional amendment for GA2018 if the majority of the members feel likewise.

Sunday, 05 November 2017

In Defence of Mail

Paul Boddie's Free Software-related blog » English | 23:21, Sunday, 05 November 2017

A recent article, “The trouble with text-only email”, gives us an insight through an initially-narrow perspective into a broader problem: how the use of e-mail by organisations and its handling as it traverses the Internet can undermine the viability of the medium. And how organisations supposedly defending the Internet as a platform can easily find themselves abandoning technologies that do not sit well with their “core mission”, not to mention betraying that mission by employing dubious technological workarounds.

To summarise, the Mozilla organisation wants its community to correspond via mailing lists but, being the origin of the mails propagated to list recipients when someone communicates with one of their mailing lists, it finds itself under the threat of being blacklisted as a spammer. This might sound counterintuitive: surely everyone on such lists signed up for mails originating from Mozilla in order to be on the list.

Unfortunately, the elevation of Mozilla to being a potential spammer says more about the stack of workaround upon workaround, second- and third-guessing, and the “secret handshakes” that define the handling of e-mail today than it does about anything else. Not that factions in the Mozilla organisation have necessarily covered themselves in glory in exploring ways of dealing with their current problem.

The Elimination Problem

Let us first identify the immediate problem here. No, it is not spamming as such, but it is the existence of dubious “reputation” services who cause mail to be blocked on opaque and undemocratic grounds. I encountered one of these a few years ago when trying to send a mail to a competition and finding that such a service had decided that my mail hosting provider’s Internet address was somehow “bad”.

What can one do when placed in such a situation? Appealing to the blacklisting service will not do an individual any good. Instead, one has to ask one’s mail provider to try and fix the issue, which in my case they had actually been trying to do for some time. My mail never got through in the end. Who knows how long it took to persuade the blacklisting service to rectify what might have been a mistake?

Yes, we all know that the Internet is awash with spam. And yes, mechanisms need to be in place to deal with it. But such mechanisms need to be transparent and accountable. Without these things, all sorts of bad things can take place: censorship, harassment, and forms of economic crime spring readily to mind. It should be a general rule of thumb in society that when someone exercises power over others, such power must be controlled through transparency (so that it is not arbitrary and so that everyone knows what the rules are) and through accountability (so that decisions can be explained and judged to have been properly taken and acted upon).

We actually need better ways of eliminating spam and other misuse of common communications mechanisms. But for now we should at least insist that whatever flawed mechanisms that exist today uphold the democratic principles described above.

The Marketing Problem

Although Mozilla may have distribution lists for marketing purposes, its problem with mailing lists is something of a different creature. The latter are intended to be collaborative and involve multiple senders of the original messages: a many-to-many communications medium. Meanwhile, the former is all about one-to-many messaging, and in this regard we stumble across the root of the spam problem.

Obviously, compulsive spammers are people who harvest mail addresses from wherever they can be found, trawling public data or buying up lists of addresses sourced during potentially unethical activities. Such spammers create a huge burden on society’s common infrastructure, but they are hardly the only ones cultivating that burden. Reputable businesses, even when following the law in communicating with their own customers, often employ what can be regarded as a “clueless” use of mail as a marketing channel, without any thought to the consequences.

Businesses might want to remind you of their products and encourage you to receive their mails. The next thing you know, you get messages three times a week telling you about products that are barely of interest to you. This may be a “win” for the marketing department – it is like advertising on television but cheaper, because you don’t have to bid against addiction-exploiting money launderers (gambling companies), debt sharks (consumer credit companies) or environment-trashing cure peddlers (nutritional supplement companies) for “eyeballs” – but it cheapens and worsens the medium for everybody who uses it for genuine interpersonal communication and not just for viewing advertisements.

People view e-mail and mail software as a lost cause in the face of wave after wave of illegal spam and opportunistic “spammy” marketing. “Why bother with it at all?” they might ask, asserting that it is just a wastebin that one needs to empty once a week as some kind of chore, before returning to one’s favourite “social” tools (also plagued with spam and surveillance, but consistency is not exactly everybody’s strong suit).

The Authenticity Problem

Perhaps to escape problems with the overly-zealous blacklisting services, it is not unusual, as a customer of a company, to get messages ostensibly from that company but actually originating from some kind of marketing communications service. The use of such a service may be excusable depending on how much information is shared, what kinds of safeguards are in place, and so on. What is less excusable is the way the communication is performed.

I actually experience this with financial institutions, which should be a significant area of concern for individuals, the industry and its regulators alike. First of all, the messages are not encrypted, which is what one might expect given that the sender would need some kind of public key information that I haven’t provided. But provided that the message details are not sensitive (although sometimes they have been, which is another story), we might not set our expectations so high for these communications.

However, of more substantial concern is the way that when receiving such mails, we have no way of verifying that they really originated from the company they claim to have come from. And when the mail inevitably contains links to things, we might be suspicious about where those links, even if they are URLs in plain text messages, might want to lead us.

The recipient is now confronted with a collection of Internet domain names that may or may not correspond to the identities of reputable organisations, some of which they might know as a customer, others they might be aware of, but where the recipient must also exercise the correct judgement about the relationship between the companies they do use and these other organisations with which they have no relationship. Even with a great deal of peripheral knowledge, the recipient needs to exercise caution that they do not go off to random places on the Internet and start filling out their details on the say-so of some message or other.

Indeed, I have a recent example of this. One financial institution I use wants me to take a survey conducted by a company I actually have heard of in that line of business. So far, so plausible. But then, the site being used to solicit responses is one I have no prior knowledge of: it could be a reputable technology business or it could be some kind of “honeypot”; that one of the domains mentioned contains “cloud” also does not instil confidence in the management of the data. To top it all, the mail is not cryptographically signed and so I would have to make a judgement on its authenticity based on some kind of “tea-leaf-reading” activity using the message headers or assume that the institution is likely to want to ask my opinion about something.
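A recipient can at least automate some of the tea-leaf reading. Using only Python's standard library, one can compare the domain in a message's From header with the domains of any links in the body and flag the strangers; the message below is, of course, a made-up example, not a real institution's mail:

```python
import email
import re
from urllib.parse import urlparse

def foreign_link_domains(raw_message):
    """Return (sender domain, set of link domains not under that domain)."""
    msg = email.message_from_string(raw_message)
    sender_domain = msg["From"].rsplit("@", 1)[-1].rstrip(">").lower()
    body = msg.get_payload()
    urls = re.findall(r"https?://[^\s>\"']+", body)
    foreign = {urlparse(u).hostname for u in urls
               if not urlparse(u).hostname.endswith(sender_domain)}
    return sender_domain, foreign

raw = """\
From: surveys@example-bank.test
To: customer@example.net
Subject: Tell us what you think

Please fill in our survey: https://surveys.example-cloud.test/q/123
"""
print(foreign_link_domains(raw))
```

This is only a heuristic -- a From header is trivially forged, which is precisely why the cryptographic signing discussed below the fold actually matters -- but it at least surfaces the “cloud” domains one would otherwise have to spot by eye.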

The Identity Problem

With the possibly-authentic financial institution survey message situation, we can perhaps put our finger on the malaise in the use of mail by companies wanting our business. I already have a heavily-regulated relationship with the company concerned. They seemingly emphasise issues like security when I present myself to their Web sites. Why can they not at least identify themselves correctly when communicating with me?

Some banks only want electronic communications to take place within their hopefully-secure Web site mechanisms, offering “secure messaging” and similar things. Others also offer such things, either two-way or maybe only customer-to-company messaging, but then spew e-mails at customers anyway, perhaps under the direction of the sales and marketing branches of the organisation.

But if they really must send mails, why can they not leverage their “secure” assets to allow me to obtain identifying information about them, so that their mails can be cryptographically signed and so that I can install a certificate and verify their authenticity? After all, if you cannot trust a bank to do these things, which other common institutions can you trust? Such things have to start somewhere, and what better place to start than in the banking industry? These people are supposed to be good at keeping things under lock and key.

The Responsibility Problem

This actually returns us to the role of Mozilla. Being a major provider of software for accessing the Internet, the organisation maintains a definitive list of trusted parties through whom the identity of Web sites can be guaranteed (to various degrees) when one visits them with a browser. Mozilla’s own sites employ certificates so that people browsing them can have their privacy upheld, so it should hardly be inconceivable for the sources of Mozilla’s mail-based communications to do something similar.

Maybe S/MIME would be the easiest technology to adopt, given the similarities between its use of certificates and certificate authorities and the way such things are managed for Web sites. Certainly, there are challenges with message signing and things like mailing lists, this being a recurring project for GNU Mailman if I remember correctly (and was paying enough attention), but nothing solves a longstanding but largely underprioritised problem better than a concrete need and the will to get things done. Mozilla has certainly tried to do identity management in the past, recalling initiatives like Mozilla Persona, and the organisation is surely reasonably competent in that domain.

In the referenced article, Mozilla was described as facing an awkward technical problem: their messages were perceived as being delivered indiscriminately to an audience of which large portions may not have been receiving or taking receipt of the messages. This perception of indiscriminate, spam-like activity is apparently one of the metrics employed by blacklisting services. The proposed remedy for potential blacklisting involved the elimination of plain text e-mail from Mozilla’s repertoire and the deployment of HTML-only mail, with the latter employing links to images that would load upon the recipient opening the message. (Never mind that many mail programs prevent this.)

The rationale for this approach was that Mozilla would then know that people were getting the mail and that by pruning away those who didn’t reveal their receipt of the message, the organisation could then be more certain of not sending mail to large numbers of “inactive” recipients, thus placating the blacklisting services. Now, let us consider principle #4 of the Mozilla manifesto:

Individuals’ security and privacy on the Internet are fundamental and must not be treated as optional.

Given such a principle, why then is the focus on tracking users and violating their privacy, not on deploying a proper solution and just sending properly-signed mail? Is it because the mail is supposedly not part of the Web or something?

The Proprietary Service Problem

Mozilla can be regarded as having a Web-first organisational mentality which, given its origins, should not be too surprising. Although the Netscape browser was extended to include mail facilities and thus Navigator became Communicator, and although the original Mozilla browser attempted to preserve a range of capabilities not directly related to hypertext browsing, Firefox became the organisation’s focus and peripheral products such as Thunderbird have long struggled for their place in the organisation’s portfolio.

One might think that the decision-makers at Mozilla believe that mundane things like mail should be done through a Web site as webmail and that everyone might as well use an established big provider for their webmail needs. After all, the vision of the Web as a platform in its own right, once formulated as Netscape Constellation in more innocent times, can be used to justify pushing everything onto the Web.

The problem here is that as soon as almost everyone has been herded into proprietary service “holding pens”, expecting a free mail service while having their private communications mined for potential commercial value, things like standards compliance and interoperability suffer. Big webmail providers don’t need to care about small mail providers. Too bad if the big provider blacklists the smaller one: most people won’t even notice, and why don’t the users of the smaller provider “get with it” and use what everybody else is using, anyway?

If everyone ends up almost on the same server or cluster of servers or on one of a handful of such clusters, why should the big providers bother to do anything by the book any more? They can make all sorts of claims about it being more efficient to do things their own way. And then, mail is no longer a decentralised, democratic tool any more: its users end up being trapped in a potentially exploitative environment with their access to communications at risk of being taken away at a moment’s notice, should the provider be persuaded that some kind of wrong has been committed.

The Empowerment Problem

Ideally, everyone would be able to assert their own identity and be able to verify the identity of those with whom they communicate. With this comes the challenge in empowering users to manage their own identities in a way which is resistant to “identity theft”, impersonation, and accidental loss of credentials that could have a severe impact on a person’s interactions with necessary services and thus on their life in general.

Here, we see the failure of banks and other established, trusted organisations to make this happen. One might argue that certain interests, political and commercial, do not want individuals controlling their own identity or their own use of cryptographic technologies. Even when such technologies have been deployed so that people can be regarded as having signed for something, it usually happens via a normal secured Web connection with a button on a Web form, everything happening at arm’s length. Such signatures may not even be any kind of personal signature at all: they may just be some kind of transaction surrounded by assumptions that it really was “that person” because they logged in with their credentials and there are logs to “prove” it.

Leaving the safeguarding of cryptographic information to the average Internet user seems like a scary thing to do. People’s computers are not particularly secure thanks to the general neglect of security by the technology industry, nor are they particularly usable or understandable, especially when things that must be done right – like cryptography – are concerned. It also doesn’t help that when trying to figure out best practices for key management, it almost seems like every expert has their own advice, leaving the impression of a cacophony of voices, even for people with a particular interest in the topic and an above-average comprehension of the issues.

Most individuals in society might well struggle if left to figure out a technical solution all by themselves. But institutions exist that are capable of operating infrastructure with a certain level of robustness and resilience. And those institutions seem quite happy with the credentials I provide to identify myself with them, some of which being provided by bits of hardware they have issued to me.

So, it seems to me that maybe they could lead individuals towards some kind of solution whereupon such institutions could vouch for a person’s digital identity, provide that person with tools (possibly hardware) to manage it, and could help that person restore their identity in cases of loss or theft. This kind of thing is probably happening already, given that smartcard solutions have been around for a while and can be a component in such solutions, but here the difference would be that each of us would want help to manage our own identity, not merely retain and present a bank-issued identity for the benefit of the bank’s own activities.

The Real Problem

The article ends with a remark mentioning that “the email system is broken”. Given how much people complain about it, yet the mail still keeps getting through, it appears that the brokenness is not in the system as such but in the way it has been misused and undermined by those with the power to do something about it.

That the metric of being able to get “pull requests through to Linus Torvalds’s Gmail account” is mentioned as some kind of evidence perhaps shows that people’s conceptions of e-mail are themselves broken. One is left with an impression that electronic mail is like various other common resources that are systematically and deliberately neglected by vested interests so that they may eventually fail, leaving those vested interests to blatantly profit from the resulting situation while making remarks about the supposed weaknesses of those things they have wilfully destroyed.

Still, this is a topic that cannot be ignored forever, at least if we are to preserve things like genuinely open and democratic channels of communication whose functioning may depend on decent guarantees of people’s identities. Without a proper identity or trust infrastructure, we risk delegating every aspect of our online lives to unaccountable and potentially hostile entities. If it all ends up with everyone having to do their banking inside their Facebook account, it would be well for the likes of Mozilla to remember that at such a point there is no consolation to be had any more that at least everything is being done in a Web browser.

Friday, 03 November 2017

EU Ministers call for more Free Software in governmental infrastructure

polina's blog | 15:46, Friday, 03 November 2017

On 6 October, 32 European Ministers in charge of eGovernment policy signed Tallinn Declaration on eGovernment that calls for more collaboration, interoperable solutions and sharing of good practices throughout public administrations and across borders. Amongst many other things, the EU ministers recognised the need to make more use of Free Software solutions and Open Standards when (re)building governmental digital systems.

The Tallinn Declaration, led by the Estonian Presidency of the Council of the EU, was adopted on 6 October 2017. It is a ministerial declaration that marks a new political commitment at EU and EFTA (European Free Trade Association) level on priorities to ensure user-centric digital public services for both citizens and businesses across borders. While it has no legislative power, a ministerial declaration marks a political commitment to ensure the digital transformation of public administrations through a set of commonly agreed principles and actions.

The FSFE has previously submitted its input for the aforementioned declaration during the public consultation round, asking for greater inclusion of Free Software in delivering truly inclusive, trustworthy and interoperable digital services to all citizens and businesses across the EU.

The adopted Tallinn Declaration proves to be a forward-looking document that acknowledges the importance of Free Software in ensuring the principle of ‘interoperability by default’, and expresses the will of all signatory EU countries to:

make more use of open source solutions and/or open standards when (re)building ICT systems and solutions (among else, to avoid vendor lock-ins)[...]

Additionally, the signatories call upon the European Commission to:

consider strengthening the requirements for use of open source solutions and standards when (re)building of ICT systems and solutions takes place with EU funding, including by an appropriate open licence policy – by 2020

The last point is especially noteworthy, as it explicitly calls on the European Commission to use Free Software and Open Standards when building its ICT infrastructure with EU funds. This is in line with our “Public Money, Public Code” campaign, which demands that all publicly financed software developed for the public sector be made publicly available under a Free Software licence.

What’s next?

The Tallinn Declaration sets several deadlines for its implementation in the next few years, with annual presentations on the progress of its implementation in the respective EU and EFTA countries through the eGovernment Action Plan Steering Board. The signatories also called upon the Austrian Presidency of the Council of the EU to take stock of the implementation of the Tallinn Declaration in autumn 2018.

While it must be restated that a ministerial declaration imposes no legislative obligations on the signatory countries, it nevertheless expresses the political will of the EU and EFTA countries to digitise their governments in the most user-friendly and efficient way. The fact that it explicitly recognises the role of Free Software and Open Standards for a trustworthy, transparent and open eGovernment at a high level, along with a demand for strengthened reuse of ICT solutions based on Free Software in the EU public sector, is a valuable step towards establishing a real “Public Money, Public Code” reality across Europe.

Hence, it is always better to have a ‘good’ declaration than no declaration at all. Now it all depends on a proper implementation.

Thursday, 02 November 2017

Get ready for NoFlo 1.0

Henri Bergius | 00:00, Thursday, 02 November 2017

After six years of work, and a bunch of different projects done with NoFlo, we’re finally ready for the big 1.0. The two primary pull requests for the 1.0.0 cycle landed today, so it is time to talk about how to prepare for it.

tl;dr If your project runs with NoFlo 0.8 without deprecation warnings, you should be ready for NoFlo 1.0

ES6 first

The primary difference between NoFlo 0.8 and 1.0 is that now we’re shipping it as ES6 code utilizing features like classes and arrow functions.

Now that all modern browsers support ES6 out of the box, and Node.js 8 is the long-term supported release, it should be generally safe to use ES6 as-is.

If you need to support older browsers, Node.js versions, or maybe PhantomJS, it is of course possible to compile the NoFlo codebase into ES5 using Babel.
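If you go down that route, the transpilation can be wired into an existing webpack build. The following is a minimal sketch only — the paths and the Babel preset name are assumptions to adapt to your project:

```javascript
// webpack.config.js excerpt: transpile NoFlo's ES6 sources to ES5 with Babel.
module.exports = {
  module: {
    rules: [
      {
        test: /\.js$/,
        // NoFlo ships ES6, so include it in transpilation instead of
        // excluding all of node_modules as many configs do
        include: [/node_modules\/noflo/, /src/],
        use: {
          loader: 'babel-loader',
          options: { presets: ['env'] },
        },
      },
    ],
  },
};
```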

We recommend writing new components in ES6 instead of CoffeeScript.

Easier webpack builds

It has been possible to build NoFlo projects for browsers since 2013. Last year we switched to webpack as the module bundler.

However, at that stage there was still quite a lot of configuration magic happening inside grunt-noflo-browser. This turned out to be sub-optimal since it made integrating NoFlo into existing project build setups difficult.

Last week we extracted the difficult parts out of the Grunt plugin, and released the noflo-component-loader webpack loader. With this, you can generate a configured NoFlo component loader in any webpack build. See this example.

In addition to generating the component loader, your NoFlo browser project may also need two other loaders, depending on how your NoFlo graphs are built: json-loader for JSON graphs, and fbp-loader for graphs defined in the .fbp DSL.
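Put together, a browser build might wire these loaders up roughly as follows. This is a sketch only — file names and the noflo-component-loader options are illustrative assumptions; consult the loader's README for the authoritative configuration:

```javascript
// webpack.config.js sketch for a NoFlo browser project.
module.exports = {
  entry: './src/main.js',
  module: {
    rules: [
      {
        // noflo-component-loader generates a configured component loader
        test: /component-loader\.js$/,
        use: [{ loader: 'noflo-component-loader', options: { graph: null, debug: false } }],
      },
      // JSON graph definitions
      { test: /\.json$/, use: ['json-loader'] },
      // Graphs written in the .fbp DSL
      { test: /\.fbp$/, use: ['fbp-loader'] },
    ],
  },
};
```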

Removed APIs

There were several old NoFlo APIs that we marked as deprecated in NoFlo 0.8. In that series, usage of those APIs logged warnings. Now in 1.0 the deprecated APIs are completely removed, giving us a lighter, smaller codebase to maintain.

Here is a list of the primary API removals and the suggested migration strategy:

  • noflo.AsyncComponent class: use WirePattern or Process API instead
  • noflo.ArrayPort class: use InPort/OutPort with addressable: true instead
  • noflo.Port class: use InPort/OutPort instead
  • noflo.helpers.MapComponent function: use WirePattern or Process API instead
  • noflo.helpers.WirePattern legacy mode: now WirePattern always uses Process API internally
  • noflo.helpers.WirePattern synchronous mode: use async: true and callback
  • noflo.helpers.MultiError function: send errors via callback or error port
  • noflo.InPort process callback: use Process API
  • noflo.InPort handle callback: use Process API
  • noflo.InPort receive method: use Process API getX methods
  • noflo.InPort contains method: use Process API hasX methods
  • Subgraph EXPORTS mechanism: disambiguate with INPORT/OUTPORT
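As a rough illustration of the migration target, here is what a small component looks like with the Process API. The port names and the uppercasing logic are invented for this sketch, and it assumes the noflo package is installed:

```javascript
// Sketch of a NoFlo Process API component, the suggested replacement
// for AsyncComponent, MapComponent and the old Port classes.
const noflo = require('noflo');

exports.getComponent = () => {
  const c = new noflo.Component();
  c.inPorts.add('in', { datatype: 'string' });
  c.outPorts.add('out', { datatype: 'string' });
  // The process function replaces the old receive/handle callbacks
  return c.process((input, output) => {
    // hasX/getX replace the removed contains/receive methods
    if (!input.hasData('in')) { return; }
    const data = input.getData('in');
    // sendDone sends the result and signals completion for this packet
    output.sendDone({ out: data.toUpperCase() });
  });
};
```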

The easiest way to verify whether your project is compatible is to run it with NoFlo 0.8.

You can also make usage of deprecated APIs throw errors instead of just logging warnings by setting the NOFLO_FATAL_DEPRECATED environment variable. In browser applications you can set the same flag on the window object.
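For example, the flag can be set before NoFlo is loaded:

```javascript
// Opt in to hard failures on deprecated API usage.
// In Node.js, set the environment variable before loading NoFlo:
process.env.NOFLO_FATAL_DEPRECATED = 'true';

// In the browser, set the flag on window before the NoFlo bundle runs:
// window.NOFLO_FATAL_DEPRECATED = true;
```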

Scopes

Scopes are a flow isolation mechanism that was introduced in NoFlo 0.8. With scopes, you can run multiple simultaneous flows through a NoFlo network without a risk of data leaking from one scope to another.

The primary use case for scope isolation is building things like web API servers, where you want to isolate the processing of each HTTP request from each other safely, while reusing a single NoFlo graph.

Scope isolation is handled automatically for you when using Process API or WirePattern. If you want to manipulate scopes, the noflo-packets library provides components for this.

NoFlo in/outports can also be set as scoped: false to support getting out of scopes.

asCallback and async/await

noflo.asCallback provides an easy way to expose NoFlo graphs to normal JavaScript consumers. The produced function uses the standard Node.js callback mechanism, meaning that you can easily make it return promises with Node.js util.promisify or Bluebird. After this your NoFlo graph can be run via normal async/await.

Component libraries

There are hundreds of ready-made NoFlo components available on NPM. By now, most of these have been adapted to work with NoFlo 0.8.

Once 1.0 ships, we’ll try to be as quick as possible in updating all of them to run with it. In the meantime, it is possible to use npm shrinkwrap to force them to depend on NoFlo 1.0.

If you’re relying on a library that uses deprecated APIs, or hasn’t otherwise been updated yet, please file an issue in the GitHub repo of that library.

This pull request for noflo-gravatar is a great example of how to implement all the modernization recommendations below in an existing component library.

Recommendations for new projects

This post has mostly covered how to adapt existing NoFlo projects for 1.0. How about new projects? Here are some recommendations:

  • While NoFlo projects have traditionally been written in CoffeeScript, for new projects we recommend using ES6. In particular, follow the AirBnB ES6 guidelines
  • Use fbp-spec for test automation
  • Use NPM scripts instead of Grunt for building and testing
  • Make browser builds with webpack utilizing noflo-component-loader
  • Use Process API when writing components
  • If you expose any library functionality, provide an index file using noflo.asCallback for non-NoFlo consumers

The BIG IoT Node.js bridge is a recent project that follows these guidelines if you want to see an example in action.

There is also a project tutorial available on the NoFlo website.

Wednesday, 01 November 2017

Akademy 2018 site visit

TSDgeos' blog | 22:16, Wednesday, 01 November 2017

Last week I was part of the expedition by KDE (together with Kenny and Petra) to visit the local team that is helping us organize Akademy 2018 in Vienna.

I can say that I'm very happy :)

The accommodation that we're probably going to recommend is 15 minutes away by train from the airport, 20 minutes on foot from the Venue (10 on metro) so kind of convenient.

The Venue is modern and centrally located in Vienna (no more "if you miss the bus you're stranded")

Vienna itself seems to be also a nice city for late evening sight-seeing (or beers :D)

Hopefully "soon" we will have confirmation of the dates and we'll be able to publish them :)

Adoption of Free Software PDF readers in Italian Regional Public Administrations (fifth monitoring)

Tarin Gamberini | 08:56, Wednesday, 01 November 2017

The following monitoring shows that, in the last semester, eight Italian Regions have reduced the advertising of proprietary PDF readers on their websites, and that one Region has increased its support for Free Software PDF readers.

Continue reading →

Monday, 30 October 2017

Technoshamanism and Free Digital Territories in Benevento, Italy

agger's Free Software blog | 09:30, Monday, 30 October 2017

From October 23 to 29, an international seminar about technoshamanism and the concept of “Digital Land” or “free digital territories” was held in the autonomous ecological project Terra Terra near Reino, Benevento, Italy. The event was organized by Vincenzo Tozzi and announced on the Bricolabs mailing list. The seminar was held in the form of a “Pajelança Quilombólica Digital”, as it’s called in Brazilian Portuguese, a “digital shamanism” brainstorming on the possibilities of using free digital territories to connect free real-world territories.

Vincenzo Tozzi is the founder of the Baobáxia project which is run by the Brazilian network Rede Mocambos, and the main point of departure was the future of Baobáxia – sustainability, development, paths for the future.

Arriving in Napoli, I had the pleasure of meeting Hellekin and Natacha Roussel from Brussels who had received the call through the Bricolabs list. Vince had arranged that we could stay in the Mensa Occupata, a community-run squat in central Napoli that was occupied in 2012 and is the home of a hackerspace, a communal kitchen and a martial arts gym, the “Palestra Populare Vincenzo Leone”, where I was pleased to see that my own favourite capoeira is among the activities.

The actual seminar took place in much more rural settings outside Reino, in the countryside known locally as “terra delle streghe” or “land of the witches”. With respect to our desire to work with free territories and the inherent project of recuperating and learning from ancestral traditions, the area is interesting in the sense that the land is currently inhabited by the last generation of farmers to cultivate it with traditional methods supported by an oral tradition which has mostly been lost in recent decades. During the seminar, we had the opportunity to meet up with people from the local cooperative Lentamente, which is working to preserve and recuperate the traditional ways of growing crops and keeping animals without machines (hence the name “lentamente”, slowly) as well as trying to preserve as much as possible of the existing oral traditions.

During the seminar, we accommodated to the spirit of the territory and the settings by dividing the day into two parts: in the morning, we would go outside and work on the land until lunchtime, which would be around three o’clock. After dinner, we’d dedicate the evenings to more general discussions as well as to relaxing, often still covering important ground.

After lunch, hopefully properly awake and inspired by the fresh air and the beauty of the countryside, we would start looking at the technical side of things, delve into the code, discuss protocols and standards and explore possible pathways to the future. Among other things, we built some stairs and raised beds on a hillside leading up to the main buildings and picked olives for about twenty litres of oil.

As for the technical side of the encounter, we discussed the structure of the code, the backend  repositories and the offline synchronization process with newcomers to the project, reviewed various proposals for the technical future of the project and installed two new mucuas. In the process, we identified some bugs and fixed a couple of them.

An important aspect of the concept of “free digital territories” is that we are looking for and using new metaphors for software development. Middle-class or otherwise well-off people who are used to having the means to employ servants or hire e.g. a lawyer whenever they need one may find it easy to conceive of a computer as a “server” whose life is dedicated to serving its “clients”. For armies of office workers, having a computer pre-equipped with a “desktop” absolutely makes sense. But in the technoshamanism and quilombolic networks we’re not concerned with perpetuating the values and structures of capitalist society. We wish to provide software for free territories, and thus our metaphors are not based on the notion of clients and servers, but of digital land: a mucúa or node of the Baobáxia system is not a “server”, it’s digital land for the free networks to grow and share their culture.

Another important result was that the current offline synchronization and storage using git and git-annex can be generalized to other applications. Baobáxia currently uses a data format whose backend representation and synchronization is fixed by convention, but we could build other applications using the same protocol, a protocol for “eventually connected federated networks”. Other examples of applications that could use this technology for offline or eventually connected communications are wikis, blogs and calendars. One proposal is therefore to create an RFC for this communication, basically documenting and generalizing Baobáxia’s current protocol. The protocol, which at present includes the offline propagation of requests for material not present on a local node of the system, could also be generalized to allow arbitrary messages and commands, e.g. requesting the performance of a service known to be running in another community, to be stored offline and performed when the connection actually happens. This RFC (or RFCs) should be supplemented by proof-of-concept applications, which should not be very difficult to write.

This blog post is a quick summary of my personal impressions, and I think there are many more stories to be told about the threads we tried to connect those days in Benevento. All in all, the encounter was very fruitful and I was happy to meet new people and use these days to concentrate on the future of Baobáxia and related projects for free digital territories.

Sunday, 29 October 2017

Open Source Summit - Day 3

Inductive Bias | 08:35, Sunday, 29 October 2017

Open Source Summit Wednesday started with a keynote by members of the Banks family, telling a packed room how they approached raising a tech family. The first hurdle that Keila (the teenage daughter of the family) talked about was something I personally had never actually thought about: communication tools like Slack that are in widespread use come with an age restriction excluding minors. So for minors, trying to communicate with open source projects means entering illegality.

A bit more obvious was their advice to help raise kids' engagement with tech: try to find topics that they can relate to. What works fairly often are reverse engineering projects that explain how things actually work.

The Banks are working with a goal-based model where children get ten goals to pursue during the year, with regular quarterly reviews. An interesting twist though: eight of these ten goals are chosen by the children themselves, two are reserved for parents to help with guidance. As obvious as this may seem, having clear goals and being able to influence them yourself is something that I believe is applicable in the wider context of open source contributor and project mentoring as well as employee engagement.

The speakers also talked about embracing children's fear. Keila told the story of how she was afraid to talk in front of adult audiences - in particular at the keynote level. The advice that her father gave that did help her: you can trip on the stage, you can fall, all of that doesn't matter for as long as you can laugh at yourself. Also remember that no project is the perfect project - there's always something you can improve - and that's ok. This is fairly in line with the feedback given a day earlier during the Linux Kernel Panel, where people mentioned how today they would never accept the first patch they themselves had once written: be persistent, learn from the feedback you get and seek feedback early.

Last but not least, the speakers advised not to compare your family to anyone, not even to yourself. Everyone arrives at tech via a different route. It can be hard to get people from being averse to tech to embracing it - start with a tiny little bit of motivation, and from there on rely on self-motivation.

The family's current project turned business is supporting L.A. schools in helping children get a handle on tech.

The TAO of Hashicorp

In the second keynote, Hashimoto gave an overview of the Tao of Hashicorp - essentially the values and principles the company is built on. What I found interesting about the talk was that these values were written down very early in the process of building up Hashicorp, when the company didn't have much more than five employees; they comprise vision, roadmap and product design pieces, and have been applied to everyday decisions ever since.

The principles themselves cover the following points:
  • Workflows - not technologies. Essentially describing a UX first approach where tools are being mocked and used first before diving deeper into the architecture and coding. This goes as far as building a bash script as a mockup for a command line interface to see if it works well before diving into coding.
  • Simple, modular and composable. Meaning that tools built should have one clear purpose instead of piling features on top of each other in one product.
  • Communicating sequential processes. Meaning to have standalone tools with clear APIs.
  • Immutability.
  • Versioning through Codification. When having a question, the answer "just talk to X" doesn't scale as companies grow. There are several fixes to this problem. The one that Hashicorp decided to go for was to write knowledge down in code - instead of having a document detailing how startup works, have something people can execute.
  • Automate.
  • Resilient systems. Meaning to strive for systems that know their desired state and have means to go back to it.
  • Pragmatism. Meaning that the principles above shouldn't be applied blindly but adjusted to the problem at hand.

While the content itself differs, I find it interesting that Hashicorp decided to communicate in terms of their principles and values. This kind of setup reminds me quite a bit of the way the Amazon Leadership Principles are applied and used inside of Amazon.

Integrating OSS in industrial environments - by Siemens

The third keynote was given by Siemens, a 170-year-old German corporation with 350k employees, focussed on industrial appliances.

In their current projects they are using OSS in embedded projects related to power generation, rail automation (Debian), vehicle control, building automation (Yocto), medical imaging (xenomai on big machines).

Their reason for tapping into OSS more and more is to grow beyond their own capabilities.

A challenge in their applications relates to long-term stability, meaning supporting an appliance for 50 years and longer. Running their appliances unmodified for years is not feasible anymore today due to policies and corporate standards that require updates in the field.

The trouble they are dealing with today is the cost of software forks - both self-inflicted and supplier-caused forks. The amount of cost attached to these is one of the reasons for Siemens to think upstream-first, both internally as well as when choosing suppliers.

Another reason for this approach is to be found in trying to become part of the community, for three reasons: keeping talent, learning best practices from upstream instead of failing oneself, and better communication with suppliers through official open source channels.

One project Siemens is involved with at the moment is the so-called Civil Infrastructure Platform project.

Another huge topic within Siemens is software license compliance. Being a huge corporation they rely on Fossology for compliance checking.

Linus Torvalds Q&A

The last keynote of the day was an on stage interview with Linus Torvalds. The introduction to this kind of format was lovely: There's one thing Linus doesn't like: Being on stage and giving a pre-created talk. Giving his keynote in the form of an interview with questions not shared prior to the actual event meant that the interviewer would have to prep the actual content. :)

The first question asked was fairly technical: are RCs slowing down? The reason that Linus gave had a lot to do with proper release management. Typically the kernel is released on a time-based schedule, with one release every 2.5 months. So if some feature doesn't make it into a release, it can easily be integrated into the following one. What's different with the current release is Greg Kroah-Hartman having announced it would be a long-term support release, so suddenly devs are trying to get more features into it.

The second question related to a lack of new maintainers joining the community. The reasons Linus sees for this are mainly related to the fact that being a maintainer today is still fairly painful as a job: You need experience to quickly judge patches so the flow doesn't get overwhelming. On the other hand you need to have shown to the community that you are around 24/7, 365 days a year. What he wanted the audience to know is that despite occasional harsh words he loves maintainers, the project does want more maintainers. What's important to him isn't perfection - but having people that will stand up to their mistakes.

One fix to the heavy load mentioned earlier (which was also discussed during the kernel maintainers' panel a day earlier) revolved around the idea of having a group of maintainers responsible for any single sub-system in order to avoid volunteer burnout, allow for vacations to happen, share the load and ease hand-over.

Asked about kernel testing Linus admitted to having been sceptical about the subject years ago. He's a really big fan of random testing/ fuzzing in order to find bugs in code paths that are rarely if ever tested by developers.

Asked about what makes a successful project, his take was the ability to find commonalities that many potential contributors share, and the ability to find agreement, which seems easier for systems with less user visibility. An observation that reminded me of the bikeshedding discussions.

Also he mentioned that the problem you are trying to solve needs to be big enough to draw a large enough crowd. When it comes to measuring success though his insight was very valuable: Instead of focussing too much on outreach or growth, focus on deciding whether your project solves a problem you yourself have.

Asked about what makes a good software developer, Linus mentioned that the community over time has become much less homogeneous compared to when he started out in his white, male, geeky, beer-loving circles. The things he believes are important for developers are caring about what they do, and being able to invest in their skills for a long enough period to develop perfection (much like athletes train a long time to become really successful). Also, having fun goes a long way (though in his eyes this is no different when trying to identify a successful marketing person).

While Linus isn't particularly comfortable interacting with people face-to-face, e-mail for him is different. He does have side projects beside the kernel. Mainly for the reason of being able to deal with small problems, actually provide support to end-users, do bug triage. In Linux kernel land he can no longer do this - if things bubble up to his inbox, they are bound to be of the complex type, everything else likely was handled by maintainers already.

His reason for still being part of the Linux Kernel community: he likes the people, likes the technology, loves working on stuff that is meaningful, that people actually care about. On vacation he tends to check his mail three times a day so as not to lose track and be overwhelmed when he gets back to work. There are times when he goes offline entirely - however, typically after one week he is longing to be back.

Asked about what further plans he has, he mentioned that for the most part he doesn't plan ahead of time, spending most of his life reacting and being comfortable with this state of things.

Speaking of plans: It was mentioned that likely Linux 5.0 is to be released some time in summer 2018 - numbers here don't mean anything anyway.

Nobody puts Java in a container

Jörg Schad from Mesosphere gave an introduction to how container technologies like Docker really work and how that applies to software run in the JVM.

He started off by explaining the advantages of containers: Isolating what's running inside, supplying standard interfaces to deployed units, sort of the write once, run anywhere promise.

Compared to real VMs they are more light weight, however with the caveat of using the host kernel - meaning that crashing the kernel means crashing all container instances running on that host as well. In turn they are faster to spin up, need less memory and less storage.

So which properties do we need to look at when talking about having a JVM in a container? Resource restrictions (CPU, memory, device visibility, blkio etc.) are being controlled by cgroups. Process spaces for e.g. pid, net, ipc, mnt, users and hostnames are being controlled through libcontainer namespaces.

Looking at cgroups, there are two aspects that are very obviously interesting for JVM deployments: for memory one can set hard and soft limits. However, much in contrast to the JVM, there is no such thing as an OutOfMemoryError being thrown when resources are exhausted - the process simply gets killed. For the CPUs available there are two ways to configure limits: cpu-shares lets you give processes a relative priority weighting; cpusets lets you pin groups to specific CPUs.

The general advice is to avoid cpusets, as it removes one level of freedom from scheduling and often leads to less efficiency. However, it's a good tool to avoid CPU-bouncing, and to maximise cache usage.

When trying to figure out the caveats of running JVMs in containers, one needs to understand what the memory requirements for JVMs are: in addition to the well-known, configurable heap memory, each JVM needs a bit of native JRE memory, permgen/metaspace, JIT bytecode space, JNI and NIO space, as well as additional native space for threads. With permgen space turned into native metaspace, class loader leaks are capable of maxing out the memory of the entire machine - one good reason to lock JVMs into containers.

The caveats of putting JVMs into containers are related to JRE initialisation defaults being influenced by information like the number of cores available: it influences the number of JIT compilation threads, hotspot thresholds and limits.

One extreme example: when running ten JVM containers on a 32-core box this means that:
  • Each JVM believes it's alone on the machine, configuring itself for the maximally available CPU count.
  • Pre-Java-9, the JVM is not aware of cpusets, meaning it will think that it can use all 32 cores even if configured to use fewer than that.

Another caveat: JVMs typically need more resources on startup, leading to a need for overprovisioning just to get them started. Jörg promised that a blog post on how to deal with this question would appear on the DC/OS blog soon after the summit.

Also, for memory, Java 9 provides the option to respect memory limits set through cgroups. The (still experimental) option for that: -XX:+UseCGroupMemLimitForHeap

As a conclusion: containers don't hide the underlying hardware - which is both good and bad.

Goal - question - metric approach to community measurement

In his talk on applying the goal-question-metric approach to software development management, Jose Manrique Lopez de la Fuente explained how to successfully choose and use metrics in OSS projects.

He contrasted the OKR-based approach to goal setting with the goal-question-metric approach. In the latter, one first thinks about a goal to achieve (e.g. "We want a diverse community."), goes from there to questions that help understand the path to that goal better ("How many people from underrepresented groups do we have?"), and then to actual metrics to answer each question.
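The goal-question-metric chain can be sketched as a small data structure; the names and numbers below are made up purely to illustrate how a metric is tied back to a question and a goal:

```javascript
// Minimal goal-question-metric sketch with illustrative data.
const gqm = {
  goal: 'We want a diverse community.',
  question: 'How many people from underrepresented groups do we have?',
  // The metric answers the question with a number we can track per cycle
  metric: (contributors) =>
    contributors.filter((c) => c.underrepresented).length / contributors.length,
};

const contributors = [
  { name: 'a', underrepresented: true },
  { name: 'b', underrepresented: false },
  { name: 'c', underrepresented: false },
  { name: 'd', underrepresented: true },
];

console.log(gqm.metric(contributors)); // 0.5
```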

Key to applying this approach is a cycle that integrates planning, making changes, checking results and acting on them.

Goals, questions and metrics need to be in line with project goals, involve management and involve contributors. Metrics themselves are only useful for as long as they are linked to a certain goal.

What it needs to make this approach successful is a mature organisation that understands the metrics' value and refrains from gaming the system. People will need training on how to use the metrics, as well as transparency about them.

Projects dealing with applying more metrics and analytics to OSS projects include GrimoireLab and CHAOSS (Community Health Analytics Open Source Software).

There are a couple of interesting books - Managing Inner Source Projects and Evaluating OSS Projects - as well as the Grimoire training, all of which are freely available online.

Container orchestration - the state of play

In his talk, Michael Bright gave an overview of current container orchestration systems, going into some detail on Docker Swarm, Kubernetes and Apache Mesos. Technologies he left out are things like Nomad, Cattle, Fleet, ACS, ECS, GKE, AKS, as well as managed cloud offerings.

What became apparent from his talk was that the high-level architecture is fairly similar from tool to tool. Orchestration projects make sense where there are enough microservices that one can no longer treat them like pets, with manual intervention needed in case something goes wrong. Orchestrators take care of tasks like cluster management, microservice placement, traffic routing, monitoring, resource management, logging, secret management and rolling updates.

Often these systems build a cluster that apps can talk to, with masters managing communication (coordinated through some sort of distributed configuration management system, often a Raft-based consensus implementation to avoid split-brain situations) as well as workers that handle requests.

Going into details, Michael showed the huge uptake of Kubernetes compared to Docker Swarm and Apache Mesos, up to the point where even AWS joined the CNCF.

For Thursday I went to see Rich Bowen's keynote on the Apache Way at MesosCon. It was great to hear how people were interested in the greater context of what Apache provides to the Mesos project in terms of infrastructure and mentoring. There were also quite a few questions at the Apache booth at MesosCon on what that thing called The Apache Software Foundation actually is.

Hopefully the initiative started on the Apache Community development mailing list on getting more information out on how things are managed at Apache will help spread the word even further.

Overall, Open Source Summit, together with its sister events like KVM Forum and MesosCon as well as co-located events like the OpenWrt Summit, was a great chance to meet fellow open source developers and project leads and to learn about technologies and processes both familiar and new (in my case the QEMU on UEFI talk was clearly above my personal comfort zone; here it's great to be married to a spouse who can help fill the gaps after the conference is over). There was a fairly broad spectrum of talks, from Linux kernel internals to container orchestration, OSS licensing, community management, diversity topics, compliance and economics.

Saturday, 28 October 2017

Dutch coalition agreement: where’s the trust in Free Software?

André Ockers on Free Software » English | 07:14, Saturday, 28 October 2017

The new Dutch government, consisting of liberal-conservatives (VVD), christian democrats (CDA), democrats (D66) and orthodox protestants (CU), published the new coalition agreement: Vertrouwen in de toekomst (“Trust in the future”). I searched through all sections of this document, searching for the word “software”.

According to the new government, software is a matter for the justice department. Software is not mentioned in any other section, including the economic, education, labor policy, innovation policy and living environment sections.

So it’s the minister of justice who deals with software. Software is mentioned at two places in the justice section:

  • The making of a cybersecurity agenda, including the stimulation of companies to make software safer through software liability.
  • Buying hacksoftware for the Dutch intelligence service.

This means software is being seen as:

  • Unsafe, and the state will ensure it’s going to be safer.
  • A tool to further build the Dutch surveillance and control state.

There’s a world of possibilities to use (existing!) Free Software to strengthen the economy, provide the youth with real education and turn the Netherlands into a more innovative and livable part of Europe. Apparently this is not a priority. Where’s the trust in Free Software?

Wednesday, 25 October 2017

Autonomous by Annalee Newitz

Evaggelos Balaskas - System Engineer | 11:09, Wednesday, 25 October 2017



The year is 2144. A group of anti-patent scientists is working to reverse engineer drugs in free labs, so that (poor) people have access to them. Agents of the International Property Coalition are trying to find the lead pirate-scientist and stop any patent violation by any means necessary. In this era, without a franchise (citizenship), autonomous robots and people are slaves. And only a few of the bots are autonomous. Even then, can they be free? Can androids choose their own gender identity? Transhumanism and life-extension drugs are helping people live longer and better lives.

A science fiction novel without Digital Rights Management (DRM).

Tag(s): Autonomous, books

Open source summit - Day 2

Inductive Bias | 10:58, Wednesday, 25 October 2017

Day two of Open Source Summit started a bit slow for me, for lack of sleep. The first talk I went to was on "Developer tools for Kubernetes" by Michelle Noorali and Matt Butcher. Essentially the two of them showed two projects (Draft and Brigade) that help ease developing apps for Kubernetes clusters. Draft is the tool to use for developing long-running, daemon-like apps. Brigade has the goal of making event-driven app development easier - almost like bringing shell-script-like composability to Kubernetes-deployed pipelines.

Kubernetes in real life

In his talk on K8s in real life, Ian Crosby went over five customer cases. He started out by highlighting the promise of magic from K8s: jobs should automatically be re-scheduled to healthy nodes, traffic re-routed once a machine goes down. As a project it came out of Google as a re-implementation of their internal, 15-year-old system called Borg. Currently the governance of K8s lies with the Cloud Native Computing Foundation, part of the Linux Foundation.

So what are some of the use cases that Ian saw talking to customers:
  • "Can you help us setup a K8s cluster?" - asked by a customer with one monolithic application deployed twice a year. Clearly that is not a good fit for K8s. You will need a certain level of automation, continuous integration and continuous delivery for K8s to make any sense at all.
  • There were customers trying to get into K8s in order to be able to hire talent interested in that technology. That pretty much gets the problem the wrong way around. K8s also won't help with organisational problems where dev and ops teams aren't talking with each other.
  • The first question to ask when deploying K8s is whether to go for on-prem, hosted externally or a mix of both. One factor pulling heavily towards hosted solution is the level of time and training investment people are willing to make with K8s. Ian told the audience that he was able to migrate a complete startup to K8s within a short period of time by relying on a hosted solution resulting in a setup that requires just one ops person to maintain. In that particular instance the tech that remained on-prem were Elasticsearch and Kafka as services.
  • Another client (government related, huge security requirements) decided to go for on-prem. They had strict requirements not to connect their internal network to the public internet, resulting in people carrying downloaded software on USB sticks from one machine to the other. The obvious recommendation here was to relax the security requirements at least a little bit.
  • In a third use case the customer tried to establish a prod cluster, a staging cluster, a test cluster and one dev cluster per developer - pretty much turning into a maintenance nightmare. The solution was to go for a one-cluster architecture using shared resources, with namespaces to create virtual clusters, role based access control for security, network policies to restrict which services can talk to each other, and service level TLS to secure communications. Looking at CI this can be taken one level further even - spinning up clusters on the fly when they are needed for testing.
  • In another customer case Java apps were dying randomly - apparently because what was deployed was using the default settings. Lesson learnt: Learn how it works first, go to production after that.

Rebuilding trust through blockchains and open source

Having pretty much no background in blockchains - other than knowing that a thing like bitcoin exists - I decided to go to the introductory "Rebuilding trust through blockchains and open source" talk next. Marta started off by explaining how societies are built on top of trust. However today (potentially accelerated through tech) this trust in NGOs, governments and institutions is being eroded. Her solution to the problem is called Hyperledger, a trust protocol to build an enterprise grade distributed database based on a permissioned block chain with trust built in.

Marta went on detailing eight use cases:
  • Cross border payments: Currently, using SWIFT, these take days to complete, cost a lot of money and are complicated to do. The goal with rolling out block chains for this would be to make reconciliation real-time and to put the information on a shared ledger to make it auditable as well. At the moment ANZ, Wells Fargo, BNP Paribas and BNY Mellon are participating in this POC.
  • Healthcare records: The goal is to put pointers to medical data on a shared ledger so that procedures like blood testing are being done just once and can be trusted across institutions.
  • Interstate medical licensing: Here the goal is to make treatment re-imbursment easier, probably even allowing for handing out fixed-purpose budgets.
  • Ethical seafood movement: Here the goal is to put information on supply chains for seafood on a shared ledger to make tracking easier, auditable and cheaper. The same applies to other supply chains, think diamonds, coffee etc.
  • Real estate transactions: The goal is to keep track of land title records on a shared ledger for easier tracking, auditing and access. Same could be done for certifications (e.g. of academic titles etc.)
  • Last but not least there is a POC on how to use shared ledgers to track ownership of creative works in a distributed way and take the middleman distributing money to artists out of the loop.

Kernel developers panel discussion

For the panel discussion Jonathan Corbet invited five different Linux kernel hackers in different stages of their career, with different backgrounds to answer audience questions. The panel featured Vlastimil Babka, Arnd Bergmann, Thomas Gleixner, Narcisa Vasile, Laura Abbott.

The first question revolved around how people had gotten started with open source and kernel development and what advice they would have for newbies. The one piece of advice shared by everyone, besides scratching your own itch and finding something that interests you: be persistent. Don't give up.

Talking about release cycles and moving too fast or too slow, there was a comment on best practice for getting patches into the kernel that I found very valuable: don't start coding right away. A lot of waste could have been prevented if people had just shared their needs early on and asked questions instead of diving right into coding.

There was discussion on the meaning of long-term stability. General consensus seemed to be that long-term support really only includes security and stability fixes - no new features. Imagine adding current devices to a 20-year-old kernel that doesn't even support USB yet.

There was a lovely quote by Narcisa on the dangers and advantages of using C as a primary coding language: With great power come great bugs.

There was discussion on using "new-fangled" tools like GitHub instead of plain e-mail. Sure, e-mail is harder to get into as a new contributor. However, current maintainer processes heavily rely on it as a tool for communication. There was a joke about implementing their own tool for that, just like was done with git. One argument for using something less flexible that I found interesting: apparently it's hard to switch between subsystems because workflows differ so much, so agreeing on a common workflow would make that easier.
Asked what would happen if Linus was eaten by a shark when scuba diving, the answer was interesting: likely at first there would be a hiding game because nobody would want to take up his workload. Next a team of maintainers would likely develop, collaborating in a consensus-based model to keep up with things.

In terms of testing - that depends heavily on hardware being available to test on. Things like the kernel CI community help a lot with that.

    I closed the day going to Zaheda Bhorat's talk "Love what you do - everyday" on her journey in the open source world. It's a great motivation for people to start contributing to the open source community and become part of it - often changing what you do for life, in ways you would never have imagined before. Lots of love for The Apache Software Foundation in it.
Tuesday, 24 October 2017

    Security by Obscurity

    Planet FSFE on Iain R. Learmonth | 22:00, Tuesday, 24 October 2017

    Today this blog post turned up on Hacker News, titled “Obscurity is a Valid Security Layer”. It makes some excellent points on the distinction between good and bad obscurity and it gives an example of good obscurity with SSH.

    From the post:

    I configured my SSH daemon to listen on port 24 in addition to its regular port of 22 so I could see the difference in attempts to connect to each (the connections are usually password guessing attempts). My expected result is far fewer attempts to access SSH on port 24 than port 22, which I equate to less risk to my, or any, SSH daemon.

    I ran with this alternate port configuration for a single weekend, and received over eighteen thousand (18,000) connections to port 22, and five (5) to port 24.
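For anyone wanting to reproduce the quoted experiment, listening on an additional port is a two-line change in sshd_config (port 24 mirrors the post's setup; any otherwise unused port works):

```
# /etc/ssh/sshd_config - listen on the standard port plus one extra port
Port 22
Port 24
```

After reloading sshd, comparing the auth logs for the two ports gives the kind of numbers quoted above.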

    Those of you that know me in the outside world will have probably heard me talk about how it’s insane we have all these services running on the public Internet that don’t need to be there, just waiting to be attacked.

    I have previously given a talk at TechMeetup Aberdeen where I talk about my use of Tor’s Onion services to have services that only I should ever connect to be hidden from the general Internet.

    Onion services, especially the client authentication features, can also be useful for IoT dashboards and devices, allowing access from the Internet but via a secure and authenticated channel that is updated even when the IoT devices behind it have long been abandoned.

    If you’re interested to learn more about Onion services, you could watch Roger Dingledine’s talk from Def Con 25.

    Monday, 23 October 2017

    Public Money? Public Code!

    Norbert Tretkowski | 22:11, Monday, 23 October 2017


    Open Source Summit Prague 2017 - part 1

    Inductive Bias | 11:18, Monday, 23 October 2017

    Open Source Summit, formerly known as LinuxCon, this year took place in Prague. Drawing some 2000 attendees to the lovely Czech city, the conference focussed on all things Linux kernel, containers, community and governance. The first day started with three crowded keynotes. The first, by Neha Narkhede, was on Apache Kafka and the Rise of the Streaming Platform. The second, by Reuben Paul (11 years old), showed how hacking today really is just child's play: the hack itself might seem like toying around (getting into the protocol of children's toys in order to make them do things without using the app that was intended to control them). Taken into the bigger context of a world that is getting more and more interconnected - from regular laptops, over mobile devices, to cars and little sensors running your home - the lack of thought that goes into security when building systems today is both startling and worrying at the same time.

    The third keynote of the morning was given by Jono Bacon on what it takes to incentivise communities - be it open source communities, volunteer run organisations or corporations. According to his perspective there are four major factors that drive human actions:

    • People strive for acceptance. This can be exploited when building communities: acceptance is often displayed by some form of status. People are more likely to do what makes them proceed in their career, gain the next level on a leaderboard, or gain some form of real or artificial title.
    • Humans are a reciprocal species. Ever heard of the phrase "a favour given - a favour taken"? People who once received a favour from you are more likely to help in the long run.
    • People form habits through repetition - but it takes time to get into a habit: you need to make sure people repeat the behaviour you want them to show for at least two months until it becomes a habit that they themselves continue to drive without your help. If you are trying to roll out peer-review-based, pull-request-based working as a new model, it will take roughly two months for people to adopt it as a habit.
    • Humans have a fairly good bullshit radar. Try to remain authentic: instead of automated thank-yous, extend authentic (I would add: qualified) thank-you messages.

    When it comes to the process of incentivising people Jono proposed a three step model: From hook to reason to reward.

    Hook here means a trigger. What triggers the incentivising process? You can look at how people participate - number of pull requests, amount of documentation contributed, time spent giving talks at conferences. Those are all action based triggers. What's often more valuable is to look out for validation based triggers: pull requests submitted, reviewed and merged. He showed an example of a public hacker leaderboard that had its evaluation system published. While that's lovely in terms of transparency, IMHO it has two drawbacks: it makes it much easier to reward known, wanted contributions than contributions people might not have thought of when setting up the leaderboard. With that it also heavily influences which contributions will come in and might invite a "hack the leaderboard" kind of behaviour.

    When thinking about the reason there are two types of incentives. The reason could be invisible up front - Jono called these submarine rewards: without clear prior warning people get their reward for something that was wanted. Or the reason could be stated up front: "If you do that, then you'll get reward x." Which type to choose heavily depends on your organisation, the individual giving out the reward, as well as the individual receiving it. The deciding factor often is which of the two is more likely to be authentic to your organisation.

    In terms of the reward itself: there are extrinsic motivators - swag like stickers, t-shirts, give-aways. Those tend to be expensive, in particular if shipping them is needed. Something often overlooked in professional open source projects are intrinsic rewards: a thank you goes a long way. So does a blog post, or a social media mention. Invitations help. So do referrals to one's own network. Direct lines to key people help. Testimonials help.

    Overall, measurement is key. So is focusing on incentivising shared value.

    LiMux - the loss of a lighthouse

    In his talk, Matthias Kirschner gave an overview of LiMux, the Linux rollout project for the Munich administration: how it started, what went wrong during evaluation, and which way political forces were pulling.

    What I found very interesting about the talk were the questions that Matthias raised at the very end:

    • Do we suck at desktop? Are there too many depending apps?
    • Did we focus too much on the cost aspect?
    • Is the community supportive enough to people trying to monetise open source?
    • Do we harm migrations by volunteering - as in single people supporting a project without a budget, burning out in the process instead of setting up sustainable projects with a real budget? Instead of teaching the pros and cons of going for free software so people are in a good position to argue for a sustainable project budget?
    • Within administrations: Did we focus too much on the operating system instead of freeing the apps people are using on a day to day basis?
    • Did we focus too much on one star project instead of collecting and publicising many different free software based approaches?

    As a lesson from these events, the FSFE launched an initiative to drive developing code funded by public money under free licenses.

    Dude, Where's My Microservice

    In his talk Dude, Where's My Microservice?, Tomasz Janiszewski from Allegro gave an introduction to what projects like Marathon on Apache Mesos, Docker Swarm, Kubernetes or Nomad can do for your microservices architecture. While the examples given in the talk refer to specific technologies, the concepts are intended to be general purpose.

    Coming from a virtual-machine-based world where apps are tied to virtual machines which themselves are tied to physical machines, what projects like Apache Mesos try to do is abstract that exact machine mapping away. As a first result of this decision, how to communicate between microservices becomes a lot less obvious. This is where service discovery enters the stage.

    When running in a microservice environment one goal when assigning tasks to services is to avoid unhealthy targets. In terms of resource utilization instead of overprovisioning the goal is to use just the right amount of your resources in order to avoid wasting money on idle resources. Individual service overload is to be avoided.

    Looking at an example of three physical hosts running three services redundantly, how can assigning tasks to these instances be achieved?

    • One very simple solution is to go for a proxy based architecture. There will be a single point of change, there aren't any in-app dependencies to make this model work. You can implement fine-grained load balancing in your proxy. However this comes at the cost of having a single point of failure, one additional hop in the middle, and usually requires using a common protocol that the proxy understands.
    • Another approach would be to go for a DNS-based architecture: have one registry that holds information on where services are located, but talk to them directly instead of through a proxy. The advantages here: no additional hop once the name is resolved, no single point of failure - services can work with stale data - and it's protocol independent. However it does come with in-app dependencies: load balancing has to happen local to the app, and while you will want to cache name resolution results, every cache needs a cache invalidation strategy.

    In both solutions you will still need logic e.g. for de-registering services, and you will have to make sure to register your service only once it has successfully booted up.
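The DNS-based variant described above can be sketched in a few lines of Python. The registry contents, service name and TTL below are all invented for illustration; a real deployment would query an actual DNS server or service registry rather than an in-memory dict:

```python
import time

# Hypothetical in-memory registry standing in for DNS / a service registry.
REGISTRY = {"billing": ["10.0.0.5:8080", "10.0.0.6:8080"]}

CACHE = {}  # service name -> (instances, time the entry was fetched)
TTL = 30    # seconds before a cached entry must be re-resolved

def resolve(name, now=None):
    """Return instances for a service, re-resolving once the cache entry expires."""
    now = time.monotonic() if now is None else now
    hit = CACHE.get(name)
    if hit and now - hit[1] < TTL:
        return hit[0]               # fresh cache entry: no extra network hop
    instances = REGISTRY[name]      # the "DNS lookup"
    CACHE[name] = (instances, now)  # cache with invalidation via TTL
    return instances

def pick(name):
    """Client-side load balancing: naive round robin over resolved instances."""
    instances = resolve(name)
    instances.append(instances.pop(0))
    return instances[-1]

print(resolve("billing", now=0))
print(pick("billing"))
```

The TTL is exactly the cache invalidation strategy the talk warns about: too long and clients keep talking to dead instances, too short and every call pays the resolution hop again.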

    Enter the Service Mesh architecture, e.g. based on Linkerd or Envoy. The idea here is to have what Tomek called a sidecar added to each service that talks to the service mesh controller to take care of service discovery, health checking, routing, load balancing, authn/z, metrics and tracing. The service mesh controller holds information on which services are available, available load balancing algorithms and heuristics, retries, timeouts and circuit breaking, as well as deployments. As a result the service itself no longer has to take care of load balancing, circuit breaking, retry policies, or even tracing.

    After that high level overview of where microservice orchestration can take you, I took a break, following a good friend to the Introduction to SoC+FPGA talk. It's great to see Linux support for these systems - even if not quite as stable as it would be in an ideal world.

    Trolling != Enforcement

    The afternoon for me started with a very valuable talk by Shane Coughlan on how trolling doesn't equal enforcement, related to what was published on LWN earlier this year.

    Shane started off by explaining some of the history of open source licensing, from times when it was unclear whether documents like the GPL would hold up in court, to enforcement projects proving that these are indeed valid legal contracts that can be enforced in court. What he made clear was that those licenses are the basis for equal collaboration: they are a common set of rules that parties who don't know each other agree to adhere to. As a result, following the rules set forth in those licenses creates trust in the wider community and thus leads to more collaboration overall. On the flip side, breaking the rules erodes this very trust: it leads to less trust in the companies breaking the rules, and to less trust in open source if projects don't follow the rules as expected.

    However, when it comes to copyright enforcement, the case of Patrick McHardy raises the question whether all copyright enforcement is good for the wider community. To understand that question we need to look at the method Patrick McHardy employs: he gets in touch with companies over seemingly minor copyright infringements, asks for a cease and desist to be signed, and gets a small sum of money out of his target. In a second step the process repeats, except the sum extracted increases. Unfortunately, what this shows is that there is a viable business model that hasn't been tapped into yet. So while the activities of Patrick McHardy probably aren't so bad in and of themselves, they do set a precedent that others might follow, causing much more harm. Clearly there is no easy way out. Suggestions include establishing common norms for enforcement and ensuring that hostile actors are clearly unwelcome.
For companies, steps that can be taken include understanding the basics of the legal requirements, understanding community norms, and having processes and tooling to address both. As one step, there is a project called OpenChain publishing material on the topics of open source copyright, compliance and compliance self-certification.

    Kernel live patching

    Following Tomas Tomecek's talk on how to get from Dockerfiles to Ansible Containers I went to a talk that was given by Miroslav Benes from SuSE on Linux kernel live patching.

    The topic is interesting for a number of reasons: as early as 2008, MIT developed something called Ksplice, which uses jumps patched into functions for call redirection. The project was acquired by Oracle - and discontinued.

    In 2014 SuSE came up with something called kGraft for Linux live patching, based on immediate patching but lazy migration. At the same time Red Hat developed kpatch, based on an activeness check.

    In the case of kGraft the goal was to be able to apply limited scope fixes to the Linux kernel (e.g. for security, stability or corruption fixes), require only minimal changes to the source code, have no runtime cost impact, no interruption to applications while patching, and allow for full review of patch source code.

    The way it is implemented is fairly obvious - in hindsight: it's based on re-using the ftrace framework. kGraft uses the tracer for interception but then asks ftrace to return to a different address, namely the start of the patched function. So far the feature is available for x86 only.

    Now while patching a single function is easy, making changes that affect multiple functions gets trickier. This means a need for lazy migration that ensures function type safety based on a consistency model. In kGraft this is based on a per-thread flag that marks all tasks in the beginning and makes it possible to wait for them to be migrated.
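The per-task flag idea can be illustrated with a toy model. This is plain Python mimicking only the concept of lazy migration; it bears no resemblance to the actual kernel implementation:

```python
# Toy model of kGraft-style lazy migration: every task carries a flag.
# Tasks not yet migrated keep calling the old function; migrated tasks
# get the patched one. Purely illustrative, not kernel code.

def old_fn():
    return "old behaviour"

def new_fn():
    return "patched behaviour"

class Task:
    def __init__(self, name):
        self.name = name
        self.migrated = False  # per-task flag, cleared when the patch is applied

    def call(self):
        # The "tracer" decides per task which version of the function runs.
        return new_fn() if self.migrated else old_fn()

tasks = [Task("t1"), Task("t2")]

# Patch applied: nothing changes until each task reaches a safe point...
assert all(t.call() == "old behaviour" for t in tasks)

# ...then tasks migrate one by one (e.g. on return to user space).
tasks[0].migrated = True
assert tasks[0].call() == "patched behaviour"
assert tasks[1].call() == "old behaviour"
```

The consistency guarantee is that any single task only ever sees one version of the patched functions; the system as a whole runs mixed versions until the last task has migrated.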

    From 2014 onwards it took a year to get the ideas merged into mainline. What is available there is a mixture of both kGraft and kpatch.

    What are the limitations of the merged approach? There is no way right now to deal with data structure changes, in particular when thinking about spinlocks and mutexes. Consistency reasoning right now is done manually. Architectures other than X86 are still an open issue. Documentation and better testing are open tasks.

    A REUSE compliant Curl

    free software - Bits of Freedom | 08:47, Monday, 23 October 2017

    A REUSE compliant Curl

    The REUSE initiative is aiming to make free and open source software licenses computer readable. We do this by the introduction of our three REUSE best practices, all of which seek to make it possible for a computer program to read which licenses apply to a specific software package.

    In this post, I'll be introducing you to the steps I took to make cURL REUSE compliant. The work is based on a branch made about three weeks ago from the main curl Git repository. The intent here is to show the work involved in making a mid-sized software project compliant. You can read this post, and reference the Git repository (GitHub mirror) with its reuse-compliant branch to see what this looks like in practice.


    The reason we decided to work on the curl code base for this demonstration is that it's a reasonably homogenous code base, has a good size for this demonstration, and has an award winning maintainer!

    REUSE curl

    The first two practices in the REUSE practices, which are often the only ones relevant, introduce some clarity around the licenses applicable to each file in a repository. They ensure that for each file, regardless of what kind of file it is, there's a definite and unambiguous license statement: either in the file itself, or if that's not possible, in a standardised location where it's easy to find.

    If the practices are implemented, it's possible to create utilities which easily retrieve the license applicable to a particular source code file, assemble a list of all licenses used in a source code repository, create a list of all attributions which need to go into a binary distribution, or similar.

    Here are the practices, one by one:

    1. Provide the exact text of each license used

    The curl repository includes code licensed under a variety of licenses, including several BSD variants. The primary license of the software is a permissive license inspired by the MIT license. REUSE practices mandate that when a software includes multiple licenses, these are all included in a directory called LICENSES/.

    This practice intends to make sure each license is included in the source code, such that it can be referenced from the individual source code files. In the current curl repository, only the principal license for curl is included as a separate file. All other licenses are included in individual copyright headers.

    However, the intent of the REUSE practices here is to make sure a computer can understand what the license snippet is. Merely leaving the license information in the headers doesn't really suffice. We still need a way to identify which text constitutes the license.

    Adding them explicitly in the LICENSES/ folder would work for this, as we would then use the License-Filename tag (see later) to reference the explicit license relevant for a file. Another way, which is easier and cleaner in this case, is to copy over the relevant license statement to a DEP5/copyright file. The DEP5/copyright format is designed to be computer readable, and can include custom license texts, which we copy from the individual files.
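For illustration, a DEP5 stanza for one such ancillary license could look like the following (the file path, copyright holder and license text here are invented examples, not taken verbatim from the curl tree):

```
Files: lib/somefile.c
Copyright: 1996-2017, Example Contributor
License: ISC
 Permission to use, copy, modify, and/or distribute this software for any
 purpose with or without fee is hereby granted, provided that the above
 copyright notice and this permission notice appear in all copies.
```

Because the format pairs file patterns with license names and full license texts, a tool can read the whole file and know the license of every matched path without parsing the headers themselves.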

    So for curl, we will leave the curl license where it is, but add ancillary licenses in a computer readable way later on.

    REUSE practices give the filename for the license file as LICENSE, and not COPYING. This has been amended in the next release version of the REUSE practices to allow for both common variants, and so we opt here to not change the name of the file but to leave it as COPYING.

    2. Include a copyright notice and license in each file

    curl is exemplary in that almost all files have a consistent header, which looks like this:

    Copyright (C) 1998 - 2017, Daniel Stenberg, et al.

    This software is licensed as described in the file COPYING, which you should have received as part of this distribution. The terms are also available at

    You may opt to use, copy, modify, merge, publish, distribute and/or sell copies of the Software, and permit persons to whom the Software is furnished to do so, under the terms of the COPYING file.

    This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY KIND, either express or implied.

    The REUSE practices are explicit in that we should not change the header, but we can (and in this case should) add information to it: a reference to the license file, and an SPDX license identifier.

    SPDX license identifiers aren't new, but they're starting to make inroads into larger code bases (such as the Linux kernel) for one important reason: it's far easier to parse and understand what a well-known tag with a well-known content means, than to parse a license file.

    For the SPDX license identifier, curl is a special case. While the license is MIT inspired, it is not an exact copy of the MIT license. It's a free and open source software license, but we can not use the default MIT license identifier. Had the curl license not been included in the SPDX license list, we would have opted to not include an SPDX license identifier.

    However, the curl license has been explicitly included in the SPDX license list with the name curl. So we use this reference in our identifier:

    SPDX-License-Identifier: curl

    The REUSE practices also give that we should include a reference to the license file. The reference is already there, but it doesn't make use of the REUSE practices License-Filename tag, and as such, it is not computer readable. Adding the License-Filename tag with the name of the license file will ensure that tools supporting REUSE compliant source code can understand the reference to the license file without previously having encountered the format of the curl headers.

    License-Filename: COPYING

    This makes the license, and the reference to the license file, very clear. Making these two additions to the copyright headers resolves the situation for the majority of files included in the repository.

    It's worth noting that adding both is relevant. The License-Filename tag is more specific than the SPDX-License-Identifier and doesn't depend on an external repository to convey information, but including the SPDX-License-Identifier tag also means generic tools working with SPDX can parse the source code, regardless of whether they support the full REUSE practices or not.

    We fix up the headers with the following two sed scripts (improvements welcome!):

    /^# This software is distributed/,/^# KIND, either express or implied./c\
    # This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY\
    # KIND, either express or implied.\
    # License-Filename: COPYING\
    # SPDX-License-Identifier: curl


    /^ \* This software is distributed/,/^ \* KIND, either express or implied./c\
    @*@ This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY\
    @*@ KIND, either express or implied.\
    @*@ License-Filename: COPYING\
    @*@ SPDX-License-Identifier: curl

    We run these with:

    $ find . -type f -exec sed -i -f sed-hash.script {} \;
    $ find . -type f -exec sed -i -f sed-star.script {} \;
    $ find . -type f -exec sed -i 's/^@\*@/ */' {} \;

    (The trick with @*@ is to preserve proper formatting since sed has a tendency to want to strip spaces. Unfortunately for us, there are plenty of files with other types of comments, some starting with .\" * for man pages, others with # * and yet others with rem *, so some manual work is needed for this.)

    A good way to find problems is to do a git diff and look for lines removed. Since we never intend to remove any information, but only add to it, anytime a git diff flags a line as having been removed, there's a fair chance we've done something wrong.
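    That check is easy to script. A minimal sketch (assuming the sed edits have been applied in a git working tree):

```shell
# Print every line the working-tree diff would delete, filtering out the
# "---" file headers that git emits for each changed file. Since our edits
# should only add lines, any output here deserves manual review.
git diff | grep '^-' | grep -v '^---' || echo "no lines removed"
```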

    The curl repo includes 2783 files. Adding the SPDX license identifier and license filename to the headers leaves us with 1693 files remaining.

    A lot of the remaining files are test cases (files in tests/data) and documentation which can not include copyright headers.

    The REUSE practices offer two ways of resolving this. One is to add a supplementary file, named FILENAME.license, for each file which can not include a copyright header, and to put the standard copyright header in it. We don't want to do this, as it would add some 1693 additional files to the repository!

    The other way is to make a single file, in this case in the DEP5/copyright file format, which documents the license of each file which can not in itself include a license header.

    In a debian/copyright file, we can include license information such as:

    Files: tests/data/*  
    Copyright: Copyright (c) 1996 - 2017, Daniel Stenberg, <>  
    License: curl  
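    For the file to be valid under the machine-readable format, it also needs the standard DEP5 header paragraph at the top. A minimal sketch (the Upstream-Name value simply mirrors the project name):

```
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: curl
```

    The Files stanzas, like the one above, then follow after blank lines, and since "curl" is not a license keyword DEP5 knows by itself, a standalone License: curl paragraph carrying the full license text completes the file.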

    This allows us to get rid of a large chunk of files which can not have a header. This gets us down to about 289 files remaining, which do in one way or another require some manual processing.

    Many of them can include headers, but for various reasons this has been forgotten. This is the case for winbuild/ which was committed at the same time as winbuild/ I didn't look deeper at the commit history, but the latter includes a proper header; the former does not.

    For most files which can include a copyright header, we've added the SPDX-License-Identifier and License-Filename tags to the header, but we did not add the full curl header. It would be up to the curl developers to determine whether a file should have a curl header, and if so, what to include in the header in terms of copyright information.

    The case of Public Domain

    lib/md4.c is in the public domain, or in the absence of this under a very simplified BSD license. There are excellent reasons for why public domain doesn't have an SPDX license identifier, so this file is left untouched. Debian has opted, in their repository, to explicitly mark the file as in the public domain. We do the same. But as the public domain is a concept which differs by jurisdiction, it is up to the final recipient to make the judgement about whether the file can be used.

    Important lesson: do pick a license, even if it's a simple one, which does the same thing as dedicating a file to the public domain. Don't just slap "public domain" on a file and hope all is well.

    Why we need source-level information

    tests/python_dependencies/impacket/ and related files serve as a good example of why our principles ask for as much information as possible to be included in the source code files themselves. These files have the following copyright header:

    # Copyright (c) 2003-2016 CORE Security Technologies
    # This software is provided under a slightly modified version
    # of the Apache Software License. See the accompanying LICENSE file
    # for more information.

    Unfortunately of course, as often happens, these files have been copied without being accompanied by the corresponding LICENSE file. In fact, the curl repository contains no file at all called LICENSE, which leaves one to wonder: what does the "slightly modified" version look like?

    The answer can be found by looking up the original repository from where these files were taken. It's mainly an Apache 1.1 license with "Apache" replaced by "CORE Security Technologies".

    This is one situation where it is warranted to add this obviously missing license information to the repository, and update the header with a License-Filename indicating the right license file. We can not add an SPDX license identifier as there are modifications to the original license (even if they are minor).

    Do note that for consistency with the header, I add the license file from the original repository in the impacket directory, and not in the top level LICENSES/ directory which the REUSE practices recommend. The location of the licenses is a SHOULD requirement, however, so we can violate it here, as long as we follow the MUST requirement of actually including all license files.

    The original repository is somewhat inconsistent in its licensing, though. Two files are indicated in the LICENSE file as being licensed under a custom license, and not the modified Apache license.

    However, the individual files have headers which indicate the license is the modified Apache license, with a reference to the LICENSE file. This would ideally be clarified upstream, but since the LICENSE file includes both licenses and an explanation of the situation, referencing it from the copyright header at least ensures the recipient receives as much information as is available upstream.

    OpenEvidence licensed files

    curl contains a small number of files licensed by the OpenEvidence Project, using a license inspired by the OpenSSL license, but using different advertisement clauses. Specifically, in one of the files docs/examples/curlx.c (which, admittedly, is not included in the builds), the license advertisement clause is given as:

     * 6. Redistributions of any form whatsoever must retain the following
     *    acknowledgments:
     *    "This product includes software developed by the OpenEvidence Project
     *    for use in the OpenEvidence Toolkit (
     *    This product includes software developed by the OpenSSL Project
     *    for use in the OpenSSL Toolkit ("
     *    This product includes cryptographic software written by Eric Young
     *    (  This product includes software written by Tim
     *    Hudson ("

    While the license is very similar to the OpenSSL license, we can not use the OpenSSL SPDX identifier in this case, since the obligations are different. While the same can happen with the BSD licenses as well, SPDX deals with the two differently.

    In BSD-4-Clause, as one example, the text representation of the license included in SPDX has a variable for the attribution requirement:

    This product includes software developed by the <<var;name=organizationClause3;original=the organization;match=.+>>.  

    This should then, in theory, enable the license to be matched regardless of what organisation is specified in the license, and license scanners would know to expect an organisation name in this place. The same isn't true of the OpenSSL entry in SPDX which means that OpenSSL means precisely that; OpenSSL, without any variables or deviations in the text. So it would not match against the OpenEvidence license.

    For this file, we'll use, as Debian does, the convention of specifying the license "other" in the DEP5/copyright file and including the license text in full.

    Finding the copyright holder

    It's worth noting that some files in curl have no author or copyright information given. Such is the case of packages/vms/curl_release_note_start.txt and related files. We can infer from the Git log who the author might be, but the REUSE practices should not be interpreted as an archaeological expedition! You have to decide for yourself how far to go with this.

    From a project perspective, it might sometimes be useful to document this, but for a project like curl, whose list of contributors is upwards of 1600 people, untangling this becomes a project as a whole, and might not even be relevant.

    So the priority becomes identifying the right license for the files included. If some files are under a different license from the one covering most of the distribution, this would be important to note. But do solve one problem at a time. Digging through to identify every single copyright holder would be time consuming, prone to errors, and in most cases not answer a problem anyone actually has.

    3. Provide an inventory for included software, but only if you can generate it automatically.

    For curl, we will deal with this practice in an easy way: we simply won't do it. Ideally, we should ship, together with curl, or generated at build time, a bill of material of included software with their copyrights and licenses marked. There are some initiatives and tooling which would be helpful in this, but currently, providing a complete inventory would be more trouble than it's worth.

    If we did provide an inventory, the likelihood of it not being updated and maintained is significant. So since we can't do it automatically right now, we will not.

    Parsing a REUSE compliant repository

    Having passed through the REUSE practices, added the appropriate license headers and the DEP5/copyright file, where does this leave us? It leaves us in a state where finding the license of a source file included in curl is easy and can be automated.

    1. If the file includes the SPDX-License-Identifier tag, then the tag value corresponds to the license from the SPDX license list.
    2. If the file includes the License-Filename tag, then the tag value corresponds to the file containing the actual license text in the repository. This tag takes precedence over the SPDX license identifier.
    3. If there are no SPDX or License-Filename tags, look for a file with the same name with the suffix .license. If it exists and contains the tags in (1) and (2), parse them the same way as if they were included in the file itself.
    4. If there's a debian/copyright file, match the filename against it, and if found, extract the license indicated.
    5. If neither of the above works, the repository is not REUSE compliant.
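    This lookup can be sketched as a small shell function (illustrative only, not an official REUSE tool; it handles the simple grep-able cases and leaves step 4, matching against debian/copyright, as a stub):

```shell
#!/bin/sh
# Resolve the license of a source file following the precedence above:
# embedded tags first, then an adjacent FILENAME.license file, then a
# fallback pointing at the DEP5 debian/copyright file.

tag() {  # tag NAME FILE -> value of the first "NAME: value" line, if any
    grep "$1:" "$2" 2>/dev/null | head -n1 | sed "s/.*$1: *//"
}

license_of() {
    f="$1"
    spdx=$(tag SPDX-License-Identifier "$f")    # steps 1 and 2
    lfile=$(tag License-Filename "$f")
    if [ -z "$spdx" ] && [ -z "$lfile" ] && [ -f "$f.license" ]; then
        spdx=$(tag SPDX-License-Identifier "$f.license")   # step 3
        lfile=$(tag License-Filename "$f.license")
    fi
    if [ -n "$spdx" ] || [ -n "$lfile" ]; then
        echo "${spdx:-?} ${lfile:-?}"
    else
        echo "check debian/copyright"           # step 4, not implemented
    fi
}
```

    For a patched curl source file the function would print "curl COPYING"; for a file with no tags at all it falls through to the debian/copyright stub.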

    Where to next?

    This has been an example and demonstration of the work involved in making a repository REUSE compliant. We will continue to review the REUSE practices and release further guidance in the future, but more importantly: we hope others will pick up this work and include support for REUSE compliant repositories in tools which serve to understand software licensing.

    We're also looking forward to see more tools being built in general. One of our interns, Carmen, is currently working on a tool which would lead to the generation of a lint checker for REUSE compliance. That's one of many tools needed to help us on the way towards making copyrights and licenses computer readable. And computer understandable.

    Sunday, 22 October 2017

    Free Software Efforts (2017W42)

    Planet FSFE on Iain R. Learmonth | 22:00, Sunday, 22 October 2017

    Here’s my weekly report for week 42 of 2017. In this week I have replaced my spacebar, failed to replace a HDD and begun the process to replace my YubiKey.


    Earlier in the week I blogged about powerline-taskwarrior. There is a new upstream version available that includes the patches I had produced for Python 2 support, and I have filed #879225 to remind me to package this.

    The state of emscripten is still not great, and as I don’t have the time to chase this up and I certainly don’t have the time to fix it myself, I’ve converted the ITP for csdr to an RFP.

    As I no longer have the time to maintain, I have released this domain name and published the sources behind the service.

    Tor Project

    There was a request to remove the $ from family fingerprints on Atlas. These actually come from Onionoo and we have decided to fix this in Onionoo, but I did push a small fix for Atlas this week that makes sure Atlas doesn’t care whether there are $ prefixes or not.

    I requested that a Trac component be created for metrics-bot. I wrote a separate post about metrics-bot.

    I also attended the weekly metrics team meeting.


    I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

    I have not had any free software related expenses this week. The current funds I have available for equipment, travel and other free software expenses remains £60.52. I do not believe that any hardware I rely on is looking at imminent failure.

    I do not find it likely that I’ll be travelling to Cambridge for the miniDebConf as the train alone would be around £350 and hotel accommodation a further £600 (to include both me and Ana).

    Call for sessions at the FSFE assembly during 34C3

    English Planet – Dreierlei | 12:35, Sunday, 22 October 2017

    From December 27 to 30, there will be the 34th Chaos Communication Congress happening in Leipzig. As in recent years, the FSFE is happy to host an assembly that includes an information booth, self-organised sessions and a meeting point for all friends of Free Software to come together, share or simply relax. This is our call for participation.

    <figure class="wp-caption alignright" id="attachment_1936" style="width: 300px"><figcaption class="wp-caption-text">Free Software song sing-along at the FSFE-assembly during 33C3</figcaption></figure>

    With the CCC moving from Hamburg to Leipzig, there are not only logistical changes but also some organisational ones. We are still figuring out the details, but in the context of this call, one of the major changes will be the loss of freely available rooms to book for self-organised sessions. Instead, assemblies that match with each other are asked to cluster around one of several stages and use that as a common stage for self-organised sessions. To make the most of this situation, the FSFE will for the first time not join the Noisy Square this year but form a new neighbourhood with other freedom fighting NGOs – in particular with our friends from European Digital Rights. However, at this point in time, we do not yet have more information about the concrete or final arrangements.

    Call for session

    Regardless of those details that still need to be sorted out, this is our call for participation. Sessions can be inspiring talks, hands-on workshops, community/developer/strategy meetings or any other public, informative or collaborative activity.

    Topics can be anything that is about or related to Free Software. We welcome technical sessions, but we also encourage non-technical talks that address philosophical, economical or other aspects of Free Software. We also like sessions about related subjects that have a clear connection to Free Software, for example privacy, data protection, sustainability and similar topics. Finally, we welcome all backgrounds – from your private project to global community projects.

    You have something different in mind? For our friends, it is also possible to have informal meetings, announcements or other activities at our assembly. In this case, get in contact with me (OpenPGP) and we will figure it out.

    <figure class="wp-caption aligncenter" id="attachment_1940" style="width: 580px"><figcaption class="wp-caption-text">Crowded room during What makes a secure mobile messenger? by Hannes Hauswedell, one of our sessions during 33C3.</figcaption></figure>


    If you are interested in hosting a session at the FSFE assembly, please apply no later than

    * Sunday, November 19, 18:00 UTC *

    by sending an email to Erik Albers (OpenPGP) with the subject “Session at 34C3” and use the following template:

    Title: name of your session
    Description: description of your session
    Type: talk / discussion / meeting / workshop …
    Tags: put useful tags here
    Link: (if there is a helpful link)
    Expected number of participants: 20 or less / up to 40 / up to 100
    About yourself: some words about you/your biography

    You will be informed whether your session is accepted by Monday, November 27, at the latest.

    Good to know

    • If your session is accepted, we will happily take care of its proper organisation, publicity and everything else that needs to be done. You are welcome to simply come and give/host your session : )
      But this is neither a guarantee of a ticket nor will we take care of your ticket! Check the CCC announcements and get yourself a ticket in time!
    • You do not need to be a supporter of the FSFE to host a session. On the contrary, we welcome external guests.
    • Please share this call with your friends or your favorite mailing list.

    Related information:

    For your inspiration:

    The Catalan experience

    agger's Free Software blog | 09:28, Sunday, 22 October 2017

    Yesterday, I went to the protest in Barcelona against the incarceration of the leaders of Omnium and ANC, two important separatist movements.

    The Catalan question is complex, and there are lots of opinions on all sides. However, after speaking with a lot of people down here and witnessing a quite large demonstration – as shown in these photos – it seems clear that Catalan nationalism is *not* about excluding anyone the way Danish racism and British UKIP-ism are.

    After all, Catalonia has been an immigration destination for years, and people are used to living together with two or more languages, with family members from all over Spain. The all-too-familiar right-wing obsession with Islam and the “terror threat” is conspicuously absent from Catalan politics.

    And it’s not all about language or regional identity, as many Spanish-speaking people with origins in other parts of Spain wholeheartedly support Catalan independence.

    Rather, it’s about a rejection of and rebellion against the Spanish state which is seen as oppressive and riddled by remnants of Francoism. The slogans were radical: “Fora les forces d’ocupació”, “out with the occupation forces!” and “the streets will always be ours!”

    Indeed, for many of the young people it seems to be about getting rid of the Spanish state in order to implement a much more leftist policy on all levels of society – as one sign had it, “we’re seditious, we want to rebel and declare independence and have a revolution!” First independence; afterwards, people will take charge themselves, seems to be the sentiment.

    “The people rule and the government obeys!” is another slogan. The conservative forces behind Puigdemont (the current president) may have other ideas, but for now these are the people they have allied themselves with – people who actually believe in the direct rule of the people themselves. Looking at the people present in the demo, it’s clear that it’s a really broad section of society – old and young, but everybody very peaceful and friendly. There were so many people in the streets that it was getting to be too much; some elderly people had to be escorted out through the completely filled streets.

    The European Union may have decided that Catalans should forget all about independence for the sake of the peace of mind of everyone, but these people honestly don’t seem to give a damn.


    Saturday, 21 October 2017

    Using Gitea and/or Github to host blog comments

    Posts on Hannes Hauswedell's homepage | 16:00, Saturday, 21 October 2017

    After having moved from FSFE’s wordpress instance I thought long about whether I still want to have comments on the new blog, and how I would be able to do it with a statically generated site. I think I have found/created a pretty good solution, which I document below.

    How it was before

    To be honest, Wordpress was a spam nightmare! I ended up excluding all non-(FSFE-)members, because it was just too difficult to get right. On the other hand I value feedback to posts so what to do?

    This blog is now statically generated, so it is not designed for comments anyway. The most common solution seems to be Disqus, which seems to work well but is a privacy nightmare. It hosts your comments on their servers and integrates with all sorts of authentication services, of course sharing data with them et cetera. Not exposing my site visitors to tracking is very important to me, and I also don’t want to advertise using your Facebook login or some such nonsense.

    A good idea

    However, I had vague memories of having read this article a while ago so I read up on it again:

    The idea is to host your comments in a GitHub bug tracker and load them dynamically via Javascript and the GitHub-API. It integrates with GoHugo, the site-generator I am also using, so I thought I’d give it a try. Please read the linked article to get a clearer picture of the idea.

    Privacy improvements and other changes

    It all worked rather well, but there were a few things I was unhappy with so I changed the following:

    • In addition to GitHub, it now works with Gitea, a Free Software alternative, too; this includes rendering the comments’ Markdown client-side via ShowdownJS, because Gitea’s API is less powerful than GitHub’s.
    • The comments are not loaded automatically, but on-demand (so visitors don’t automatically make requests to other servers).
    • It is possible to have multiple instances of the script running, with different server types, target domains and/or repos.
    • Gracefully degrade and offer external links if no Javascript is available.
    • Some visual changes to fit with my custom theme.

    You can see the results below. I am quite happy with the solution as many of my previous readers from the FSFE can still use FSFE’s infrastructure to reply (in this case FSFE’s gitea instance). I expect many other visitors to have a GitHub account, so they don’t need to sign up for another service. I am aware this still relies on third parties and that GitHub may at some point commodify the use of its API, but right now it is much better than storing and sharing the data with a company whose business model this already is. And it is optional.
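    To illustrate the underlying mechanism: the comments for a post live in an ordinary issue, and the page script only needs to fetch that issue's comment list. A sketch against the GitHub API (the owner, repo and issue number are hypothetical placeholders; the real values come from the page's configuration):

```shell
# Build the API URL for the comments of the issue backing a blog post.
# OWNER, REPO and ISSUE are illustrative placeholders.
OWNER=example-user
REPO=example-blog
ISSUE=1
URL="https://api.github.com/repos/$OWNER/$REPO/issues/$ISSUE/comments"
echo "$URL"
# The in-page Javascript requests this URL and injects each comment's
# body into the page; the command-line equivalent would be:
#   curl -s -H 'Accept: application/vnd.github.v3+json' "$URL"
```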

    And of course the blog itself will remain entirely free of Javascript!

    The important files are available in this blog’s repo:

    What do you think? Feel free to adapt this for your blog and thanks to Don Williamson for the original implementation!

    Friday, 20 October 2017

    Presenting Baobáxia at the 2017 Plone conference

    agger's Free Software blog | 13:28, Friday, 20 October 2017

    Baobáxia at the 2017 Plone conference

    Today, I presented the Baobáxia project at the 2017 Plone Conference in Barcelona. Check out the slides for the talk for more information.

    Thursday, 19 October 2017

    KDE Edu sprint 2017 in Berlin

    TSDgeos' blog | 21:29, Thursday, 19 October 2017

    I had the privilege to attend the KDE Edu sprint in Berlin that happened from the 6th to the 9th of October.

    There I mostly worked on the KTuberling port to Android. If you have children (or maybe if you want to feel like one for a few minutes) and an Android device, please try it and give some constructive feedback ;)

    Though of course that's not all we did; we also had important discussions about "What is KDE Edu", about how we should be involved in the "Making KDE software the #1 choice for research and academia" KDE goal, and other organisational stuff, like whether we want a Phabricator rule to send email to the kdeedu mailing list for a set of projects, etc.

    Thanks go to all the people that donate to KDE e.V. that made sponsoring the trip possible, and to Endocode for hosting us and sponsoring all kind of interesting drinks and pizza on Sunday :)

    FOSDEM 2018 Real-Time Communications Call for Participation - fsfe | 08:33, Thursday, 19 October 2017

    FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2018 takes place 3-4 February 2018 in Brussels, Belgium.

    This email contains information about:

    • Real-Time communications dev-room and lounge,
    • speaking opportunities,
    • volunteering in the dev-room and lounge,
    • related events around FOSDEM, including the XMPP summit,
    • social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities),
    • the Planet aggregation sites for RTC blogs

    Call for participation - Real Time Communications (RTC)

    The Real-Time dev-room and Real-Time lounge is about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge.

    The dev-room is only on Sunday, 4 February 2018. The lounge will be present for both days.

    To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list.

    To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

    Speaking opportunities

    Note: if you used FOSDEM Pentabarf before, please use the same account/username

    Real-Time Communications dev-room: deadline 23:59 UTC on 30 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real Time Communications devroom". Link to talk submission.

    Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. You are encouraged to apply to more than one dev-room, and also to consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form.

    You can find the full list of dev-rooms on this page and apply for a lightning talk at

    Main track: the deadline for main track presentations is 23:59 UTC 3 November. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

    First-time speaking?

    FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

    Submission guidelines

    The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

    In the "Submission notes", please tell us about:

    • the purpose of your talk
    • any other talk applications (dev-rooms, lightning talks, main track)
    • availability constraints and special needs

    You can use HTML and links in your bio, abstract and description.

    If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

    We will be looking for relevance to the conference and dev-room themes: presentations aimed at developers of free and open source software about RTC-related topics.

    Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators based on the received proposals. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

    Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

    Volunteers needed

    To make the dev-room and lounge run successfully, we are looking for volunteers:

    • FOSDEM provides video recording equipment and live streaming, volunteers are needed to assist in this
    • organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 3 February
    • participation in the Real-Time lounge
    • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
    • circulating this Call for Participation (text version) to other mailing lists

    Related events - XMPP and RTC summits

    The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 2 February 2018. XMPP Summit web site - please join the mailing list for details.

    Social events and dinners

    The traditional FOSDEM beer night occurs on Friday, 2 February.

    On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

    Spread the word and discuss

    If you know of any mailing lists where this CfP would be relevant, please forward this email (text version). If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

    If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

    Projects        Planet site         Admin contact
    All projects    Planet Free-RTC     contact
    XMPP            Planet Jabber       contact
    SIP             Planet SIP          contact
    SIP (Español)   Planet SIP-es       contact

    Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.


    For any private queries, contact us directly using the address, and for any other queries please ask on the Free-RTC mailing list.

    The dev-room administration team:

    Tuesday, 17 October 2017

    How to feel happy using your Apple MacBook (again)

    Blog – Think. Innovation. | 19:44, Tuesday, 17 October 2017

    In short: wipe Mac OS and install Elementary OS. In some more words: read on.

    If you are thinking: “I am feeling happy using my MacBook, what is he talking about?”, then open your calendar, make an entry for two years from now to come back to this post. See you then!

    Yes, in time your MacBook gets slow, right? Using it just does not feel as swift and smooth anymore as it once did. Everything you do is becoming a bit sluggish. Up to a point where it even becomes almost unusable. High time to go buy that new model!

    But wait a minute. Your computer does not become slow at all. In fact, it is exactly as fast as it was when you bought it. Unless you have one of those MacBooks that is still upgradeable (like I do) and you upgraded the RAM and/or swapped the HDD for an SSD. Then it is even faster! Let me say it again: your MacBook does not become slow!

    Then why does it feel like it is? That is because Apple is making you install updates and new versions again and again that take up ever more resources from your laptop. And you do not have a choice. Of course in the name of security, a better user experience, more features or a nicer look. But that is just what Apple tells you: in fact the company has every interest to make you feel that your ‘old’ laptop is slow, unfashionable, too heavy and in time even unusable.

    And it is not just Apple that is doing this (it is always easy to pick on the famous kid); the same happens with laptops that run Windows, with tablet computers and smartphones, regardless of whether they are made by Apple, Samsung or pretty much any other company (FairPhone is hopefully here to stay as an enlightening counter-example).

    It makes perfect sense. At least, for them. Their primary responsibility is to grow profits, or more accurately, to infinitely grow shareholder value. And they use any means at their disposal to do so, as long as it has a ‘positive business case’. At one time that sounded pretty good, and we benefited from this model a long time, but at the moment it simply is not good enough anymore.

    But enough with the rant already! Back to the question, how to feel happy about using your MacBook again?

    It started as an experiment a few months ago. I was growing a bit bored with using my ThinkPenguin Korora laptop. A fine laptop, do not get me wrong, but not a spectacular piece of hardware. The keyboard is a bit spongy, the screen is so-so and given its all plastic dark gray and black casing, not all that eye-catching. And to be honest, even though I have deep respect for what the people at ThinkPenguin have accomplished, it did not provoke any responses from people like my FairPhone does, which is a nice conversation starter about things that matter.

    So I was considering maybe going for that pretty nice looking Slimbook KDE laptop. But I found the price a bit steep, and buying a new laptop while I still have a perfectly working one, is not that environmentally conscious. In fact, I have several laptops lying around which I do not use (shame on me; want one?).

    And then I saw my annoyingly ‘slow’ 2012 Apple MacBook Pro lying around and thought: would that run GNU/Linux in any acceptable way? Oh, for anyone not familiar with GNU/Linux: it is a so-called “Operating System” like Mac OS and Windows. It comes in over 500 versions (called distributions) instead of the handful that Apple and Microsoft produce. And you can find a GNU/Linux version for pretty much any computer, no matter how old it is. It is also used for servers (90% of the internet runs on it) and is even used in industrial machines.

    How is the GNU/Linux company making this possible? Well, it is not, because there is no such company. GNU/Linux is so-called Open Source, which gives anyone the freedom to use, study, change and share the software. And so many people (and companies) do, resulting in a huge ecosystem that creates value for everyone involved. Needless to say, Open Source is by far the superior way to innovate and its principles are vital to the survival of humankind and this planet (perhaps you guessed it, I am a bit of a fan of Open Source).

    So, I talked my wife out of continuing to use ‘my’ MacBook (it suddenly became ‘mine’ again) and convinced her that another laptop was just as good and I started the experiment (in fact, by the time I am writing this she is using ‘her’ 2009 MacBook White running GNU/Linux as well).

    Coincidentally (or?) I stumbled on a fairly new distribution (version) of GNU/Linux called Elementary OS. The team of Elementary OS intends to create an elegant, stylish, yet superfast version that is perfectly suited for people familiar with Mac OS. What a coincidence indeed!

    To keep things simple, I swapped the hard disk (HDD) with a new solid state drive (SSD), so it would be easy to go back to Mac OS if the experiment failed. And as a side benefit, the laptop would even be faster than it was.

    Installing Elementary OS was super simple and went very smoothly. Since Elementary OS is so new, I expected it to be somewhat unstable, buggy and not actually suited for everyday use. But in fact I have been using my 2012 Apple MacBook Pro running Elementary OS every day for 2 months now, mostly for work, and the system is very stable. And I love it! There are some small nuisances, but I found that these are smoothed out as the Elementary OS team keeps publishing updates, which can be easily installed. And which do not result in a slower laptop! 😉
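    For anyone wanting to repeat the experiment: the usual way to install is from a USB stick prepared on another machine. A minimal sketch on another GNU/Linux system (the image file name and /dev/sdX are placeholders for whatever you downloaded and whichever device your stick appears as; double-check with lsblk first, because dd overwrites the target without asking):

    ```shell
    # Verify the download against the checksum published on the
    # Elementary OS website before writing it anywhere.
    ISO=elementaryos.iso        # placeholder file name
    sha256sum "$ISO"

    # Identify the USB stick (e.g. /dev/sdb) -- NOT your internal drive.
    lsblk

    # Write the image to the stick; /dev/sdX is a placeholder.
    sudo dd if="$ISO" of=/dev/sdX bs=4M status=progress conv=fsync
    ```

    Then boot the MacBook with the stick inserted while holding the Option/Alt key and pick the USB entry.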

    The transition away from Mac OS and the familiar programs (sorry, I mean apps) you have there is perhaps an entirely different story. As I have been using Linux Mint for quite some time, I made that transition years ago. At first it was not easy, but in the end it is well worth it. Perhaps material for another blogpost? Anyone?

    So, if you also want to feel happy about using your MacBook again, give Elementary OS a try! Hopefully you are fortunate enough to also still be able to swap the drives, that makes the experiment a lot easier. Otherwise, you should be a bit more careful not to mess up your Mac OS partition if you ever decide to go back.

    I created a wiki page on the technical details of running Elementary OS on a 2012 MacBook Pro, you can start from there. Or if you have another model MacBook then the Arch wiki and Ubuntu forums are a great way to start. And of course you can drop me a line as well.

    You can also run the beautiful Elementary OS when you are still feeling happy about your MacBook, but would like to add that warm fuzzy feeling to it that comes with using a great piece of free (as in freedom) software that was made with love by an awesome world-wide community! Sorry you had to wait 2 years for this great piece of advice.

    Oh, not to forget: once you are running GNU/Linux on your laptop, putting a bunch of stickers on it is mandatory!

    – Diderik

    P.S.: If you like using Elementary OS, then consider donating. That keeps up the good work, as the volunteers making this incredible software also need food on the table. I donated EUR 25 recently. Perhaps that should become a standard yearly recurring thing?

    Planet FSFE (en): RSS 2.0 | Atom | FOAF |
