Planet Fellowship (en)

Wednesday, 29 June 2016

ConfSL 2016, the Italian Free Software Conference

Alessandro's blog | 07:27, Wednesday, 29 June 2016

Last weekend, from Friday June 24th to Sunday June 26th, the Italian FS supporters met in Palermo; we also had John Sullivan from the FSF as an invited speaker.

A lot happened during the three days, with plenary sessions in the mornings and separate tracks in the afternoons. A sad note, however, is the very limited number of attendees. It looks like our communication was not effective, and/or there is something wrong in the approach we take for this conference, now at its 9th edition.

The topic tracks

I attended the track about FS in schools on Friday and the LibreItalia track on Saturday.

Activity in education is vibrant, and the track covered a number of success stories that can serve as inspiration for others to follow. We went from free textbooks (Matematica C3, Claudio Carboncini) to a free localized distribution with special support for disabilities (So.di.Linux, Francesco Fusillo, Lucia Ferlino, Giovanni Caruso). We also had presentations from university and high-school teachers who benefit from using FS in their activities.

LibreItalia is an Italian association rooted in the Document Foundation; the founding and most active members belong to both worlds. The track was led by Sonia Montegiove and Italo Vignoli. After the first presentation, about how LibreOffice is being successfully introduced in schools, we switched to English so the FSF could be part of the discussion, which turned to policy and marketing for FS solutions. A very interesting session, I dare say, helped by the number of relevant FS people in the room, who brought different points of view to the discussion. One of the most important things I learnt is that we really need a certification program for our technologies. The Document Foundation has one, and it’s a huge success. I’ve always been against certification as a perverse idea, and I only have my qualification from public education; but I must admit the world works in the exact opposite direction: we need to stamp our hackers with certification marks so they are well accepted by companies and the public administration.

Saturday morning: keynotes

The plenary session was centered on the LibreDifesa project, a migration of 150,000 computers within the Italian army to LibreOffice. Sonia will update us about this migration at the FSFE summit this September in Berlin.

Gen. Sileo, together with Sonia Montegiove, presented how the project was born and how it is being developed. They trained the trainers first, and this multi-layer structure works very well in accompanying people during the migration from different tools to LibreOffice. The teaching sessions for end users are 3 hours for Writer, 3 hours for Calc and 3 hours for Impress; they prove very useful because users learn how to properly set up a document, whereas most self-taught people ignore the basics of layout and structure, even if they feel proficient with the tool they routinely use. Overall, the project is saving 28 million in public money.

During the Q&A session, Sonia also discussed the failed migrations and back-migrations that hit the press recently. Most of them are simply failures (i.e., they back-migrate but do not achieve the expected results), driven by powerful marketing and lobbying forces that shout about leaving LibreOffice but hide the real story and the real figures.

Later Corrado Tiralongo presented his distribution aimed at professionals, which is meant to be pretty light and includes a few tools that are mandatory for some activities in Italy, like digital signatures and such local stuff.

Finally, Simone Aliprandi stressed the importance of standards: without open standards we can’t have free software, but the pressure towards freeing the standards is very low, and we see problems every day. Simone suggests we set up a Free Standards conference, as important as the Free Software conference.

Sunday Morning: FSF and FSFE

The plenary of Sunday morning was John Sullivan and me. John presented FSF and outlined how they are more concerned with users’ freedom than with software. Somebody suggested they could change their name, but we all agree FUF won’t work.

John described the FSF certification mark “Respects your freedom” for devices that are designed to work with Free Software, and the “h-node” web site that lists devices that work well with Free Software drivers. He admits the RYF logo is not the best, as the bell is not recognized by non-US people as a symbol of freedom.

I described what the FSFE is and how it is organized. I explained how we mainly work with decision makers and what we are doing with the European Parliament and Commission. I opened our finance page for full disclosure, and listed the names and roles of our employees. Later on, a few people reported that they hadn’t known exactly what the FSFE does, and they were happy about our involvement at the upper layers of politics.

Finally, I made a suggestion for a complete change in how confSL is organized and promoted, mainly expanding on what we were discussing on Saturday at dinner time. This part was set up as a “private” discussion, a sort of unrecorded brainstorming, but in the end it was only my brain-dump, as the Q&A part had to be cut because of the usual delays that piled up.

confSL 2017

We hope to be able to set up the next confSL in Pavia or Milano, with a different design and a special focus on companies and teachers. It will also likely be moved earlier than June.

Tuesday, 28 June 2016

Fedora 24 released

egnun's blog » FreeSoftware | 04:26, Tuesday, 28 June 2016

As the Fedora Magazine reports, the new version 24 of Fedora was released on June 21st. (They should have waited a few more days^^)

To me it seems like quite a normal distro upgrade.
They updated a bunch of software.

Fedora 24 now has the GNOME 3.20 desktop, Firefox 45 and LibreOffice 5.1.

Notable changes for programmers are the updates of Python to version 3.5 and of the glibc to version 2.23.
GCC is now provided in version 6.
That means that the default mode for C++ is now -std=gnu++14 instead of -std=gnu++98.
Up to this point nothing really special. But wait! There is more!

Fedora has now added an Astronomy spin!
How cool is that?
Now you can look up to the sky and search for the stars on your computer!
And everything with 100%* Free Software!!

But you don’t just have tools like “KStars”, which can accurately simulate the night sky “from any location on Earth, at any date and time”; you also have software like the “INDI Library”, which “is a cross-platform software designed for automation and control of astronomical instruments.”

As they say on their spins page: “Fedora Astronomy provides a complete set of software, from the observation planning to the final results.”
If you haven’t already downloaded the new Fedora then you should definitely go and check it out.
I still have a few computers left that need to be upgraded, so I am going to have some fun with the new Fedora in the next few days. ;)

If you like Fedora and want to participate, you should check out these sites:


*Well, almost 100%, because Fedora still includes some tiny bits of proprietary firmware.
That’s one of the reasons why it isn’t officially endorsed by the Free Software Foundation.
But you can use Fedora nonetheless, because you probably won’t have any hardware that needs to rely on these blobs.

Monday, 27 June 2016

“Anatomy of a bug fix”

egnun's blog » FreeSoftware | 20:23, Monday, 27 June 2016
There was (or still is) a bug with drawing tablets and Krita under Windows.

This blog post shows how they tried to fix this bug and how they came to the conclusion that it was not a problem caused by Krita, but by the proprietary driver, which has existed since Windows 3.0 (!).

Luckily there are Free Software projects and operating systems that let us fix bugs.


DORS/CLUC - an amazing conference

Matthias Kirschner's Web log - fsfe | 04:57, Monday, 27 June 2016

On 11 May I gave the keynote at DORS/CLUC 2016, titled "The long way to empower people to control technology". It was a very well organised conference, and I can only recommend that everybody go there in the coming years.

In case you are interested in the talk, feel free to watch the recordings (English, 37 minutes), and give me feedback afterwards.

Again thanks a lot to Svebor Prstačić for inviting me, and Lucijana Dujić Rastić, Krešimir Kroflin, Branko Zečević, Jasna Benčić, Nikola Henezzi, Valerija Olić, Vanja Stojanović, Goran Prodanović, Jure Rastić, Marko Kos, Aleksandar Dukovski, Milan Blažević, Ivan Guštin, Ivana Isadora Devcic, as well as the many other volunteers who made this conference possible, and who thereby enabled others to learn more about software freedom.

Tuesday, 21 June 2016

EU consultation: Which Free Software program shall receive a European Union’s financed audit?

English Planet – Dreierlei | 16:00, Tuesday, 21 June 2016

<figure class="wp-caption alignright" id="attachment_1482" style="width: 300px">If you like to know your software, look into the code<figcaption class="wp-caption-text">If you like to know your software, look into the code</figcaption></figure>

tl;dr: The European Union is running a public survey about which Free Software program should receive a financed security audit. Take part!

In 2014, in reaction to the so-called “Heartbleed” bug in OpenSSL, the Parliamentarians Max Andersson and Julia Reda initiated the pilot project “Governance and quality of software code – Auditing of free and open source software”, which is now managed and realised by the European Commission’s Directorate General for Informatics (DIGIT) as the “Free and Open Source Software Auditing” (EU-FOSSA) project. FOSSA aims to improve the security of those Free Software programs that are in use by the European Commission and the Parliament. To achieve this goal, the FOSSA project has three parts:

  • Comparative study of the European institutions’ and free and open source communities’ software development practices and a feasibility study on how to perform a code review of free and open source projects for European institutions.
  • Definition of a unified methodology to obtain complete inventory of free and open source software and technical specifications used within the European Parliament and the European Commission and the actual collection of data.
  • Sample code review of selected free and open source software and/or library, particularly targeting critical software, whose exploitation could lead to a severe disruption of public or European services and/or to unauthorized access.

In addition, FOSSA states that the “project will help improving the security of open source software in use in the European institutions. Equally important, the EU-FOSSA project is about contributing back to the open source communities.” Initially, one million dollars were assigned to FOSSA.

<figure class="wp-caption alignright" id="attachment_1483" style="width: 169px">Choose your favorite<figcaption class="wp-caption-text">Choose your favorite</figcaption></figure>After the first publication of a comparative study of the development methods and security concerns of 14 open source communities and of 14 software projects in the European Commission and European Parliament, it is now time for the first code review. On this occasion, the EU has started a public survey about which software should be the first to be audited by FOSSA. There is a choice among 18 programs, but it is also possible to propose another one.

Such an audit can be expected to give important prominence to the selected Free Software program among existing and new users. Additionally, an audit involves a lot of work; if it is done externally, existing developers can better spend their time improving and further developing the program itself. Finally, every active participant in the survey demonstrates to the Parliament the importance and public reception of FOSSA. More participation might also help in the final evaluation, so that this pilot project hopefully becomes institutionalised. Hence, please take part! (It only takes 1-4 minutes, and no account is needed.)

This is a translation of my (German) article.

Monday, 20 June 2016

WebRTC and communications projects in GSoC 2016 - fsfe | 15:02, Monday, 20 June 2016

This year a significant number of students are working on RTC-related projects as part of Google Summer of Code, under the umbrella of the Debian Project. You may have already encountered some of them blogging on Planet or participating in mailing lists and IRC.

WebRTC plugins for popular CMS and web frameworks

There is already a range of pseudo-WebRTC plugins available for CMS and blogging platforms like WordPress. Unfortunately, many of them either do not release all their source code, lock users into their own servers, or require users to download potentially untrustworthy browser plugins (also without any source code) to use them.

Mesut is making plugins for genuinely free WebRTC with open standards like SIP. He has recently created the WPCall plugin for WordPress, based on the highly successful DruCall plugin for WebRTC in Drupal.

Keerthana has started creating a similar plugin for MediaWiki.

What is great about these plugins is that they don't require any browser plugins and they work with any server-side SIP infrastructure that you choose. Whether you are routing calls into a call center or simply using them on a personal blog, they are quick and convenient to install. Hopefully they will be made available as packages, like the DruCall packages for Debian and Ubuntu, enabling even faster installation with all dependencies.

Would you like to try running these plugins yourself and provide feedback to the students? Would you like to help deploy them for online communities using Drupal, WordPress or MediaWiki to power their web sites? Please come and discuss them with us in the Free-RTC mailing list.

You can read more about how to run your own SIP proxy for WebRTC in the RTC Quick Start Guide.

Finding all the phone numbers and ham radio callsigns in old emails

Do you have phone numbers and other contact details such as ham radio callsigns in old emails? Would you like a quick way to data-mine your inbox to find them and help migrate them to your address book?

Jaminy is working on Python scripts to do just that. Her project takes some inspiration from the Telify plugin for Firefox, which detects phone numbers in web pages and converts them to hyperlinks for click-to-dial. The popular libphonenumber from Google, used to format numbers on Android phones, is being used to help normalize any numbers found. If you would like to test the code against your own mailbox and address book, please make contact in the #debian-data channel on IRC.
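At its core, this kind of data-mining boils down to scanning message text for candidate matches and then normalizing them. As a rough, stdlib-only sketch of the idea (the regexes and function name below are my own illustration, not the project's code; libphonenumber's matching is far more robust than any simple regex):

```python
import re

# Rough stand-ins for the real matching logic: libphonenumber does far more
# robust detection and normalisation than a regex ever could.
PHONE_RE = re.compile(r'\+?\d[\d\s().-]{6,14}\d')
# Amateur radio callsign shape: 1-2 letter prefix, a digit, 1-4 letter suffix.
CALLSIGN_RE = re.compile(r'\b[A-Z]{1,2}\d[A-Z]{1,4}\b')

def mine_contacts(text):
    """Return (phone-number-like strings, callsign-like strings) found in text."""
    phones = [m.group() for m in PHONE_RE.finditer(text)]
    callsigns = CALLSIGN_RE.findall(text)
    return phones, callsigns

if __name__ == '__main__':
    body = "Hi, call me on +49 30 1234567 or find me on the air as DL1ABC. 73!"
    phones, callsigns = mine_contacts(body)
    print(phones)     # phone-number-like strings found
    print(callsigns)  # callsign-like strings found
```

The real scripts would then feed each candidate through libphonenumber for validation and formatting before migrating it into the address book, which is exactly the step a naive regex cannot do reliably.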

A truly peer-to-peer alternative to SIP, XMPP and WebRTC

The team at Savoir Faire Linux has been busy building the Ring softphone, a truly peer-to-peer solution based on the OpenDHT distributed hash table technology.

Several students (Simon, Olivier, Nicolas and Alok) are actively collaborating on this project, some of them have been fortunate enough to participate at SFL's offices in Montreal, Canada. These GSoC projects have also provided a great opportunity to raise Debian's profile in Montreal ahead of DebConf17 next year.

Linux Desktop Telepathy framework and reSIProcate

Another group of students, Mateus, Udit and Balram have been busy working on C++ projects involving the Telepathy framework and the reSIProcate SIP stack. Telepathy is the framework behind popular softphones such as GNOME Empathy that are installed by default on the GNU/Linux desktop.

I previously wrote about starting a new SIP-based connection manager for Telepathy based on reSIProcate. Using reSIProcate means more comprehensive support for all the features of SIP, better NAT traversal, IPv6 support, NAPTR support and TLS support. The combined impact of all these features is much greater connectivity and much greater convenience.

The students are extending that work, completing the buddy list functionality, improving error handling and looking at interaction with XMPP.

Streamlining provisioning of SIP accounts

Currently there is some manual effort for each user to take the SIP account settings from their Internet Telephony Service Provider (ITSP) and transpose these into the account settings required by their softphone.

Pranav has been working to close that gap, creating a JAR that can be embedded in Java softphones such as Jitsi, Lumicall and CSipSimple to automate as much of the provisioning process as possible. ITSPs are encouraged to test this client against their services and will be able to add details specific to their service through Github pull requests.

The project also hopes to provide streamlined provisioning mechanisms for privately operated SIP PBXes, such as the Asterisk and FreeSWITCH servers used in small businesses.

Improving SIP support in Apache Camel and the Jitsi softphone

Apache Camel's SIP component and the widely known Jitsi softphone both use the JAIN SIP library for Java.

Nik has been looking at issues faced by SIP users in both projects, adding support for the MESSAGE method in camel-sip and looking at why users sometimes see multiple password prompts for SIP accounts in Jitsi.

If you are trying either of these projects, you are very welcome to come and discuss them on the mailing lists, Camel users and Jitsi users.

GSoC students at DebConf16 and DebConf17 and other events

Many of us have been lucky to meet GSoC students attending DebConf, FOSDEM and other events in the past. From this year, Google expects the students to complete GSoC before they become eligible for any travel assistance. Some of the students will still be at DebConf16 next month, assisted by the regular travel budget and the diversity funding initiative. Nik and Mesut were already able to travel to Vienna for the recent MiniDebConf.

As mentioned earlier, several of the students and the mentors at Savoir Faire Linux are based in Montreal, Canada, the destination for DebConf17 next year and it is great to see the momentum already building for an event that promises to be very big.

Explore the world of Free Real-Time Communications (RTC)

If you are interested in knowing more about the Free RTC topic, you may find the following resources helpful:

RTC mentoring team 2016

We have been very fortunate to build a large team of mentors around the RTC-themed projects for 2016. Many of them are first time GSoC mentors and/or new to the Debian community. Some have successfully completed GSoC as students in the past. Each of them brings unique experience and leadership in their domain.

Helping GSoC projects in 2016 and beyond

Not everybody wants to commit to being a dedicated mentor for a GSoC student. In fact, there are many ways to help without being a mentor and many benefits of doing so.

Simply looking out for potential applicants for future rounds of GSoC and referring them to the debian-outreach mailing list or an existing mentor helps ensure we can identify talented students early and design projects around their capabilities and interests.

Testing the projects on an ad-hoc basis, greeting the students at DebConf and reading over the student wikis to find out where they are and introduce them to other developers in their area are all possible ways to help the projects succeed and foster long term engagement.

Google gives Debian a USD $500 grant for each student who completes a project successfully this year. If all 2016 students pass, that is over $10,000 to support Debian's mission.

Sunday, 19 June 2016

The Proprietarization of Android – Google Play Services and Apps

Free Software – | 17:11, Sunday, 19 June 2016

Android has a market share of ~70% for mobile devices which keeps growing. It is based on Linux and the Android Open Source Project (AOSP), so mostly Free Software. The problem is that every device you can buy is additionally stuffed full with proprietary software. These programs and apps are missing the essential four freedoms that all software should have. They serve all but your interests and are typically designed to spy on your every tap.

For this reason, the Free Software Foundation Europe started the Free Your Android campaign in 2012, which shows people how they can install F-Droid to get thousands of apps that respect their freedom. It also shows how it is possible to install alternative versions of Android such as Replicant, which is completely Free Software, or OmniROM, CopperheadOS and CyanogenMod, which are mostly Free Software. Many apps have been liberated, and the situation for running (more) free versions of Android has improved tremendously. You can even buy phones pre-installed with Replicant now.

However, there is an opposite trend as well: the growing proprietarization of Android. There have always been Google-specific proprietary apps that came pre-installed on phones, but many essential apps were free and part of AOSP. Over the past years, app after app has been replaced by a non-free counterpart. This Ars Technica article from 2013 has many examples, such as the calendar, camera and music apps. Even the keyboard has been abandoned, and useful features such as swipe-typing are only available if you run non-free software.

What Google did with the apps, it is now doing even with basic functions and APIs of the Android operating system: they are being moved into the Google Play Services. These proprietary software libraries are pre-installed on all official Android devices, and developers are pushed to use them for their apps. So even if an app is otherwise Free Software, if it includes the Google Play Services it is non-free, and it will no longer run on Android devices that do not have the Play Services installed.

One prominent example of this is the crypto-messenger Signal. It uses the Play Services to receive push notifications from Firebase Cloud Messaging (formerly Google Cloud Messaging) to minimize battery usage, but it doesn’t work if the Play Services are missing. And that is more common than you might think: it affects not only people running Replicant, but also people who would like to use Signal on their Blackberry, Jolla or Amazon device. As with the proprietary Google apps, you need a license from Google if you want to install the Play Services. To get this license, you need to conform with their guidelines, which many manufacturers do not want to or cannot do.

So the proprietarization of Android effectively cripples other versions of Android and makes them increasingly useless. It strengthens Google’s control over Android and makes life harder for competitors and the Free Software community.

But more control, more users for its own products and thus more revenue are not the only reasons for what Google is doing. Almost all Android devices are sold not by Google but by OEMs, and these are notoriously bad at updating Android on the devices they have already sold, leaving gaping security holes open and endangering their customers. Still, Google gets the bad press and an increasingly bad reputation for it. Since it cannot force the OEMs to provide updates, it moves more and more parts of Android into external libraries that it can silently upgrade itself on people’s devices. Android is becoming just a hollow shell for a Google OS.

Still, millions of people already have Android devices, and many more will have them in the future. So I think it is time well spent to increase those people’s freedom. Even though Android is being more and more crippled, it is still a solid base the community can build upon, without having to create its own mobile operating system from scratch and without the resources of a large multi-national company.

Free Your Android


StickerConstructorSpec compliant swirl

Elena ``of Valhalla'' | 08:15, Sunday, 19 June 2016


This evening I've played around a bit with the Sticker Constructor Specification, and this is the result:


Now I just have to:

* find somebody in Europe who prints good stickers and doesn't require Illustrator (or other proprietary software) to submit files for non-rectangular shapes
* find out which Debian team I should contact to submit the files so that they can be used by everybody interested.

But neither will happen today, nor probably tomorrow, because lazy O:-)

Edit: now that I'm awake I realized I forgot to thank @Enrico Zini and MadameZou for their help in combining my two proposals into a better design.

Source svg

Friday, 17 June 2016

FSFE summit: some updates and introducing our committee

English Planet – Dreierlei | 11:31, Friday, 17 June 2016

<figure class="wp-caption alignright" id="attachment_1475" style="width: 300px">Things are getting more and more concrete<figcaption class="wp-caption-text">Things are getting more and more concrete</figcaption></figure>The preparations for the first ever FSFE summit are getting more and more concrete, and I would like to share the recent steps we have made, to shed some light on what kind of event we are heading towards. This post updates information about our Call for Participation and our logo, and introduces our summit committee. Expect more in-depth posts about the venue, the program and its organisation in upcoming blog posts.

What happened so far?

We ran our Call for Participation for the FSFE summit until May 29, with an overwhelming quality and quantity of submissions. Thanks to all of you who took part or helped to spread the word. It seems like this is the right time for an FSFE summit.

Since the end of our call, in the first week of June, we have installed our summit committee (see below) and have been really excited reading through the many submissions. On the other hand, however, we had a really hard time choosing, and had to reject many talks that we would have liked to hear. Believe me, I did my very best to get as many speaker slots as possible from the overall QtCon organisation for our FSFE summit. Unfortunately, we still need to reject proposals. And by the way: if you submitted a talk, you will get a confirmation or rejection this week or next.

Last week we had an organisers’ meetup with everyone involved from KDAB, VLC, Qt, KDE and FSFE, to bring together the needs and demands of our particular communities under one roof. At this point, my big thanks to everyone involved, and in particular to Jesper K. Pedersen and Frances Tait from KDAB for offering such a collaborative and respectful environment for all our communities.

Update on the summit’s logo

In the meantime, our engaged designer Elio Qoshi – in consultation with Open Source Design – came up with a new version of our logo for the FSFE summit. The new version is more balanced and has a more modern style. This is highly appreciated; thank you very much, Elio.

The Summit Committee

In our Call for Participation, alongside the calls for talks, sessions and workshops, we included a call for volunteers to join our “program committee”. The idea behind it: we want to run a conference not only for our community – but also by our community! Obviously, such an ambition involves multiple aspects. One of them is to give our community a chance to decide on the program.

To remain capable of making decisions, we limited the number of committee members to ten. Sorry to those who were turned down, but our choices were based on a mix of professional experience (to reflect the quality of talks), FSFE contributions (to reflect the affiliation with our community) and country of origin (to reflect our European community). Additionally, part of the committee is composed of FSFE staff: our president Matthias Kirschner, our executive director Jonas Öberg, our enthusiastic intern Cellini Bedi and myself, head of summit organisation.

Finally, the summit committee is completed by six additional volunteers, two of them our Fellowship representatives Mirko Boehm and Nicolas Dietrich. Together we form a team of ten people from five European countries. Thanks to all of you for taking the time to help shape the first FSFE summit.

After a successful start, and given our mutual interest in extending this staff–community experience, we decided to keep the summit committee running for the whole time of organising the conference. Having such a board to advise on upcoming decisions shall help to shape a summit in the sense, and the name, of our community.

Thursday, 16 June 2016

Is Signal a threat to Free Software?

Free Software – | 15:03, Thursday, 16 June 2016

Someone made a lot of noise regarding the LibreSignal shutdown, called it a “new threat to Free Software” and compared it with Tivoization. I was then asked to explain the situation in a discussion with RMS, where I gave my own perspective on it. Since I already took the time to write it down, I might as well share it here for others to see:

Signal is a very popular communication system that consists of a server component and two clients. Both clients are Free Software, although the Android client uses a proprietary Google library. The server is Free Software with the exception of the parts required for voice communication (last time I checked).

The developers of Signal do not care very much about Free Software and have taken many steps in the past to control the distribution and use of their technology. There are many different issues I could discuss here, but I will focus on the comparison with Tivoization.

LibreSignal has been created to remove the dependency on the proprietary Google library. Its maintainers have been asked to change the name and the User Agent identifier. They have complied with both requests.

In a recent discussion with the creator of Signal, he once again made clear that while people are free to use his Free Software as long as they don’t use his trademark, they are not free to use the server infrastructure that he hosts and pays for to enable his own official Signal clients.

However, no steps have been taken to block LibreSignal from using his server. He would be able to identify LibreSignals clients based on their user agent, but this could of course be changed back easily.

The situation is not comparable to Tivoization at all, in my opinion. With Signal you are not prevented from running modified versions of the software on your own hardware. You are free to use the existing server software and host it yourself, and you are free to implement your own server and run your modified Signal client with it.

For many people, the problem is that Signal does not federate anymore, so their own server would be cut off from the main network. This, and the general hostility towards community involvement, led the existing LibreSignal developers to abandon the project.

That is all very unfortunate, but no inherent threat to Free Software. In my opinion, people should focus their energies on better technologies and initiatives that respect and encourage community involvement.


Wednesday, 15 June 2016

Freedom for whom?

English – Björn Schießle's Weblog | 20:34, Wednesday, 15 June 2016

We want freedom

CC BY SA 2.0 by Quinn Dombrowski

This discussion is really old. Since the first days of the Free Software movement, people have debated to whom the freedom in Free Software is directed: the users? The code? The developers? Often this goes along with a discussion about copyleft vs. non-protecting Free Software licenses like the BSD and MIT licenses. I don’t want to repeat this discussion but instead look at the question from a completely different angle: from the position of a software company and its business model.

If you talk to Free Software companies, you realize that very few have a business model completely based on Free Software. Most companies add proprietary extensions on top and use these as the main incentive for customers to buy their software. In 2008 Andrew Lampitt coined the term open core to describe this kind of business model. There are many ways to argue in favor of open core. One argument I hear quite often is that the proprietary parts are only useful for large enterprises, so nothing is taken away from the community. This reduces the community to the typical home user, which is an interesting way of looking at it. Why should we make such a distinction? And why do home users deserve software freedom more than large organizations?

I understand that freedom in the context of software is a concept which can sound scary to some companies at first. After all, that was the main reason Open Source was invented: a marketing campaign for Free Software to make business people feel more comfortable. Interestingly, this changes quickly once you go into more detail about what software freedom really means: more entrepreneurial freedom, control over the tools they use, software freedom as a precondition for privacy and security, independence, the freedom to choose the supplier with the best offering and, in the case of software development, the freedom to build on existing, well-established technology instead of building everything from scratch. These are freedoms well understood and appreciated by entrepreneurs, who demand them in many other areas of their daily business. This leads me to the conclusion that software freedom is not only something for home users but is also important for large organizations.

Open core often comes with an important side-effect. Most companies pick a strong copyleft license like the GNU GPL or the GNU AGPL, and then demand that every contributor sign a Contributor License Agreement (CLA). This CLA puts the company in a strong position: they are the only ones who can distribute the software under a proprietary license and add proprietary extensions. This effectively removes one of the biggest strengths of copyleft licenses. If you set CLAs aside, copyleft licenses are a great tool to create an ecosystem of equal participants. Equality is really important to make individuals and organizations feel confident that joining the initiative is worthwhile in the long term. Everybody having the same rights and the same duties is the only way to develop a strong ecosystem with many participants. Therefore it is no wonder that projects using CLAs often get slowed down and have a less diverse community.

Red Hat was one of the first companies to understand that all this, CLAs and proprietary extensions, does more harm than good. It slows down development, it keeps your community smaller than necessary, and it adds the burden of developing all the proprietary extensions on your own instead of leveraging the power of a large community which can consist of employees, hobbyists, partners and customers. This goes so far that Red Hat even embraces competitors like CentOS, which basically gives Red Hat Enterprise Linux away for free to people who don’t need the support. For a truly open organization this is not a problem but a great opportunity to spread the software and become more popular. That’s a key factor in making sure that Red Hat is the de facto standard when it comes to enterprise GNU/Linux distributions.

If an initiative is driven by a strong company, it can be useful to move some parts out to a neutral entity. Red Hat did this by founding the Fedora project. Another way to do this is by creating a foundation which makes sure that everyone has the same rights. Such a foundation should hold all the rights necessary to make sure the project can continue no matter what happens to individual participants, including companies. For the governance of such a foundation it is important that it is not controlled by a single entity.

This is exactly what makes me so excited about what we are doing at Nextcloud. We are building a completely free cloud solution, not only for home users but for everyone. This solution will be much more than just file sync and share; from a company point of view, things like calendar, contacts and video conferencing will become first-class citizens. All this will be Free Software, developed together with a great community. Home users, customers and partners are invited to be part of it, not just as consumers but as part of a large and diverse community. Everybody should be empowered to change things for the better. In order to make all this independent of a single company we will set up a foundation. As described above, the foundation will make sure that we have an intact and growing ecosystem with no single point of failure. This guarantees that Nextcloud can survive us and any other participant if needed.

Wednesday, 08 June 2016

A week ago …

Myriam's blog | 20:44, Wednesday, 08 June 2016

my cat Filoue turned 15. She outlived her nephew Stupsi, who sadly left us earlier this year.

This reminds me how long I have already been contributing to KDE :-) She has been sitting next to the keyboard, purring at almost everything I did over the last 15 years.


Working to pass GSoC - fsfe | 17:11, Wednesday, 08 June 2016

GSoC students have officially been coding since 23 May (about 2.5 weeks) and are almost half-way to the mid-summer evaluation (20 - 27 June). Students who haven't completed some meaningful work before that deadline don't receive payment and in such a large program, there is no possibility to give students extensions or let them try and catch up later.

Every project and every student is different; some students are still getting to know their environment while others have already done enough to pass the mid-summer evaluation.

I'd like to share a few tips to help students ensure they don't inadvertently fail the mid-summer evaluation.

Kill electronic distractions

As a developer of real-time communications projects, many people will find it ironic or hypocritical that this is at the top of my list.

Switch off the mobile phone or put it in silent mode so it doesn't even vibrate. Research has suggested that physically turning it off and putting it out of sight has significant benefits. Disabling the voicemail service can be an effective way of making sure no time is lost listening to a bunch of messages later. Some people may grumble at first but if they respect you, they'll get into the habit of emailing you and waiting for you to respond when you are not working.

Get out a piece of paper and make a list of all the desktop notifications on your computer, whether they are from incoming emails, social media, automatic updates, security alerts or whatever else. Then figure out how to disable them all one-by-one.

Use email to schedule fixed times for meetings with mentors. Some teams/projects also have fixed daily or weekly times for IRC chat. For a development project like GSoC, it is not necessary or productive to be constantly on call for 3 straight months.

Commit every day

Habits are a powerful thing. Successful students have a habit of making at least one commit every day. The "C" in GSoC is for Code and commits are a good way to prove that coding is taking place.

GSoC is not a job, it is like a freelance project. There is no safety-net for students who get sick or have an accident and mentors are not bosses, each student is expected to be their own boss. Although Google has started recommending students work full time, 40 hours per week, it is unlikely any mentors have any way to validate these hours. Mentors can look for a commit log, however, and simply won't be able to pass a student if there isn't code.

There may be one day per week where a student writes a blog or investigates a particularly difficult bug and puts a detailed report in the bug tracker but by the time we reach the second or third week of GSoC, most students are making at least one commit in 3 days out of every 5.

Consider working away from home/family/friends

Can you work without anybody interrupting you for at least five or six hours every day?

Do you feel pressure to help with housework, cooking, siblings or other relatives? Even if there is no pressure to do these things, do you find yourself wandering away from the computer to deal with them anyway?

Do family, friends or housemates engage in social activities, games or other things in close proximity to where you work?

All these things can make a difference between passing and failing.

Maybe these things were tolerable during high school or university. GSoC, however, is a stepping stone into professional life and that means making a conscious decision to shut those things out and focus. Some students have the ability to manage these distractions well, but it is not for everybody. Think about how leading sports stars or musicians find a time and space to be "in the zone" when training or rehearsing, this is where great developers need to be too.

Some students find the right space in a public library or campus computer lab. Some students have been working in hacker spaces or at empty desks in local IT companies. These environments can also provide great networking opportunities.

Managing another summer job concurrently with GSoC

It is no secret that some GSoC students have another job as well. Sometimes the mentor is aware of it, sometimes it has not been disclosed.

The fact is, some students have passed GSoC while doing a summer job or internship concurrently, but some have also failed badly in both GSoC and their summer job. Choosing one or the other is the best way to succeed, get the best results and maximize the quality of learning and community interaction. For students in this situation, it is not yet too late to decide to withdraw from GSoC or from the other job.

If doing a summer job concurrently with GSoC is unavoidable, the chance of success can be greatly increased by doing the GSoC work in the mornings, before starting the other job. Some students have found that they actually finish more quickly and produce better work when GSoC is constrained to a period of 4 or 5 hours each morning and their other job is only in the afternoon. On the other hand, if a student doesn't have the motivation or energy to get up and work on GSoC before the other job then this is a strong sign that it is better to withdraw from GSoC now.

Monday, 06 June 2016

World IPv6 Day and how to add IPv6 support to a web server

Seravo | 11:43, Monday, 06 June 2016

Today, the 6th of June, we celebrate World IPv6 Day. IPv6 is the new version of the Internet Protocol. IPv6 is important because, as everyone by now should know, the public IPv4 address space is running out. In fact, all IPv4 address blocks have already mostly been handed out by the registrars, and the shortage is being worked around by recycling old addresses and using NAT techniques, which cause problems for many VoIP and P2P services. IPv6 also has network-layer security (IPsec) built in.


We already wrote earlier about taking IPv6 into use on a workstation. Now we’ll explain how to add it to a web server.

Note that we deliberately avoid the old-school commands (ifconfig, route, netstat) below and instead use their modern replacements (ip and ss). It is time to move on to modern utilities too.

First you need to add an IPv6 address to your network interface. On a Debian-style Linux server, add a static address by editing the file /etc/network/interfaces like this:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
 # dns-* options are implemented by the resolvconf package, if installed

iface eth0 inet6 static
 address 2a00:14c0:1:307:aa51::158
 netmask 64
 gateway 2a00:14c0:1:307::1
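As a quick offline sanity check of the values above (not part of the original how-to), Python's standard ipaddress module can confirm that the configured gateway lies inside the host's /64 prefix and is therefore reachable on-link:

```python
import ipaddress

# Static configuration taken from /etc/network/interfaces above
iface = ipaddress.IPv6Interface("2a00:14c0:1:307:aa51::158/64")
gateway = ipaddress.IPv6Address("2a00:14c0:1:307::1")

print(iface.network)             # the on-link prefix: 2a00:14c0:1:307::/64
print(gateway in iface.network)  # True -> gateway is on-link
```

If the second line prints False, the gateway is not directly reachable and the `gateway` setting in the interfaces file needs another look.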

Then restart networking so that the changes take effect:

$ sudo /etc/init.d/networking restart

The interface configuration should now show something along the lines of:

$ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 52:54:00:87:05:3c brd ff:ff:ff:ff:ff:ff
 inet brd scope global eth0
 inet6 2a00:14c0:1:307:aa51::158/64 scope global 
 inet6 fe80::5054:ff:fe87:53c/64 scope link 

You may test from another machine running:

$ ping6 2a00:14c0:1:307:aa51::158

You can verify that the routes are sensible using the -6 option:

$ ip -6 route
2a00:14c0:1:307::/64 dev eth0 proto kernel metric 256 expires 2592027sec
fe80::/64 dev eth0 proto kernel metric 256 
default via fe80::20c:86ff:fe14:d038 dev eth0 proto kernel metric 1024 expires 1665sec hoplimit 64
default via 2a00:14c0:1:307::1 dev eth0 metric 1024

For route testing you can use the tool traceroute6. To check that the traffic really flows correctly, try logging in remotely using ssh -6.

Domain name records for IPv6

Now open http://[2a00:14c0:1:307:aa51::158]/ in your browser to see if your web server serves anything at the IPv6 address. Alternatively you can use a web-based test site if you don’t yet have IPv6 configured on your workstation.

Now that the service has an IPv6 address, you can start advertising it by adding a DNS AAAA record after your normal A record: AAAA 2a00:14c0:1:307:aa51::158

If you want to be extra cool, add a subdomain ipv6 (e.g., With such an address it is easy to visually test from any browser anywhere whether IPv6 works or not.

Configuring web servers Apache or Nginx

Next you need to configure your web server to listen on port 80 for both IPv4 and IPv6.

In Apache you don’t need to configure anything special, as long as the Listen and NameVirtualHost directives don’t specify an IP address but just port 80 in a generic way:

NameVirtualHost *:80
Listen 80

For Nginx, add the following listen lines inside the server { } section:

server {
  listen [::]:80 default_server;
  listen [::]:443 default_server ssl;

If the listen line is missing altogether, or it looks like listen 80;, it means Nginx was listening only on the IPv4 port 80.
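Note that modern Nginx binds a [::] listener to IPv6 only by default (the ipv6only parameter defaults to on since 1.3.4), so to serve both protocols it is safest to bind one socket per address family explicitly. This is an illustrative sketch, not taken from the original configuration:

```nginx
server {
    # one listening socket per address family
    listen 80 default_server;        # IPv4
    listen [::]:80 default_server;   # IPv6
}
```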

After the change, restart the server (a reload is not enough) with:

$ sudo service nginx restart

Confirm with ss that there indeed are servers listening on IPv6:

$ sudo ss -6lp
State Recv-Q Send-Q Local Address:Port Peer Address:Port 
LISTEN 0 128 :::http :::* users:(("nginx",25706,28),("nginx",25705,28),("nginx",1826,28))

You can test the result with:

curl -I -6 -v
* About to connect() to port 80 (#0)
*   Trying 2a00:14c0:1:307:aa51::158... connected
HTTP/1.1 302 Moved Temporarily
Server: nginx

Thursday, 02 June 2016

There is no Open Science without the use of open standards and Free Software

English Planet – Dreierlei | 23:43, Thursday, 02 June 2016

<figure class="wp-caption alignright" id="attachment_1423" style="width: 300px">99% of science is more than the final publication<figcaption class="wp-caption-text">99% of science happens before its final publication</figcaption></figure>Recently, the EU ministers responsible for research and innovation decided unanimously that by 2020, all results and scientific publications of publicly and publicly-private funded research must be freely available. This shall be ensured by the mandatory use of Open Access licenses that guarantee free access to publications. In addition “the data must be made accessible, unless there are well-founded reasons for not doing so, for example intellectual property rights or security or privacy issues”.

This important decision is long overdue and a good step towards opening up scientific results, and towards giving back to the public what they have already indirectly paid for with their tax money. However, the ministers’ decision misses the opportunity to declare and understand software as a part of the research, that is, to also include the free availability of the software used for publicly funded research. Indeed, there is not a single word in the press release about informatics, software, “computer-aided research” or anything alike. All the ministers seem to care about is the final paper that is published as an article.

But it is the year 2016, and it is time to understand that software is information and knowledge – just as any article is.

Software as integral part of modern research

Software is an integral part of nearly all modern research. From measurement, calculation and demonstration through to the later stages of statistics, writing and publication, nearly every step of a research project depends on and is covered by the use of software.

What good does it do the information society if a scientific paper is published under Open Access, but all the steps towards creating this publication were built on investments in intransparent, closed and proprietary software solutions and data formats? Especially as the creation of a paper often involves many years of investigation and millions of euros of funding.

What good does it do the science community if the software that was used to achieve a result is not transparent? Software is by no means free of flaws: miscalculated prison sentences, hardware scanners that randomly alter written numbers, or deliberate software abuse to cheat on emissions by car manufacturers… How can a researcher really believe any result of a software that no one is able to look into and verify to work correctly?

If the EU ministers responsible for research and innovation really aim at opening up scientific knowledge but keep software out of their scope, they make a very selective choice and only open up the very final stage of the research process. Articles can only list the outcomes and results of a research process. Not making the software freely accessible means no one can see, reproduce or test the process itself or the mathematical methods that were used to achieve these results.

Most researchers would love to have the software free as well

<figure class="wp-caption alignleft" id="attachment_1431" style="width: 300px">To understand a result you often need to analyze all parts of it<figcaption class="wp-caption-text">To understand a result you often need to analyze all parts of it</figcaption></figure>The EU ministers’ decision also does not seem to be in line with what a majority in the scientific world is waiting for, because having the software and data used for research freely accessible is definitely in the interest of many researchers.

As an example, at the end of 2015 I was lucky to be invited to help shape the outcome of the JPI Climate symposium “Designing Comprehensive Open Knowledge Policies to Face Climate Change”. JPI Climate is a collaboration between 17 European countries to jointly coordinate their climate research and fund new transnational research initiatives. Given this transnational, European background, JPI Climate was setting up common Open Knowledge Policies. These shall help to find sustainable ways of archiving and to ease the exchange of data and results – which in turn shall boost innovation in climate research. Naturally, JPI Climate called on its members to publish under Open Access. But in contrast to the EU ministers, JPI Climate does understand that the whole research process leading up to the publication of the final article is just as important as the publication itself:

the symposium’s results confirm that access and availability issues are just one issue within the “openness” approach of “Open Knowledge”/“Open Science”; therefore, comprehensive policies (i.e. tackling the whole research cycle) should encompass measures related to “reuse and re-distribution” of data, information and knowledge and “universal participation” when designing, creating, disseminating and evaluating such data, information and knowledge.

Hence, participants at the symposium agreed that

4. Research data, metadata, software, methods, etc. funded by public bodies should be open/public. Open licensing for data and software avoids collusion with legal restrictions at national or international level.


6. Open software/formats (independent from vendors) should be mandatory for data repositories and Data Management Plans (DMPs). Research Funding Organisations should take the lead and foster changes of business models when dealing with data


In addition to research results and data, open source software (used in the research process) should be mandatory and published under a free license.

Open Science needs Free Software

Obviously, the science community is several steps ahead of the EU ministers responsible for research and innovation. Many researchers all over Europe already call for the mandatory use of Free Software in the research process and for the publication of software whose development was publicly funded. If the EU ministers really want to realize their proclaimed vision that “knowledge should be freely shared”, they should listen to their community and follow its advice.

It is time to understand that software itself is knowledge and an integral part of creating more (scientific) knowledge. And it is time for the European Union to take note: there is no Open Science without the use of open standards and Free Software!

Configuring backups at FSFE

free software - Bits of Freedom | 19:39, Thursday, 02 June 2016

Configuring backups at FSFE

During the past month, I've been testing a new backup strategy and system for the FSFE which is built around Duplicity and a remote storage accessible via SSH. While we've gone through some of the attack vectors to see if this makes sense, I would appreciate more eyes on it. Our previous backup system was, in a way, significantly simpler: a single computer with massive amounts of disk, talking directly via ssh to the client machines and running rsnapshot.

The reason we set out to configure a new backup system at all is that our previous one would not be available for much longer, and given our need to store terabytes of backup data, we could not (for a reasonable cost) add the needed backup space to one of our current hosts. So instead I decided to investigate separating the storage from the backup controller, with the idea that if the storage is encrypted, we would be more flexible in where and how it is stored, as access would not (only) depend on physical security.

There are three agents of this new system:

  1. Our backup server (BACKUP)
  2. Our client machine which is supposed to be backed up (CLIENT)
  3. A remote storage accessible via SSH and having ~10TB storage (REMOTE)

The flow of our system at the moment is basically as follows:

  1. BACKUP controls and initiates the backups by pulling data from CLIENT,
  2. CLIENT allows BACKUP access to its file system in read-only mode via sshfs (this access is enforced by an ssh key specifically starting command="/usr/lib/openssh/sftp-server -R"),
  3. BACKUP runs duplicity and encrypts backups with a specific (non-published) GnuPG key whose secret key passphrase is not stored on BACKUP itself but only kept by our system administrators,
  4. Duplicity on BACKUP sends the backup archives to REMOTE
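The read-only access in step 2 can be pinned down on CLIENT with an authorized_keys entry along these lines (the key material and comment are placeholders; the forced command is the one quoted above):

```
# Restrict BACKUP's key to a read-only SFTP server and nothing else
command="/usr/lib/openssh/sftp-server -R",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA... backup@fsfe
```

With this in place, even a compromised BACKUP host can only read files over SFTP; it cannot open a shell, forward ports, or write to CLIENT.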

Attack vectors

Here are the attack vectors I've currently considered in this:

  • If CLIENT is compromised, the attacker does not get any access to the backups as there's no authentication in that direction.
  • If BACKUP is compromised, the attacker can remove backups (since BACKUP has access to REMOTE), but is not able to affect CLIENT (as it's a read-only permission).
  • If BACKUP is compromised, an attacker can not read backups as they are encrypted.
  • If REMOTE is compromised, the attacker only gets encrypted containers for which they don't have the key.

Of course, there are issues with this. Given the controlling function of BACKUP, an attacker could for instance try to sneak under the radar and replace or destroy files in transit between BACKUP and REMOTE, ultimately rendering our backup archives invalid. If BACKUP is compromised, even with just read-only permission, it may be possible to read enough data from our servers to find authentication keys or similar which provides a way to escalate the permissions.

Those drawbacks are likely to be the same regardless of what strategy we employ though. One way around them would be that the client machine encrypts data before sending it to the backup server, but this places a stronger emphasis on having local software installed on the client machine, which we try to avoid in favor of a simple ssh connection.

Tuesday, 31 May 2016

Let’s Encrypt NearlyFreeSpeech

English― | 22:22, Tuesday, 31 May 2016

To make TLS the default, I needed to automate the certificate renewal process. In doing so, I’ve come up with a script which may be handy for all clients of the NearlyFreeSpeech hosting provider.

Let’s Encrypt NearlyFreeSpeech uses Lukas Schauer’s ACME client and requires no configuration. To start using it, first get the code onto your local machine by running:

cd ${some_directory?}
git clone

${some_directory?} is an arbitrary directory to put the code into; it may be $HOME for example. Once the code is in place, the following command is sufficient to install certificates on an NFS-hosted site:

${some_directory?}/lets-encrypt-nfs/pull-and-run \
    ./remote --setup \

It will log into your NFS site, fetch all the necessary repositories from GitHub, run the Let’s Encrypt client, enable TLS on the site and finally describe how to set up a scheduled task which will renew the certificate automatically. Simply follow the instructions and you should be all set.
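The script's output describes the exact scheduled task to create; purely as an illustrative sketch (the schedule here is my assumption, not the script's), a crontab-style entry reusing the on-host renewal command from this post could look like:

```
# hypothetical schedule -- run the renewal every Monday at 04:00
0 4 * * 1  /home/private/lets-encrypt-nfs/pull-and-run ./renew
```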

N.B.: Enabling encryption for the first time turns HTTP→HTTPS redirection for the site on. This can be disabled in NFS site settings under Canonical URL option.

Running on NFS host

The script doesn’t need to be run from a local machine; it can be executed on the NFS host directly. Getting the code onto the machine is analogous to the local installation, i.e.:

git clone \

Once cloning is done, getting certificates is done with the following command:

/home/private/lets-encrypt-nfs/pull-and-run ./renew

Per-Alias Site Root

At the moment, per-alias site roots are not supported by the script.

Why Privacy is more than Crypto

emergency exit | 15:57, Tuesday, 31 May 2016

During the last year hell seems to have frozen over: our corporate overlords neighbours at Apple, Google and Facebook have all pushed for crypto in one way or another. For Facebook (WhatsApp) and Google (Allo) the messenger crypto has even been implemented by none less than the famous, endorsed-by-Edward-Snowden anarchist and hacker Moxie Marlinspike! So all is well on the privacy front! … but is it really?

EDIT: A French version of this post is available here. I can’t verify its correctness, but I trust the translators to have done their best and thank them for the effort!


I have argued some points on mobile messaging security before and I have also spoken about this in a podcast (in German), but I felt I needed to write about it again, since there is a lot of confusion about what privacy and security actually mean (in general, but especially in the context of messaging) and recent developments give, in my opinion, a false sense of security.

I will discuss WhatsApp and the Facebook Messenger (both owned by Facebook), Skype (owned by Microsoft), Telegram (?), Signal (Open Whisper Systems), Threema (owned by Threema GmbH), Allo (owned by Google) and some XMPP clients, as well as briefly touching on Tox and Briar. I will not discuss “features”, even privacy-related ones like “message has been read” notifiers, which are obviously bad. I will also not discuss anonymity, which is a related subject but, from my point of view, less important when dealing with “SMS-replacement apps”, as you actually do know your peers anyway.



When most people speak of the privacy and security of their communication in regard to messaging, it is usually about encryption, or more precisely the encryption of data in motion: the protection of your chat message while it travels to your peer.

There are basically three ways of doing this:

  1. no encryption: everyone on your local WiFi or some random sysadmin on the internet backbone can read it
  2. transport encryption: connections to and from service provider e.g. WhatsApp server and in between service providers are safe, but service provider can read the message
  3. end-to-end encryption: message is only readable by your peer, but time of communication and participants still known to service provider

Also there is something called perfect forward secrecy which (counter-intuitively) means that past communications cannot be decrypted even if your long-term key is revealed/stolen.
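The idea behind forward secrecy can be sketched with a toy ephemeral Diffie-Hellman exchange. This is an illustration only, with toy-sized parameters of my choosing; real messengers use vetted curves (e.g. X25519) inside authenticated protocols:

```python
import secrets

# Toy finite-field Diffie-Hellman illustrating forward secrecy: each
# session uses fresh, ephemeral secrets that are thrown away afterwards,
# so a later key compromise cannot decrypt recorded past traffic.
P = 2**127 - 1   # a toy-sized prime modulus (far too small for real use)
G = 3            # public generator

def ephemeral_session_key():
    a = secrets.randbelow(P - 3) + 2   # Alice's ephemeral secret
    b = secrets.randbelow(P - 3) + 2   # Bob's ephemeral secret
    A, B = pow(G, a, P), pow(G, b, P)  # public values sent over the wire
    k_alice = pow(B, a, P)             # both sides derive the same key
    k_bob = pow(A, b, P)
    assert k_alice == k_bob
    return k_alice                     # a and b go out of scope: forgotten

k1 = ephemeral_session_key()
k2 = ephemeral_session_key()
print(k1 != k2)  # a fresh key every session -> past sessions stay safe
```

Because the exponents a and b are discarded after each session, an attacker who records the traffic and later steals a device's long-term identity key still cannot recover k1 or k2.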

Back in the day most apps, including WhatsApp, were actually in category 1, but today I expect almost all apps to be at least in category 2. This reduces the chance of large-scale unnoticed eavesdropping (which is still possible with e-mail, for example), but it is obviously not enough, as service providers could be evil or can be forced to cooperate with potentially evil governments or spy agencies lacking democratic control.

Therefore you really want your messenger to do end-to-end encryption, and right now the following all do (sorted by estimated size): WhatsApp, Signal, Threema, and XMPP clients with GPG/OTR/OMEMO (ChatSecure, Conversations, Kontalk).

Messengers that have a special operating mode (“secret chat” or “incognito mode”) providing (3) are Telegram and Google Allo. It is very unfortunate that it is not turned on by default, so I wouldn’t recommend them. If you are forced to use one of these, always make sure to select the private mode. It should be noted that Telegram’s end-to-end encryption is viewed as less robust by experts, although most agree that actual text-recovery attacks are not feasible.

Other popular programs like Facebook Messenger or Skype do not provide end-to-end encryption and should definitely be avoided. It has actually been proven that Skype parses your messages, so I won’t discuss these two any further.

Software Freedom and device integrity

Ok, so now the data is safe while traveling from you to your friend, but what about before and after it is sent? Can’t someone also eavesdrop on the phone of the sender or the recipient, before the message is sent or after it is received? Yes, they can, and in Germany the government has already actively used “Quellen-Telekommunikationsüberwachung” (communication surveillance at the source) precisely to circumvent encryption.

Let us revisit the distinction between (2.) and (3.) above. The main difference between transport encryption and end-to-end encryption is that you don’t have to trust the service provider anymore… WRONG: in almost all cases the entity running the server is the same entity providing you with the program, so of course you must trust the program to actually do what it claims it does. Or more precisely, there must be social and technical means that provide you with sufficient certainty that the program is trustworthy. Otherwise there is little gained from end-to-end encryption.


Software Freedom

This is where Software Freedom comes into the picture. If the source code is public, there are going to be lots of hackers and volunteers who check whether the program actually encrypts the content. While even this public scrutiny cannot give you 100% security, it is widely recognized as the best process to ensure that a program is generally secure and that security problems become known (and then also fixed). Software Freedom also enables unofficial or competing implementations of the messenger app that are still compatible; so if there are certain things that you don’t like or mistrust about the official app, you can choose another one and still chat with your friends.

Some companies like Threema that don’t provide you with their source code claim, of course, that it is not required for trust. They say that they had their source code audited by some other company (which they usually paid to do this), but if you don’t trust the original company, why would you trust someone contracted by them? More importantly, how do you know that the version checked by the third party is actually the same as the version installed on your phone? [you do get updates quite often, don’t you?]


This is also true for governmental or public entities that do these kinds of audits. Depending on your threat model or your assumptions about society you might be inclined to trust public institutions more than private ones (or the other way around), but if you look at e.g. Germany, with the TÜV there is actually one organisation that checks both the trustworthiness of messenger apps and whether cars produce the correct amount of pollution. And we all know how well that went!


So when deciding on trusting a party, you need to consider:

  1. benevolence: the party doesn’t want to compromise your privacy and/or is itself affected
  2. competence: the party is technically capable of protecting your privacy and identifying/fixing issues
  3. integrity: the party cannot be bought, bribed or infiltrated by secret services or other malicious third parties

After the Snowden revelations it should be very obvious that the public is the only party that can collectively fulfill these requirements so the public availability of the source code is absolutely crucial. This rules out WhatsApp, Google Allo and Threema.

“Wait a minute… but are there no other ways to check that the data in motion is actually encrypted?” Ah, of course, there are, as Threema will point out, or other people for WhatsApp. But the important part is that the service provider controls the app on your device, so they can listen in before encryption/after decryption or just “steal” your decryption keys. “I don’t believe X would do such a thing” Please keep in mind that even if you trust Facebook or Google (which you shouldn’t), can you trust them to not comply with court orders? If yes, why did you want end-to-end encryption in the first place? “Wouldn’t someone notice?”  Hard to say; if they always did this, you might be able to recognize it from analyzing the app. But maybe they do this:

if (suspectList.contains(userID)) { /* enable the hidden behaviour */ }

So not everyone is affected and the behaviour is never exhibited in “lab conditions”. Or the generation of your key is manipulated so that it is less random and follows a pattern that is more easily cracked. There are multiple angles to this, most of which could easily be deployed in a later update or hidden within other features. Note also that getting “on the list” is quite easy: current NSA regulations allow more than 25,000 people to be added for each “seed” suspect.
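To make this concrete, here is a minimal sketch of how such a targeted backdoor could hide inside otherwise honest-looking code. Every name and all logic here is invented for illustration and not taken from any real app:

```python
# Hypothetical sketch of a targeted backdoor; every name here is invented.
# For everyone not on the server-supplied list the app behaves perfectly,
# so the malicious path never shows up under "lab conditions".

SUSPECT_LIST = {"user-4711"}  # silently pushed by the vendor's server

def send_message(user_id, plaintext, encrypt, exfiltrate):
    if user_id in SUSPECT_LIST:
        exfiltrate(plaintext)   # plaintext copied out before encryption
    return encrypt(plaintext)   # ciphertext looks normal either way

leaked = []
fake_encrypt = lambda m: m[::-1]  # stand-in for a real cipher

ciphertext = send_message("user-4711", "meet at noon", fake_encrypt, leaked.append)
clean = send_message("user-0001", "hello", fake_encrypt, leaked.append)

print(ciphertext, clean)  # both look like ordinary "encrypted" output
print(leaked)             # only the targeted user's plaintext was exfiltrated
```

An auditor testing with their own account (not on the list) would never observe the exfiltration; that is exactly why public source code and verifiable builds matter.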

In light of this it is very bad that Open Whisper Systems and Moxie Marlinspike (the aforementioned famous author of Signal) publicly praise Facebook and Google, thereby increasing trust in their apps [although it is not bad per se that they helped add the crypto of course]. I am fairly sure they cannot rule out any of the above, because they have not seen the full source code to the apps, nor do they know what future updates will contain — nor would we want to have to rely on them for that matter!

The Signal Messenger

“Ok, I got it. I will use Free and Open Source Software. Like the original Signal.” Now it becomes tricky. While the source code of the Signal client software is free/open/libre, it requires other non-free/closed/proprietary components to run. These components are not essential to the functionality, but they (a) leak some metadata to Google (more on metadata later) and (b) compromise the integrity of your device.

The last part means that if even a small part of your application is not trustworthy, then the rest isn’t either. This is even more severe for components running with system privileges, as they can do basically anything at all with your phone. And it is “especially impossible” to trust non-free components that regularly send/receive data to/from other computers, like these Google services. Now it is true that these components are already included in most of the Android phones used in the world, and it is also true that there are very few devices that actually run entirely free of non-free components, so from my point of view it is not problematic per se to make use of them when available. But to mandate their use means to exclude people who require a higher level of security (even if available!); who use alternative, more secure versions of Android like CopperheadOS; or who just happen to have a phone without these Google services (especially common in developing countries). Ultimately Signal creates a “network effect” that discourages improving the overall trustworthiness of the mobile device, because it punishes users who do so. This undermines many of the promises its authors give.

To make matters worse: OpenWhisperSystems not only don’t support fully free systems, but have threatened to take legal and technical action to prevent independent developers from offering a modified version of the Signal client app that would work without the non-free Google components and could still interact with other Signal users ([1] [2] [3]). Because of this, independent projects like LibreSignal are now stalled. Very much in contrast to their Free Software license, they oppose any clients for the Signal network not distributed by them. In this regard the Signal app is less usable and less trustworthy than e.g. Telegram, which encourages independent clients for its servers and has fully free versions.

Just so that there is no wrong impression here: I don’t believe in some kind of conspiracy between Google and Moxie Marlinspike, and I thank him for making his position clear in a friendly manner (at least in the last statements), but I do think that the aggressive protection of their brand and their insistence on controlling all client software for their network are damaging the overall struggle for trustworthy communication.

(De-)centrality, vendor control and metadata

An important part of a communication network is its topology, i.e. the way the network is structured. As can be seen in the picture above, there are different approaches that are (more or less) widely used. So while the last section dealt with what is happening on your phone, this one will discuss what is happening on the servers and which role they play. It is important to note that even in centralized networks some communication might still be peer-to-peer (not going through the center), but the distinction is that they require central servers to operate.

Centralized networks

Centralized networks are the most common; all of the aforementioned apps (WhatsApp, Telegram, Signal, Threema, Allo) are based on them. While a lot of internet services used to be decentralized, like E-Mail or the World Wide Web, the last years have seen many centralized services appear. One could for example say that Facebook is a centralized service built on the originally decentralized WWW structure.

Usually centralized networks are part of a bigger brand or product that is marketed as one solution (in our case to the issue of texting/SMS). For companies selling/offering these solutions this has the advantage that they retain full control over the entire system and can change it rather quickly, pushing new functionality to all users.

Even if we assume that the service has end-to-end encryption and even if there is a client app that is Free Software, the following problems remain:

  1. metadata: your messages’ content is encrypted, but the who-when-where information is still readable by the service provider
  2. denial of service: you may be blocked from using the service by either the service provider or by your government

There is also the more general problem that a privately run centralized service can decide which features to add independently of whether its users actually consider them features or perhaps “anti-features”, e.g. telling other users whether you are “online” or not. Some of these could be removed from the app on your phone if it actually is Free Software, but some are tied to the centralized structure. I might write more on this in a separate article some time.


Metadata

As explained above, metadata is all data that is not the content of your message. You might think that this data is unimportant, but recent studies show that the opposite is true. Metadata includes: when you are “online” / whether your phone has internet access; the time of your messages and who you are texting with; a rough estimate of the length of the messages; your IP address, which can reveal rather accurately where you currently are (at work, at home, out of town et cetera); possibly also security-related information about your device (which operating system, which phone model…). This information has a lot of privacy-threatening value and the US secret services actually use it to justify targeted killings (see above)!

The amount of metadata a centralized service sees depends on the exact implementation, e.g. the “group chat” feature in Signal and supposedly also Threema is client-based, so in theory the server knows nothing about the groups. On the other hand the server has timestamps from your communication and can likely correlate these. Again it is important to note that while your service provider may not log this information by default (some information must be retained, some could be deleted immediately), it might be forced to log more data by secret services. Signal (as mentioned before) only works in conjunction with some non-free components from Google or Apple, who then always get some of your metadata, including your IP address (and thus physical position) and the time you receive messages.
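To illustrate why timestamps alone are valuable, here is a small invented example of how a server could correlate send and receive times to reconstruct who talks with whom, without decrypting anything (all data made up):

```python
# Hypothetical sketch: what a server can infer from timestamps alone,
# without ever reading message content. All events are invented.

# (user, unix_timestamp) events the server necessarily observes
sent     = [("alice", 100), ("alice", 205), ("carol", 150)]
received = [("bob", 101), ("bob", 206), ("dave", 152)]

def correlate(sent, received, window=3):
    """Guess conversation partners: a receive shortly after a send."""
    pairs = {}
    for s_user, s_t in sent:
        for r_user, r_t in received:
            if 0 <= r_t - s_t <= window:
                key = (s_user, r_user)
                pairs[key] = pairs.get(key, 0) + 1
    return pairs

print(correlate(sent, received))
# {('alice', 'bob'): 2, ('carol', 'dave'): 1}: a social graph, no decryption needed
```

Real traffic is noisier than this toy example, but with enough messages the statistics sharpen quickly, which is why metadata retention is such a serious problem.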

More information on metadata here and here.

Denial of service

Another major drawback of centralized services is that they can decide not to serve you at all if they don’t want to or are obliged not to by law. Since many of the services require your phone number to register and they operate from the US, they might deny you service if you are Cuban, for example. This is especially important since we are dealing with encryption, which is highly regulated in the US.

As part of its anti-terrorism measures Germany has just introduced a new law that requires registering your ID when getting a SIM card, even a prepaid one. While I don’t think it likely, this does open the possibility of blacklisting people and pressuring companies to exclude them from service.

Instead of working with the companies, a malicious government can of course also target the service directly. Operating from a few central servers makes the infrastructure much more vulnerable to being blocked nationwide. This has been reported for Signal and Telegram in China.

Disconnected Networks

When the server source code of a service provider is Free and Open Source Software, you can set up your own service if you distrust the provider. This seems like a big advantage and is argued as such by Moxie Marlinspike:

Where as before you could switch hosts, or even decide to run your own server, now users are simply switching entire networks. [...] If a centralized provider with an open source infrastructure ever makes horrible changes, those that disagree have the software they need to run their own alternative instead.

And of course this is better than not having the possibility to roll your own, but the inherent value of a “social” network comes from the people who use it, and it is not easy to switch if you lose the connection to your friends. This is why alternatives to Facebook have such a hard time. Even if they were better in every aspect, they just don’t have your friends.

Yes, it is easier for mobile apps that identify people via phone number, because it means you at least quickly find your friends on a new network. But for every non-technical person it is really confusing to keep five different apps around just to keep in touch with most of their friends, so switching networks should always be the ultima ratio.

Note that while OpenWhisperSystems claim to be in this category, in reality they only publish parts of the Signal server source code, so you are not able to set up a server that has the same functionality (more precisely, the telephoning part is missing).


Federated networks

Federation is a concept which solves the aforementioned problem by having the service providers speak with each other as well. So you can change the provider and possibly the app you are using, but you will still be able to communicate with people registered on the old server. E-Mail is a typical example of a federated system: it doesn’t matter which provider you or your friends have chosen, all people are able to reach all other people. Imagine how ridiculous it would be if you could only reach people on your own provider!

The drawback from a developer’s and/or company’s perspective is that you have to publicly define the communication protocols, and because the standardization process can be complicated and lengthy, you are less flexible in changing the whole system. I do concur that this makes it more difficult for good features to quickly become available to most people, but as mentioned previously, I think that from a privacy and security point of view it is clearly a feature, because it involves more people and weakens the provider’s ability to push unwanted features on the users; and most importantly because there is no more lock-in effect. As a bonus, these kinds of networks quickly produce different software implementations, both for the software running at the end user and for the software running on the servers. This makes the system more robust against attacks and ensures that weaknesses/bugs in one piece of software don’t affect the entire system.
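The routing idea behind federation can be sketched in a few lines (all provider names invented): an address carries its provider's domain, so any server can forward a message to any other without a central party:

```python
# Minimal sketch of federated routing; provider names are invented.
# SERVERS stands in for DNS plus independently operated server processes.

SERVERS = {}  # domain -> {user: inbox}

def register(address):
    user, domain = address.split("@")
    SERVERS.setdefault(domain, {}).setdefault(user, [])

def send(sender, recipient, body):
    user, domain = recipient.split("@")
    # the sender's provider hands the message to the recipient's provider;
    # neither user needs an account on the other side
    SERVERS[domain][user].append((sender, body))

register("alice@provider-a.example")
register("bob@provider-b.example")
send("alice@provider-a.example", "bob@provider-b.example", "hi!")

print(SERVERS["provider-b.example"]["bob"])
# [('alice@provider-a.example', 'hi!')]
```

This is exactly the addressing scheme of e-mail and XMPP: the domain part is all a server needs to route the message onward.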

And, of course, as previously mentioned, the metadata is spread between different providers (which makes it harder to track all users at once) and you get to choose which of them gets yours or whether you want to operate your own. Plus, it becomes very difficult to block all providers, and you could switch in case one discriminates against you (see “Denial of Service” above).

As a sidenote: It should be mentioned that federation does imply that some metadata will be seen by both your service provider and your peer’s service provider. In the case of e-mail this is quite a lot, but this is not required by federation per se, i.e. a well designed federated system could avoid sharing almost all metadata between two service providers — other than the fact that there is a user account with a certain ID on that server.

So, is there such a system for instant messaging / texting? Yes, there is: it is called XMPP. While it originally did not include strong encryption, there is now encryption (OMEMO) that provides the same level of security as the Signal protocol. There are also great mobile apps for Android (“Conversations”) and iOS (“ChatSecure”), and for every other platform in the world as well.

The drawback? Like e-mail, you need to set up an account somewhere, and there is no automatic association with telephone numbers, so you not only need to convince your friends to use this fancy new program, but also manually find out which provider and username they have chosen. Independence from the phone number system might be seen as a feature by some, but for a replacement for SMS this seems unfit.

The solution: Kontalk, a messenger based on XMPP that still does automatic contact discovery via the phone numbers in your address book. Unfortunately it is not yet as mature as the other applications mentioned, i.e. it currently still lacks group chats and there is no support for iOS. But Kontalk does prove that it is viable to build the same features on XMPP that you have come to expect from applications like WhatsApp or Telegram. So from my point of view it is only a matter of time until these federated solutions reach feature parity and similar usability. Some agree with this point of view, some don’t.
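Contact discovery over phone numbers can be done without uploading the raw address book. A common approach is to match hashes of phone numbers; the sketch below uses invented data, and Kontalk's actual protocol may differ:

```python
import hashlib

# Sketch of hash-based phone-number contact discovery.
# All numbers and account names are invented.

def h(number):
    return hashlib.sha256(number.encode()).hexdigest()

# server side: hashes of registered users' numbers -> their account IDs
REGISTERED = {
    h("+15551234"): "alice@kontalk.example",
    h("+15559999"): "bob@kontalk.example",
}

def discover(address_book):
    """Client sends only hashes; server returns accounts for matches."""
    return {n: REGISTERED[h(n)] for n in address_book if h(n) in REGISTERED}

print(discover(["+15551234", "+15550000"]))
# {'+15551234': 'alice@kontalk.example'}
```

Note that because the phone number space is small, plain hashing gives only limited protection against a curious server; real deployments need additional measures on top.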

Peer-to-Peer networks

Peer-to-peer networks completely eliminate the server and thereby all metadata at centralized locations. This kind of network is unbeatable from a privacy and freedom perspective, and it is also almost impossible for an authority to block. An example of a peer-to-peer application is Tox; another one is Ricochet (edit: not for mobile); and there is Briar, still under development, which additionally adds anonymity, so that not even your peer knows your IP address. Unfortunately there are fundamental issues on mobile devices that make it hard to maintain the many connections required for these networks. Additionally it seems impossible right now to do a phone-number-to-user mapping, so there can be no automatic contact discovery.

While I don’t currently see these kinds of apps stealing market share from WhatsApp, there are use cases, especially when you are being actively targeted by surveillance and/or you have a group of people who collectively decide to move to such an app for their communication, e.g. political organisations.


Summary

  • Privacy is getting more and more attention and people are actively looking to protect themselves better.
  • It can be considered positive that major software providers feel they have to react to this, adding encryption to their software; and who knows, maybe it does make life for the NSA a little bit harder.
  • However there is no reason that we should trust them any more than we have previously, as there is no way for us to know what their apps actually do, and there remain many ways they can spy on us.
  • If you are currently using WhatsApp, Skype, Threema or Allo and expect a similar experience, you might consider switching to Telegram or Signal. They are better than those previously mentioned (in different ways), but they are far from perfect, as I have shown. We need federation in the medium to long term.
  • Even if they seem to be nice people and very skilled hackers, we cannot trust OpenWhisperSystems to deliver us from surveillance as they are blind to certain issues and not very open to cooperation with the community.
  • Some cool things are cooking in XMPP-land, keep an eye out for Conversations, ChatSecure and Kontalk. If you can, support them with coding skills, donations or friendly e-mails.
  • If you want a zero-metadata approach and/or anonymity, try out Tox or wait for Briar.

YunoHost 2.4 is available

Marcus's Blog | 15:27, Tuesday, 31 May 2016

YunoHost is a great solution for self-hosting. It can be easily installed on a Debian Stable base.

I have started using YunoHost with version 2.2. One year after this release the YunoHost team has recently announced the new version 2.4.

YunoHost Home Panel

The upgrade procedure is described very well and I was able to apply the upgrades in a matter of minutes.

If you have not heard about YunoHost yet, I will briefly outline the core features:

  • easy to install self-hosting solution
  • large collection of applications to host (owncloud, wallabag, ttrss, roundcube, dokuwiki, wordpress …)
  • Single sign-on
  • maintenance and upgrade capabilities via Web UI or CLI
  • (optional) integrated DynDNS service
  • integrated Backup (new in 2.4)
  • option to install custom web applications including SQL DB
  • friendly community forum

I am using a Shuttle box with an Intel Atom CPU and it works very well. I have since added a Samsung SSD, so the mini server does not contain any rotating parts and runs silently.

I have set up Let’s Encrypt certificates by following this well written tutorial. In future versions of YunoHost this might well be integrated into the main system, so one could easily activate Let’s Encrypt from within the Web UI or via the CLI.

As YunoHost is multi-user capable, I was able to provide accounts for other family members as well.

If you have any questions about the setup, feel free to contact me or to make yourself comfortable at the YunoHost Forum.

Bootstrapping freedom and openness

free software - Bits of Freedom | 08:24, Tuesday, 31 May 2016

Bootstrapping freedom and openness

Question: Is it legitimate to use proprietary software to further free and open source software?

Through this article, I will try to approach this question by taking a look at the history of free software and open educational resources, relating this to what I will refer to as the process of bootstrapping freedom and openness.

The history of open educational resources is a useful starting point, as it is the best known. You may not have thought about it, and I certainly did not pay much attention to it. But the fact of the matter is that most of us have learned to read and write, and to do maths and science, with non-open educational resources.

One of my favorite books in school was Alpha - Mathematics Handbook by Lennart Råde. In its barely 200 pages it covers the fundamentals of mathematics: set theory, algebra, number theory, the details of geometry and trigonometry, functions and even has a section on programming and numerical analysis, together with probability and statistics. It's God's gift to any budding science geek.

All rights reserved. This book, or parts of it, may not be reproduced in any form without written permission of the publisher.

It's not an open educational resource. The knowledge it contains is certainly available: I've used it many times, but the actual resource is not. Which is a shame, because it's tremendously useful.

But in a twist of fate, the knowledge that this book has given me I have been able to use successfully in creating open educational resources. I would even hazard a guess that as much as 100% of the open educational resources which exist today have some basis in non-open educational resources.

I don't think we're yet at the point where someone can grow up without ever touching non-open educational resources. We're just not there yet. We're still bootstrapping.

The portion of open educational resources relative to non-open resources has certainly increased, and it increases by the day. It's not a slow shift. It's shifting fast. But the amount of work it takes to actually get open educational resources into all walks of life is simply so significant that it'll take us a while to get there, even if we're caravanning down the track faster than we ever have before.

In free software, the situation was much the same, and it's a story which has repeated itself several times over the last 20-30 years. Writing a free software compiler for a new computer may involve using a proprietary compiler the first time you do it. Eventually, the free software compiler reaches a stage where it can compile itself, but it's only from that point on that you can truly use only free software.

Reality is of course a bit more complicated, and bootstrapping a compiler often involves several different steps of increasing complexity and sophistication. You can also try compiling the software by hand, use a cross-compiler, or resort to any number of other tricks, each with their own pros and cons.

When you're in the bootstrapping phase, you often need to make tough decisions. You can take the long road, or the shortcut. Which one you choose depends on a lot of aspects, but both will ultimately get you to where you're going.

If you want to go to the top of the Eiffel Tower to join the meeting of the "No Elevators in My Back Yard" group, you can decide whether to take the stairs or the elevator. You may be inclined to take the stairs, but you'd be sacrificing some more of your time if you do, and the meeting may be over when you get there. Or you take the elevator, sacrificing your philosophy of "No Elevators", but are able to join the meeting and eventually succeed in lobbying for elevator liberation legislation across all of Europe.

The analogy can only stretch so far, but the point, as you may imagine, is that you need to be very aware of what the goal of any activity is, and act according to what makes you most efficient in reaching that goal.

On a macro level, free and open source software is still bootstrapping. We face proprietary software almost on a daily basis, even as the number of free and open source software packages increases. There's still work to do.

Question: Is it legitimate to use proprietary software to further free and open source software?

If that is indeed your goal, then yes. The reason this sometimes becomes problematic is that we're lazy. There are situations where we end up using proprietary software for no good reason: we should be watchful of such situations and try to replace what proprietary software we use with free and open source software.

Over time, the portion of free and open source software will grow, and the use of proprietary software will diminish into nothing. Our actions in the end will not be judged on whether we used 20% or 3% proprietary software to get there, but whether it took us 5 or 50 years to get there.

Monday, 30 May 2016

FSFE summit: About Logo and PR (update)

English Planet – Dreierlei | 12:19, Monday, 30 May 2016

<figure class="wp-caption alignright" id="attachment_1463" style="width: 300px">FSFE summit<figcaption class="wp-caption-text">Logo of #FSFEsummit (get the source)</figcaption></figure>Around four weeks have passed since the start of the Call for Participation for this year’s FSFE summit, the first of its kind. Due to lack of time, we first started the call and only later made up our minds about the appropriate PR to use for it. On the other hand, this way we have been able to try out different concepts and designs, slogans and appetizers over the past weeks.

update: Thanks to Elio Qoshi, our engaged designer, who updated our logo in the meantime to be more balanced and modern. You can see the newest version on top of this blogpost.

Fortunately, in the meantime Elio Qoshi, an experienced designer for various Free Software projects, has designed a logo for the summit as well as a first slide to use for social media. Even better, he will go on doing more designs for the FSFE summit in the upcoming weeks.

This blogpost shall shed some light on the PR and media we have used so far for the FSFE summit and (partly) keep on using. In the top right corner you see the logo by Elio Qoshi that we have been using since mid-May and that we will keep using. Obviously, it is inspired by the FSFE logo and therefore makes a perfect fit for all upcoming publications for the FSFE summit.
Next is the initial slide by Elio Qoshi, CC-BY-SA 4.0


The quote on this slide is taken out of the introduction for the Call for Participation, which in turn has been established as resource number one for all written communication in the last weeks:

Imagine a European Union that builds its IT infrastructure on Free Software. Imagine European Member States that exchange information in Open Standards and share their software. Imagine municipalities and city councils that benefit from decentralized and collaborative software under free licenses. Imagine no European is any longer forced to use non-Free Software.

The Hashtag we use is: #FSFEsummit

Below you find more material and pictures in chronological order that I came up with using my non-existent design talents : )

<figure class="wp-caption aligncenter" id="attachment_1371" style="width: 300px">Author: Erik Albers License: CC0 Based on original source by geralt<figcaption class="wp-caption-text">Author: Erik Albers
License: CC0
Based on original source by geralt</figcaption></figure>

<figure class="wp-caption aligncenter" id="attachment_1372" style="width: 300px">Author: Erik Albers License: CC0 Based on original source by unsplash<figcaption class="wp-caption-text">Author: Erik Albers
License: CC0
Based on original source by unsplash</figcaption></figure> <figure class="wp-caption aligncenter" id="attachment_1370" style="width: 238px">Author: Erik Albers License: CC-BY-SA 2.0 Based on original source by Eva Rinaldi<figcaption class="wp-caption-text">Author: Erik Albers
License: CC-BY-SA 2.0
Based on original source by Eva Rinaldi</figcaption></figure> <figure class="wp-caption aligncenter" id="attachment_1373" style="width: 300px">Author: Erik Albers License: CC0 Based on own work<figcaption class="wp-caption-text">Author: Erik Albers
License: CC0
Based on own work</figcaption></figure> <figure class="wp-caption aligncenter" id="attachment_1374" style="width: 300px">Author: Erik Albers License: CC0 Based on original source by blickpixel<figcaption class="wp-caption-text">Author: Erik Albers
License: CC0
Based on original source by blickpixel</figcaption></figure>

The State of Free Software

free software - Bits of Freedom | 08:53, Monday, 30 May 2016

The State of Free Software

[This blog post is in part taken from a transcript of my presentation at FOSS North 2016 and should be read in that context, some of it relating back to previous postings on this blog]

A critical look at our progress

Allow me to take you back in time. Not very far, just to 2015. This was the year when as many as 78% of companies surveyed by Black Duck Software reported running parts or all of their operations on free and open source software. That's a tremendous penetration of free and open source software. It's also easy to be blinded by such a number.

The same year, 80-85% of all software on Github was proprietary, or at least lacking any sort of indication of being free and open. The number of proprietary repositories on Github, and elsewhere, is increasing.

We're facing a future where more and more companies use and develop free and open source software, while the global portion of free and open source software keeps getting smaller.

I'm not going to lie to you: this is all statistics. And of course, as such, it has its flaws: depending on what I wanted to present to you, I could cherry-pick the numbers I felt relevant to my point. And if I polled a group of IT professionals asking them whether they or someone they know is using free and open source software, I'd probably get an astonishingly high number and be very happy about that.

But that number would mean very little about the prevalence of free and open source software worldwide: this is the majority illusion. Current discourse tends to favour David over Goliath, leading to the impression that everyone's a David, even if most of us would be sporting a Goliath costume. It's a bit like teenagers and alcohol: everyone thinks everyone else is doing it. And so it is with free and open source software too. Everyone else must be doing it.

And we are in a situation where companies like Sony have been using free and open source software in their TVs since 2003. The Swedish electronics retailer Netonnet has 23 TV models from Sony. 20 of them, 87%, run free and open source software. If you walk into an electronics retailer to buy a TV, there's a very high chance of you walking out of there with free and open source software.

What I'm ignoring in this, to make the point, is of course that a lot of the TVs you walk out of Netonnet with may contain free and open source software, but they also contain a whole bunch of other things. It almost seems as if a lot of the electronics you buy today will have this warning somewhere in their documentation:

WARNING, may contain trace elements of free and open source software.

This device's firmware has been compiled with a compiler which also compiles free and open source software.

Free and open source software is, indeed, everywhere. But so is proprietary software. We shall not downplay the success of free and open source software, but let's be realistic: there's a lot more we can and should be doing.

Following the supply chain

Over the past number of years, we've seen the rise of the term supply chain compliance. This isn't specific to software: far from it. Think H&M, Gap or IKEA sourcing clothes and furniture from sweatshops using child labour in Uzbekistan, Bangladesh and Turkey. Supply chain compliance is about managing environmental, legal and social risks of raw material, be they clothes or furniture or software.

Software is the raw material on which we build our future. Marc Andreessen has been talking about how software is eating the world; Larry Lessig was talking many years ago about how code is law. Software is all around us, it influences and controls.

And for this, and many other reasons, if I were to put free and open source software into a TV to be sold to millions of homes, I'd like to have some idea of where that software actually came from, to understand the security and licensing implications. So we follow the supply chain of our TV, from hardware to software. We follow it from Netonnet to Sony's manufacturing plant in Slovakia, back to Sony's headquarters and R&D facility in Japan, and to its parts suppliers in China and Taiwan.

And we can follow the software components back to where they come from. A lot of the time, for embedded devices, this takes us back to Sweden, where Daniel Stenberg is developing curl.

If we are to, as it happens, “manage the environmental, legal and social risks of raw material”, we need to know something about it. We need to know where it comes from. And so, tightly connected with the idea of supply chain compliance is the idea of software provenance: a term borrowed from the art world for the “chronology of ownership, custody and location of an object”, in our case software.

In May 2015, the Software Heritage project set out to archive and index all known free and open source software to date, owing to the understanding that software is also a part of our cultural heritage. Last month, they had indexed about 2.2 billion unique files from Debian, GitHub and the GNU project. Others build similar archives, for various purposes, but the common denominator is: if we want to know where something comes from, we need a database of information against which we can match the source code we're looking for.
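
The matching itself relies on intrinsic, content-derived identifiers. As far as I understand, Software Heritage's per-file identifiers are compatible with git's blob hashes (an assumption worth checking against their documentation), so computing one needs nothing more than sha1sum. The empty file is used below because its identifier is well known:

```shell
# Compute the git-style blob identifier for a file: sha1 over the
# header "blob <size>\0" followed by the file's contents. An archive
# can index every file it holds under exactly this kind of hash.
: > empty.txt
size=$(wc -c < empty.txt | tr -d ' ')
id=$(printf 'blob %s\0' "$size" | cat - empty.txt | sha1sum | cut -d' ' -f1)
echo "$id"   # prints e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```

Looking up that identifier in an archive tells you whether, and where, that exact content has been seen before.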

The Linux Foundation's working group on SPDX (Software Package Data Exchange) also contributes its piece of the puzzle: a specification for a common language for conveying license and component metadata for software packages.

We're facing an increased awareness of the need for provenance information, not only in software, but also for any type of digital work. The European Commission has signaled that it, for its part, is highly interested in the idea of registries for digital works. And while the motivation for this is sometimes a bit sinister: being able to detect and punish copyright infringement, the need is no less there.

If we find a piece of software on the Internet, or a snippet of music, or a photograph (think about the work I did a while back), and we want to know where it comes from, we currently do not have the technology in place which allows us to do that, for any purpose, with any reasonable success. And if music, photographs and software are all part of our cultural heritage, not having this information means we're missing out.

Everyone and their grandma is pushing for compliance, sometimes to the point where one could wonder if the scaremongering about free software licensing compliance from our own community isn't just as bad as the FUD (fear, uncertainty and doubt) spread by software freedom antagonists.

If you're a single developer, listening to the FUD of the antagonists, and the scaremongering of the protagonists, it's perhaps not so strange to imagine you may do just what James Governor expressed in 2012: fuck the license and governance, just commit to github.

It's a strange irony that licenses, registries and metadata standards, which are built with the aim to simplify, can be seen as obstructive to development. But for the two sides to meet, the whole process must be as seamless as creating a repository on GitHub.

We need automation and tooling for licensing and provenance information, and we need it now.
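
To make the point concrete, here is a deliberately toy sketch of what the very simplest such tooling looks like: each file carries an SPDX short-form identifier in a comment, and a scanner tallies the licenses in a tree. The file names and license choices below are invented for illustration; real scanners (FOSSology or ScanCode, for example) do far more than this grep.

```shell
# Create a tiny source tree whose files are tagged with SPDX
# short-form identifiers, then scan it and summarise the licenses.
mkdir -p src
printf '#!/bin/sh\n# SPDX-License-Identifier: GPL-2.0-or-later\n' > src/a.sh
printf '#!/bin/sh\n# SPDX-License-Identifier: MIT\n' > src/b.sh

# Extract every identifier and count how often each license appears.
grep -rhoE 'SPDX-License-Identifier: [A-Za-z0-9.+-]+' src \
  | sed 's/^.*: //' | sort | uniq -c
```

The value of a machine-readable convention is exactly this: once the metadata is in the files, the tooling around it becomes trivial to build.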

Support structures for free software

In the last half year, no less than three separate initiatives have sprung up in Europe focusing on providing support services and fiscal sponsorship to free and open source software projects. The basic premise is that administration is boring, but someone needs to do it. There are organisations today, such as Software in the Public Interest (SPI), Software Freedom Conservancy (SFC) and the Apache Software Foundation (ASF) who provide a legal home and some services to free software projects.

Independent of each other, at least three different groups have felt the need to do similar work in Europe, and they're all doing it rather differently. The FSFE has a role as an advisor for some of these, and I'll tell you briefly about them.

In the UK, there is some excitement around the possibilities of a relatively recent organisational form called a Community Interest Company. What's special about such entities is that they are for-profit companies with a social objective: each CIC designates in its bylaws the specific community it is supposed to serve, such as the free software community.

Any income generated from its activity may only be used to serve that community, and any assets held, copyright or otherwise, are locked to the specific community. Once a CIC becomes the legal holder of copyright for some software, or funding related to a project, those assets are locked to serve the free software community and may not by law be given elsewhere.

One CIC is in the formative stages and will soon be able to provide some basic services for each of its member projects, but it will also make additional services available as the need arises. If a member project wants to register a trademark, as one example, the CIC will help broker the connections and will have service providers for most common functions, but the project will bear the full cost of the registration.

Across the channel, in the Netherlands, sits the Dutch foundation NLnet, from whose offices The Commons Conservancy comes. The Commons Conservancy is an organisational hypervisor: the idea is that member projects know themselves what governance structure is most suitable to them, how they should collect and spend money, and so on. The hypervisor does not engage in the day to day operations of the project in any way.

Should the project fail in adhering to its own principles, the hypervisor can step in with some corrective action. Ultimately though, the hypervisor will stay out of the way. If the project decides that it wants to have a governance board consisting of representatives of the world's Fortune 500 companies, that's just as valid as having a benevolent dictator for life. What the hypervisor cares about is that any decision regarding the direction has been taken in a way which is consistent with the guidelines and processes already established by the project.

Each of these ideas, a CIC and the Commons Conservancy, as well as others, will serve a need for some projects to have an organisational home. Their challenge is to avoid being the graveyard of projects which failed to gain traction. But in terms of new developments, this is something that's bound to come up even more during the rest of the year.

The opportunity of a single market

An opportunity which lies ahead of us is the Digital Single Market (DSM), a strategy aimed at bringing down regulatory barriers between the 28 different national markets.

According to the Commission, a true Digital Single Market (DSM) can be achieved by taking the following actions, and I quote:

  • Digitalising industry to a smart industrial system so that all industrial sectors should be able to integrate new technologies.
  • Developing standards and interoperability in areas such as the "Internet of Things", cybersecurity, big data, and "cloud".
  • Launching initiatives on the "free flow of data" and a European Open Science Cloud as a catalyst for growth, innovation and digitisation.
  • Unlocking the benefits of high-quality e-services and advancing digital skills.

This is a significant investment from the Commission in terms of time, political clout and actual resources. The Research Executive Agency, which is funding research based in part on the priorities of the European Commission and Parliament, is investing a lot of hard cash into technologies that can be seen as meeting parts of those actions.

A lot of political initiatives that relate to free software are also part of the DSM: the different implementations of copyright law in different states have been seen as an obstacle to a single market, and the legislative proposals for copyright reform that are being discussed should be seen in the light of bringing down regulatory barriers for a single market.

The DSM also highlights the importance of legal certainty for text and data mining, but doesn't offer any concrete ideas about the direction of this. Still, any legislation which deals with text and data mining should be seen in the light of a single market. It's not about text and data mining per se, but about bringing down regulatory barriers.

But this is the opportunity: by anchoring our ideas and activities to the DSM, we can gain some traction for those activities, and it opens up some doors that would otherwise be closed. The DSM is one of the Commission's children. Like any parent, they love to talk about it.

Legal developments in free software

A few months ago, I had the pleasure of listening to a legal expert talk about a subject with the caveat: I'm not even sure I agree with what I'm saying. Mind you, this is specific to US copyright law, but it's nonetheless interesting to reflect on.

US copyright law defines a copyrightable work as something which has been fixed at any particular time. Where a work has been prepared in different versions, each version constitutes a separate Work.

However, case law in the US dictates that separate copyrights are not separate Works UNLESS each Work has an independent economic value in itself.

In free and open source software, we have the concept of a derivative work, where each addition or change from a contributor creates a new Work. Each derivative forms a new Work and each derivative of this forms a new Work, and so on. US case law, again, however, says that a derivative work is what occurs when independent expression added to an existing work effectively creates a new work for a different market.

Indeed, even Google argued last year in a case that a definition of a derivative work that implies the creation of a new work based on a very small level of creativity or originality would “make swiss cheese of copyrights.”

We haven't seen this in software, in this way, but we know it from the film industry. There have been cases where actors have agreed to portray a character in a film, only to have the producer dub over their voices to make their characters say and do something completely different from what the actor intended and agreed to.

Courts have ruled that the individual contributions of the actors can not have an independent value in itself, and as such, they do not enjoy copyright protection as separate works.

In 16 Casa Duse, LLC v. Merkin in 2015, the Second Circuit ruled that:

May a contributor to a creative work whose contributions are inseparable from, and integrated into, the work maintain a copyright interest in his or her contributions alone? We conclude that, at least on the facts of the present case, he or she may not.

In cases in which none of the multiple-author scenarios specifically identified by the Copyright Act applies, but multiple individuals lay claim to the copyright in a single work, the dispositive inquiry is which of the putative authors is the “dominant author.”

[...] Deciding who is the “dominant author” includes the parties' relative control over what changes are made and what is included in a work (the decision-making authority)

You can imagine what this can mean, if you make this claim about the Linux kernel: if Linus Torvalds is the decision-making authority, deciding what changes are made and what is included in the Linux kernel, then by some arguments, Linus could be seen as the dominant author and thus the single copyright owner of the work.

It's worth pointing out this is at best speculative, and may only apply, even if it were to apply, to US copyright law. But it's an interesting reflection on copyright law and the complexities thereof as it relates to software.

The case of centralisation

Moxie Marlinspike and his Open Whisper Systems have met some criticism over their Signal product. Signal is a tool for secure messaging between mobile devices. The criticism is that Signal is built on a centralised platform. This was fueled even further by the idea that LibreSignal, an independent build of Signal, would not be able to federate and talk to the Signal servers.

In a response to this critique, Moxie wrote about how he feels that innovation can not happen as quickly and easily as needs be with federated and decentralised structures. To prove his point, he argued that the premise that the internet could not have gotten to where it is without interoperable and federated protocols is false:

We got to the first production version of IP, and have been trying for the past 20 years to switch to a second production version of IP with limited success. We got to HTTP version 1.1 in 1997, and have been stuck there until now. Likewise, SMTP, IRC, DNS, XMPP, are all similarly frozen in time circa the late 1990s. That's how far the internet got. It got to the late 90s.

I don't necessarily agree with Moxie, and I think you may not either. But Moxie has a point in that federated structures are more difficult to achieve. Historically, they have only developed after considerable investment into non-federated structures: think telephones, train lines, electricity grids, water and waste management, and so on.

The Internet of the late 90s was built on knowledge of the success and failures of thousands of non-federated networks around the world. There's no reason to think this will not happen again, and that encrypted personal communication is not an area where such a change is ripe to happen.

But we do not always have direct control over the twists and turns of technological progress. And I do say progress, with the implied meaning that a free and open source software federated structure for encrypted communication would be progress beyond what we have today.

Until we have that in place, we must do what we can, in each of our ways, to work politically, socially and technologically for free and open source software. And sometimes, as it relates to technologies we all use and depend on every day, we may find it useful to remind people, in a tongue-in-cheek manner, that There is No Cloud – only other people's computers.

Sunday, 29 May 2016

Beosound 9000 – IR receiver repair

Handhelds, Linux and Heroes | 23:00, Sunday, 29 May 2016

I really like the design of most classic Bang & Olufsen stuff – but my favorite one is the Beosound 9000. Since I pretty much like to understand how all the technical devices we are surrounded with work, I usually take apart more or less everything in order to find out how it works and how to fix it… so I decided to buy one to fix instead of getting a working one I might break. I did some “training” with other classic CD players before I bought an old, non-working Beosound 9000 to see if I could fix it. I really admire the design, but it did not take long till I started to admire the engineers even more… it must have been quite a tough job not only to make it work at all but to make it possible to produce it in quantities.

First of all: there is a service manual for it which helps a lot with common issues and with disassembling the device without breaking anything. B&O generally does a good job releasing service manuals for its devices. One important piece of information from the manual is that without speakers or headphones connected the 9000 enters ‘mode 0’, which means that it does not take any input from the remote control. So I connected speakers and tried to enter service mode and change the audio mode, without success… from some forum posts I learned that the IR receiver PCB14 fails in some cases. Since I did not want to find a replacement I tried to fix it… and I had a lot of fun:

First I found out that PCB14 really does not supply any data to the controller board – it is connected with just three pins (ground, +5V and data) and there was no traffic on the data line at all. Unluckily the service manual does not contain a schematic of PCB14. So I started to find out how it is supposed to work… the receiver is built around a Vishay U2506B IR receiver – so I thought it would be easy to find out more using the data sheet of that one but even Vishay and its distributors do not seem to have one.  In the end I used a broken Beolink 5000 two-way infrared remote control to find out about the signals on the PCB. It uses a similar design and the same controller.

It turned out that it was only a single broken capacitor – it is marked in the image below.


This shows the PCB with the replacement capacitor. It is a good idea to use a mechanically smaller replacement than I did because a long capacitor case has some potential to hit the CD drive sledge when entering position 6.

After replacing the capacitor the IR receiver started to work again. It was necessary to change to mode 1 manually before I was able to enter service mode. The easiest way is to use a Beolink 1000, since it has a ‘Sound’ button for this purpose. Just press <Sound> <1> <Store>.

I have some more information about repairing this magnificent Beosound 9000… but that’s something for the next post.

Note to my readers: Please remind me to blog a little bit more frequently :-)

Friday, 27 May 2016

Thank you for FOSS North 2016!

free software - Bits of Freedom | 14:16, Friday, 27 May 2016

This week, I've had the pleasure of being in Gothenburg for the first annual FOSS North conference. Johan Thelin, one of the main organisers, was already speaking about a repeat performance next year, which I'm very much looking forward to.

The event was visited by just over 100 people, most of whom came from the area and worked with, or had a strong personal interest in, free and open source software. My principal contribution of the day was a talk in the afternoon about the state of free software in Europe (and elsewhere), which I will post more about on Monday.

Other speakers included FSFE's vice president Alessandro Rubini, who held a much-appreciated talk about time, and FSFE's fellowship representative Mirko Boehm, who spoke about OIN. Anders Arnholm spoke about software craftsmanship, we got a peek at the new logo and roadmap of the curl project from Daniel Stenberg, and Alexandra Leisse spoke about the user experience of complexity. The day was filled with interesting topics and discussions which continued into the evening.

For myself, I also got the chance, for the first time in many years, to put up an FSFE booth at the conference. This led to a number of interesting connections and discussions with the participants, and at times the FSFE table became the gathering point for most of the participating Fellows and volunteers during the breaks. A special thanks to Sebastian Hörberg, who helped keep an eye on things.

From a booth participation point of view, it was also the first time I traveled with a "light" booth setup with limited merchandise and information material. In fact, aside from some pins and keychains, the only real swag I brought with me was our NoCloud t-shirts in two colors.

I also used an exhibition stand which folds into an (oversized) case, which made transport extremely convenient: in one single case I carried all the swag, material for the booth, a roll-up and the exhibition stand itself. And there was plenty of room to spare! It's definitely something I will do again, and I can see the FSFE participating in more events in Sweden in the future (and elsewhere; I can easily take it on a flight).

Here are a few photos of the booth setup. In the last one you can imagine how the curved surface on the outside (if you take away the FSFE banner) is actually hiding the storage case itself. You just take away the top, remove the banner, and then fold it together.

Running a Hackerspace

nikos.roussos - opensource | 03:51, Friday, 27 May 2016

I wrote parts of this post after our last monthly assembly at Athens Hackerspace. Most of the hackerspace operators deal with this monthly meeting routinely, and we often forget what we have achieved during the last 5 years and how many great things this physical space has made possible. But this post is not about our hackerspace. It's an effort to distance myself and try to write about the experience of running a hackerspace.


Yes, it's a community

The kind of people a space attracts is the kind of people it "wants" to attract. That sounds kind of odd, right? How can a physical space want anything? At some point (the sooner the better) the people planning to open and run a hackerspace should realize that they shape the form of the community that will occupy and utilize the space. They are already a community even before they start paying the rent. But a community is not a random group of people who just happen to be in the same place. They are driven by the same vision, common goals, similar means, etc. Physical spaces don't have a vision. A community does. And that's a common struggle and misconception that I came across so many times. You can't build a hackerspace with a random group of people. You need to build a community first. And to do so you need to define that common vision beforehand. We did that. Our community is not just the space operators. It's everyone who embraces our vision and occupies the space.

Yes, it's political

There is a guilt behind every attempt to go political. Beyond the dominant apolitical newspeak that surrounds us and the attempt to find affiliations in anything political, there is still space to define yourself. It's not necessarily disruptive. After all it's just a drop in the ocean. But this drop is an autonomous zone where a collective group deliberately constructs a continuous situation in which we challenge the status quo. Being not for profit is political. Choosing to change the world one bit at a time, instead of running another seed round, is political. Going open source and re-shaping the way we code, we manufacture, we share, we produce and in the end the way we build our own means of production, is political. Don't hurry to label it. Let it be for now. But it's a choice. Many spaces have chosen otherwise, operating as tech shops or as event hosts for marketing presentations around new commercial technologies and products, or even running as for-profit companies, declaring no political intention. These choices are also political. Acceptance comes after denial.

Rules vs Principles

You'll be tempted to define many ground rules on how you want things to operate. Well, I have two pieces of advice. Never establish a rule for a problem that has not yet emerged. You'll create bigger frictions than whatever problem you are trying to solve. Always prefer principles over rules. You don't need to over-specify things. Given the trust between the people of a hackerspace there is always common sense on how a principle applies.

Consensus vs Voting

All hackerspaces should have an assembly of some form to make decisions. Try to reach consensus through discussion and arguments. There will be cases where it is hard to reach a unanimous decision on a controversial matter. Objections should be backed with arguments, otherwise they should be disregarded. Voting should always be the last resort. Remember, the prospect of a vote at the end of a discussion kills many good arguments in the process. Consensus doesn't mean unanimity.


Some call it lazy consensus. If you have an idea for a project you don't need permission. Don't wait for someone else to organize things for you. Just reach out to the people who are interested in your idea and start hacking.

Code of conduct

You'll find many approaches here. We decided to keep it simple and, most importantly, to stick to positive language. Describe what is accepted behavior inside your community, instead of stating all the behaviors you find wrong (you'll miss something). Emphasize excellence over Wheaton's Law. "Be polite to everyone. Respect all people that you meet at the space, as well as the space itself." is what we wrote on our frontpage. It may not be stated explicitly, but any form of discrimination is not accepted behavior. Being excellent to everyone means that you accept the fact that all people are equal. Regardless of nationality (whatever that means) or sexual orientation, you should be polite to all people.


Hackability

This is my favorite word when it comes to hackerspaces. I'm sure most people reading this are familiar with Free Software and its four-freedoms definition. Let me remind you of one of the freedoms:

The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.

Something that usually escapes the attention of many people is that the availability of source code is not the important thing here. The important thing is the freedom to study and change. Source code availability is a prerequisite for achieving that freedom.

The same happens with hackability. Remember the hackerspace definition as it stands on the wiki:

Hackerspaces are community-operated physical places, where people share their interest in tinkering with technology, meet and work on their projects, and learn from each other.

So again, the important thing here is that you tinker with and hack things. Many people have misinterpreted this, thinking that since there is no mention of Open Source or Free Software in that definition, these things are not important. Again, these are the prerequisites. In order to hack something, you must be granted the freedom to study and change it. Access to the source code is a prerequisite for this. For those who prefer graphical representations:


Mind the "principles" next to Free Software, since we are not just talking about software here. This also applies to hardware (hack beaglebones, not makey makey), data (hack OpenStreetMap, not Google Maps), content (hack Wikipedia, not Facebook) and of course software again (teach Inkscape, not Illustrator).

Sharing your knowledge around a specific technology or tool freely is not enough. Actually this notion is often used and more often abused to justify teaching things that nobody can hack. You are a hackerspace, act like one. All the things taking place in a hackerspace, from the tiniest piece of code to the most emblematic piece of art, should by definition and by default be hackable.

Remember, do-ocracy

I hope it's obvious after this post that building and running a hackerspace is a collective effort. Find the people who share the same vision as you and build a hackerspace. Don't wait for someone else to do it. Or if you are lucky, join an existing one that already runs on that vision. It's not that hard. After all, the only thing we want to do is change the world. How hard can it be?

Comments and reactions on Diaspora or Twitter

Thursday, 26 May 2016

Road Ahead

English – Björn Schießle's Weblog | 08:02, Thursday, 26 May 2016

CC BY 2.0 by Nicholas A. Tonelli

I just realized that on June 1 it will be exactly four years since I joined ownCloud Inc. That’s a perfect opportunity to look back and to tell you about some upcoming changes. I will never forget how all this got started. It was FOSDEM 2012 when I met Frank; we already knew each other from various Free Software activities. I told him that I was looking for new job opportunities and he told me about ownCloud Inc., the new company around the ownCloud initiative which he had just started together with others. I was immediately sold on the idea of ownCloud, and a few months later I was employee number six at ownCloud Inc.

This was a huge step for me. Before joining ownCloud I worked as a researcher at the University of Stuttgart, so this was the first time I was working as a full-time software engineer on a real-world project. I also hadn’t written any noteworthy PHP code before. But thanks to an awesome community I got into all the new stuff really fast and could speed up my contributions. During the following years I worked on many different aspects of ownCloud, from sharing and file versioning, to the deleted files app, up to a complete re-design of the server-side encryption. I’m especially happy that I could contribute substantial parts to a feature called “Federated Cloud Sharing”, from my point of view one of the most important features for moving ownCloud to the next level. Today it is not only possible to share files across various ownCloud servers but also with other cloud solutions like Pydio.

But the technical part is only a small subset of the great experience I had over the last four years. Working with a great community is just amazing. It is important to note that with community I mean everyone, from co-workers and students to people who contributed great stuff to ownCloud in their spare time. We are all ownCloud, there should be no distinction! We not only worked together in a virtual environment but also met regularly in person at hackathons, various conferences and at the annual ownCloud conference. I met many great people during this time whom I can truly call friends today. I think this explains why ownCloud was never just a random job to me and why I spent substantial parts of my spare time going to conferences, giving talks or helping at booths. ownCloud combined all the important parts for me: People, Free Software, Open Standards and Innovation.

Today I have to announce that I will move on. May 25 was my last working day at the ownCloud company. This is a goodbye and thank you to ownCloud Inc. for all the opportunities the company provided to me. But it is in no way a goodbye to all the people or to ownCloud as a project. I’m sure we will stay in contact! That’s one of many great aspects of Free Software: if it is done right, an initiative is much more than any company which might be involved. Leaving a company doesn’t mean that you have to leave the people and the project behind.

Of course I will continue to work on Free Software and with great communities, and in particular I have no plans to leave the ownCloud community. Actually I hope that I can even re-adjust my Free Software and community focus in the future… Stay tuned.

Wednesday, 25 May 2016

Patch for the CLI password manager “pass”

things i made | 21:47, Wednesday, 25 May 2016

I use Pass to store and synchronize all my passwords.

When I use Pass via SSH on a remote system to retrieve a password, I cannot make use of its clipboard feature. In order to output the password without actually displaying it, I wrote the following patch, which prints the password in red on a red background so that it is unreadable on screen but can still be selected and copied to the clipboard manually:
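
The trick the patch relies on can be shown as a standalone sketch, assuming a terminal with standard ANSI escape support; this is not the patch itself, and "s3cret" merely stands in for a real password:

```shell
# Print a string with red foreground on a red background (ANSI SGR
# codes 31;41): invisible on the terminal, but still selectable and
# pasteable. \033[0m resets the attributes afterwards.
secret="s3cret"
printf '\033[31;41m%s\033[0m\n' "$secret"
```

Selecting the seemingly blank line in the terminal and pasting it elsewhere yields the password.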

Tuesday, 24 May 2016

Is this the end of decentralisation?

fsfe - Bits of Freedom | 09:03, Tuesday, 24 May 2016

I've been sitting on these thoughts for some time, but after not making further progress for a week or two, I'd love to share them with you. You may recall Moxie's blog post about how the software ecosystem is constantly moving and what this means for decentralised services.

Signal, which is developed by Moxie and Open Whisper Systems, is a tool for secure messaging between mobile devices. It has faced criticism because Signal is built on a centralised platform. The criticism was fueled even further by the idea that LibreSignal, an independent build of Signal, would not be able to federate and talk to the Signal servers.

In response to this critique, Moxie wrote about how he feels that innovation cannot happen as quickly and easily as it needs to within federated and decentralised structures. To support his point, he argued that the premise that the internet could not have gotten to where it is without interoperable and federated protocols is false.

We got to the first production version of IP, and have been trying for the past 20 years to switch to a second production version of IP with limited success. We got to HTTP version 1.1 in 1997, and have been stuck there until now. Likewise, SMTP, IRC, DNS, XMPP, are all similarly frozen in time circa the late 1990s. That's how far the internet got. It got to the late 90s. - Moxie Marlinspike

I would postulate that Moxie is right in his reasoning, but that his reasoning misses the larger picture. If I'm right, we're a year or a bit away from a federated structure for secure messaging. And we'd have gotten there thanks to Moxie and his work.

It all has to do with infrastructures.

Infrastructures for communication depend on having a larger user base. The more users you have signing up, the more likely it will be that someone you meet and want to communicate with is using the same communications infrastructure. Once you get a significant portion -- I would estimate some 30-50% -- of a community to use your infrastructure, it will be very difficult for the remaining 50-70% of the community to stay away from using the same infrastructure. You'll automatically attract more users by sheer necessity of communication.

If you have the right users, you can get away with significantly less than 30-50%: you can benefit from the majority illusion, but even that will only take you so far. No one can reasonably expect to develop (and control!) clients which are suitable for everyone's use, and this limits the user base. Open Whisper Systems is nowhere near such a user base, and there's still tremendous growth potential in Signal, but it may soon become difficult to sustain the same growth it has seen to date.

Facebook, to take another popular example, doesn't have the same limit. Not because they have more resources, but because they use a communication technology (the web) which is based on the 90s technology which Moxie finds so troublesome. It's the common denominator for pretty much everyone using the web today, which is what makes it powerful. Despite mobile phones, I'd argue that Facebook became what it is due to them using a communication protocol which was not theirs, but a common standard. Had they enforced control over the clients used to connect, they would probably not have scaled in the way they did.

For communication infrastructure which should scale, we need common standards which everyone can use. For other software, which does not depend on scale, the standards aren't as important: it could be perfectly fine to have just one client for your tax software, as long as that's open source.

But this is also a matter of maturity: when a field is being established, it helps to have control over everything. As it grows, open standards and decentralisation become more important and people start to expect it to scale and grow wider. Nothing is as irritating as having a client for encrypted SMS and not being able to communicate with your friend, just because she happens to use a different program.

When those expectations mount, the need for a proper response will grow, and that response will be a decentralised structure which does not depend on control over individual applications.

But where Moxie and Open Whisper Systems find themselves today is very natural. They are where every new infrastructure starts: in the establishment phase, where several actors, independently of each other, work within their own communities to build similar infrastructures with limited or no need for interoperability or decentralisation.

Our train lines started out the same way, with multiple train companies each building complete tracks with their own stations and trains. Electricity was locally sourced and under the control of the local electrical company. Telegraph lines were point to point and operated individually. As were telephone networks, with a massive number of local telephone companies operating in different communities.

Until they weren't any more. All of these infrastructures are now, to various degrees, federated. You may have multiple providers operating parts of them, but they are all interconnected, because the need for interoperability trumps the need for centralisation.

Moxie is right in his reasoning, and his conclusion is understandable based on where the field is today. And even if, as Moxie puts it:

... at this point it seems that it will have to do.

The ecosystem is moving, as is the environment in which it operates. Put a reminder in your calendar a year from now and revisit the situation then: by that point the ecosystem will have moved, the environment around it will have moved, and I'd be greatly surprised if it hadn't inched closer to a federated structure.

Monday, 23 May 2016

PostBooks, PostgreSQL and talk

fsfe | 17:35, Monday, 23 May 2016

PostBooks 4.9.5 was recently released and the packages for Debian (including jessie-backports), Ubuntu and Fedora have been updated.
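For example, on a Debian jessie system the updated desktop client can be pulled from backports; this sketch assumes the package name postbooks, as used in the Debian archive:

```shell
# One-time setup: add the backports archive to APT's sources, e.g.
#   deb http://httpredir.debian.org/debian jessie-backports main
sudo apt-get update

# Install PostBooks from jessie-backports rather than from stable:
sudo apt-get -t jessie-backports install postbooks
```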

A PostBooks talk is coming up in Rapperswil, Switzerland on Friday, 24 June, at the HSR Hochschule für Technik Rapperswil, at the eastern end of Lake Zurich.

I'll be making a presentation about PostBooks in the business track at 11:00.

Getting started with accounting using free, open source software

If you are not currently using a double-entry accounting system or if you are looking to move to a system that is based on completely free, open source software, please see my comparison of free, open source accounting software.

Free and open source solutions offer significant advantages: flexibility (businesses can choose any programmer to modify the code), and SQL back-ends, multi-user support and multi-currency support as standard. These are all things that proprietary vendors charge extra money for.

Accounting software is the lowest common denominator in the world of business software, so people keen on the success of free and open source software may find that encouraging businesses to adopt one of these solutions is a great way to lay a foundation on which other free software solutions can thrive.

PostBooks new web and mobile front end

xTuple, the team behind Postbooks, has been busy developing a new Web and Mobile front-end for their ERP, CRM and accounting suite, powered by the same PostgreSQL backend as the Linux desktop client.

More help is needed to create official packages of the JavaScript dependencies before the Web and Mobile solution itself can be packaged.
