Planet Fellowship (en)

Wednesday, 04 May 2016

How to campaign for the cause of software freedom

FLOSS – Creative Destruction & Me | 15:03, Wednesday, 04 May 2016


Super secret conspiracy workshop.

Free Software communities produce tons of great software. This software drives innovation and enables everybody to access and use computers, whether or not they can afford new hardware or commercial software. So that’s that, the benefit to society is obvious. Everybody should just get behind it and support it. Right? Well, it is not that easy. Especially when it comes to principles of individual freedom or trade-offs between self-determination and convenience, it is difficult to communicate the message in a way that reaches and activates a wider audience. How can we best explain the difference between Free Software and services available at no cost (except for them spying on you)? Campaigning for software freedom is not easy. However, it is part of the Free Software Foundation Europe’s mission. The FSFE teamed up with the Peng! Collective to learn how to run influential campaigns that promote the cause of Free Software. The Peng! Collective is a Berlin-based group of activists known for their successful and quite subversive campaigns for political causes. And Endocode? Endocode is a sponsor of the Free Software Foundation Europe. We are a sponsor because Free Software is essential to us, both as a company and as members of society. And so here we are.

There are some exciting, courageous and engaging campaigns that focus on communicating complex political goals. The escape helpers campaign leaves the audience conflicted between two choices: being a good human rights activist (driven by ideals and demonstrating solidarity with refugees) and being a good citizen (abiding by the law). Great, because the message is to re-think what is legal versus what is right. The #slamshell performance emotionally demonstrated the risks associated with oil drilling that are normally regarded as marginal.

These campaigns translate abstract, distant risks or worries into concrete, tangible calls to action. By being provocative, they break the mold and reach a wide audience online and through traditional media. They are “cat content for social change”, as our tutors put it. Campaigners are urged to stop preaching or complaining, and to start using positive communication combined with subversive PR work instead. Such messaging needs punchlines, which requires some kind of hyperbole – dadaism, hijacking attention, or provocation.

Campaign development is still a pretty down-to-earth task. Through fact-finding research and the analysis of campaign goals, supporting allies and potential opponents, answers to the four essential questions are narrowed down: What is the change that we want to achieve? How can this change be brought about? Who can make that change we want to see? And who has power over the involved people or groups? Setting campaign goals is often a compromise between achieving big changes locally or small changes “globally”. It helps to envision the impact of the campaign through utopia/dystopia brainstorms: What would a world look like where all campaign goals have been achieved perfectly? What would it look like if everything went horribly wrong? These kinds of mental exercises also help to explain the relevance of the campaign goals and show how the intended change can affect people’s lives. The goals may be perfectly obvious to those already passionate about them, but not to outsiders – a common problem regarding the ethics and ideals of Free Software.

Implementing a campaign involves many standard, by-the-book project management tasks. The individual publicity stunts and activities are the actions that form the campaign timeline. A dilemma specific to the FSFE is that the relevant and influential media – social networks especially – are the kind of centralized proprietary platforms against which we are advocating. However, we learned that it may be possible to play this situation to our advantage :-) Since the FSFE’s goals require some heavy lifting of Free Software lobbying, the campaign timeline extends far into the future. We found ourselves thinking about what to present at conferences a year or more from now. Finalizing the campaign plan involves answering the “classical” question of what time, material and talent is required to perform the tasks, and putting them into a timeline. Often this includes outside help for extra manpower or professional expertise. Notably, those with technical backgrounds tend to rush towards a release, underestimating the lead time required to get there, and the duration of the campaign. This tendency works almost, but not quite, entirely unlike in software projects. Securing and confirming the support of allies and protagonists also takes time.

The planned actions need to be reviewed with a focus group that resembles or at least understands the target audience. This review should confirm that the message conveyed is in fact understandable and makes sense. It is not possible to get a clear answer on whether or not a campaign project needs an ultimate decision maker. The answer depends too much on the composition of the campaign team and the timeline of the project. The necessary communication infrastructure is pretty straightforward – task boards, and instant and asynchronous messaging. Most Free Software groups use those anyway.

After two and a half days of workshop, all 15 participants ended up rather tired. However, we had plenty of fun and learned a lot. Surprisingly, the group came up with a good number of real, usable ideas for activities. Be very afraid :-) The guidance and mentoring by the experienced campaigners from the Peng! Collective helped tremendously. Of course the workshop was merely an exercise in how to develop and run a campaign for software freedom. The bulk of the work is now ahead of us. But we are off to a good start. We are curious where this road will take us.



Sunday, 24 April 2016

LinuxWochen, MiniDebConf Vienna and Linux Presentation Day - fsfe | 06:23, Sunday, 24 April 2016

Over the coming week, there are a vast number of free software events taking place around the world.

I'll be at the LinuxWochen Vienna and MiniDebConf Vienna, the events run over four days from Thursday, 28 April to Sunday, 1 May.

At MiniDebConf Vienna, I'll be giving a talk on Saturday (schedule not finalized yet) about our progress with free Real-Time Communications (RTC) and welcoming 13 new GSoC students (and their mentors) working on this topic under the Debian umbrella.

On Sunday, Iain Learmonth and I will be collaborating on a workshop/demonstration on Software Defined Radio from the perspective of ham radio and the Debian Ham Radio Pure Blend. If you want to be an active participant, an easy way to get involved is to bring an RTL-SDR dongle. It is highly recommended that instead of buying any cheap generic dongle, you buy one with a high quality temperature compensated crystal oscillator (TCXO), such as those promoted by
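Before the workshop, you can check that your dongle is recognised. As a sketch (assuming the rtl-sdr command-line tools are installed on your system):

```shell
# Probe the first RTL-SDR device: prints the tuner type and
# supported gain values, and runs a short sampling test.
# Exits with an error if no supported device is attached.
rtl_test -t
```

Since this requires the actual USB hardware, treat it as a manual check before you leave for the event.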

Saturday, 30 April is also Linux Presentation Day in many places. There is an event in Switzerland organized by the local FSFE group in Basel.

DebConf16 is only a couple of months away now. Registration is still open and the team is keenly looking for additional sponsors. Sponsors are a vital part of such a large event; if your employer or any other organization you know benefits from Debian, please encourage them to contribute.

Monday, 18 April 2016

On our backend work

Weblog | 07:40, Monday, 18 April 2016

Every half year (starting from the beginning of 2016, so it’s fairly recent), we set organisational goals for our staff. These are usually focused on internal structures and procedures which need to be improved in order to make it easier for our volunteers to do the work they do on the local level.

In my mail to our web discussion list a while ago, I hinted at some changes we’ve done to the backend of our work, and I want to elaborate a little bit more on this.

About a month ago, I introduced a new ticket system built on OTRS, which we’ve now started to make use of – at first for processes which only involve staff, but which will eventually expand to touch upon other areas of our work too. The areas where we have implemented this ticket system so far are merchandise orders and internship applications.

To give some background, both of these areas previously depended on mail exchanges. Internship applications, as an example, went to a mailing list on which all staff were subscribed. People would read and comment (occasionally) and one of us would eventually get back to the applicant. We frequently lost track of applications, it was difficult to get an overview, and there were no follow-ups from our side to ensure all applications got a reply.

We’ve now put all internship applications into a specific Queue in our ticket system, and all incoming applications are automatically added there. When an application is added to the ticket system, a confirmation mail is automatically sent to the applicant, letting them know it has been received.

We also manage all communication with the applicant through the ticket system, so everyone from the staff can see who is working on each application (mostly me), and specific tasks can be delegated easily without losing track of anything in the process. This may not sound like much, but it’s already been an excellent help to make sure we don’t miss anything.

Our merchandise orders are now managed similarly. When an order comes in, the ticket system sends an automatic confirmation that it has been received. When a payment arrives, there’s also an automatic confirmation, and we can easily follow up on orders which are not getting paid. We can also manage the communication with the people ordering in a way which is accessible to everyone in our office, so when someone goes on vacation, someone else can easily fill in, follow up on questions or ship merchandise.

Moving forward, I want to implement more of our processes in this ticket system to make our internal work more coherent. What I really like personally about having done this work so far is that it will now be very easy to allow anyone in the FSFE – a volunteer or Fellow – to also access information in the ticket system which is useful for them. We haven’t implemented any processes in the ticket system which include volunteers yet, but I can see us doing so for a lot of the work around events and booths.

Saturday, 16 April 2016

Installing Wallabag 2 on a Shared Web Hosting Service

English – Björn Schießle's Weblog | 21:41, Saturday, 16 April 2016

Wallabag 2.0.1

Wallabag describes itself as a self-hostable application for saving web pages. I have been using Wallabag for quite some time and I really enjoy using it to store my bookmarks, organize them by tags and access them through many different clients, like the web app, the official Android app or the Firefox plug-in.

Yesterday I updated my Wallabag installation to version 2.0.1. The basic installation was quite easy, following the documentation. I had only one problem: I run Wallabag on a shared hosting service, so I couldn’t adjust the Apache configuration to redirect requests to the right sub-directory, as described in the documentation. I solved the problem with a small .htaccess file I added to the root folder:

<IfModule mod_rewrite.c>
    # Rewrite every request that is not already under /web/
    # into the /web/ sub-directory, where Wallabag's front
    # controller lives.
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^links\.schiessle\.org$ [NC]
    RewriteRule !^web/ /web%{REQUEST_URI} [L,NC]
</IfModule>

I also noticed that Wallabag has a “register” button which allows people to create a new account. There is already a feature request to add an option to disable it. Because I don’t want random people to register an account on my Wallabag installation, I disabled it by adding the following additional lines to the .htaccess file:

<FilesMatch ".*register$">
    # Deny access to the registration endpoint for everyone.
    Order Allow,Deny
    Deny from all
</FilesMatch>
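Once both snippets are in place, a quick sanity check (a sketch, using the hostname from the rewrite rule above) is to look at the HTTP status codes from the command line:

```shell
# The rewrite should answer the front page normally (200, or a
# redirect to the login page), while the registration endpoint
# should now return 403 Forbidden.
curl -s -o /dev/null -w "%{http_code}\n" https://links.schiessle.org/
curl -s -o /dev/null -w "%{http_code}\n" https://links.schiessle.org/register
```

These commands hit a live server, so treat them as a manual check rather than an automated test.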

Talk about Instant Messaging with XMPP

Norbert Tretkowski | 05:00, Saturday, 16 April 2016

Last week I gave a talk about Instant Messaging with XMPP at the regulars’ table of our local Linux User Group, mostly focusing on XMPP as an alternative to WhatsApp, Threema, Hangouts, Signal and other smartphone messengers. Back in 2007 I gave a similar talk, but at that time I focused on XMPP as an alternative to ICQ, AIM, MSN and other desktop messengers.

Friday, 15 April 2016

TLS is a yes

English― | 20:11, Friday, 15 April 2016

Let’s Encrypt has left beta and to celebrate, this blog gained TLS support. \o/ If all goes well it’ll become the default, including an HSTS header, so everyone can benefit from improved privacy¹.
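For reference, an HSTS header on Apache is a single directive – a sketch, assuming mod_headers is enabled; the max-age of one year is only an illustrative value:

```apache
# Instruct browsers to use HTTPS only for this site for the
# next year (31536000 seconds). Requires mod_headers.
Header always set Strict-Transport-Security "max-age=31536000"
```

Only enable this once TLS really is the default, since browsers will refuse plain HTTP for the whole max-age period.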

If you want to give it a try before it becomes cool, feel free to direct your browser to

If you’re unfamiliar with Let’s Encrypt, it’s a certificate authority which provides free TLS certificates. It uses an automated process to verify that the certificate’s requestor controls the domain the certificate is for, and takes literally seconds to complete.
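As an example of that automated process, here is a sketch using the certbot client; the domain and webroot path are placeholders, not this blog’s actual setup:

```shell
# Request a certificate for example.com from Let's Encrypt.
# Certbot proves control of the domain by serving a challenge
# file from the given webroot, then stores the issued
# certificate under /etc/letsencrypt/live/example.com/.
certbot certonly --webroot -w /var/www/html -d example.com
```

The whole exchange, from challenge to issued certificate, is what takes only seconds.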

Its sponsors include Mozilla and Google, which means that Let’s Encrypt’s certificate is included in those browsers as well as in many other software packages and operating systems.

With nearly zero cost for getting a widely accepted certificate, another obstacle to an encrypted web is crumbling. And not a moment too soon, since the aforementioned Mozilla and Chrome duo plan to ‘deprecate’ plain HTTP.

If you’re running your own server there’s no excuse not to use TLS, and if your hosting provider doesn’t support it, complain and do it loudly.

¹ Especially paranoid readers surely noticed the site uses third-party widgets but those same readers are expected to know how to install uBlock Origin.

Updating the FSFE’s self conception

Weblog | 14:02, Friday, 15 April 2016

A while ago, I wrote to the FSFE’s web mailing list about a review I’m currently making of our web pages, and in particular the section which explains the organisation (About). One of the documents in this hierarchy which I believe is critical to update is our “Self Conception”.

In principle, the document is good, but in several parts it hasn’t been updated with the work distribution and authority of the various organs, and thus is consistent with neither current practice nor authority. Most importantly, it pre-dates the Fellowship and the position of Executive Director. While the document has seen some smaller updates over the years to at least mention Fellows, it’s largely been unchanged since the 2004 version.

When I read through the document, I identified two critical bugs:

  • it said employees are not part of the decision-making process,
  • it defined employment as a decision of the members.

Neither of these is true: decision making involves anyone who cares to participate within one of our teams, regardless of whether they’re employees, volunteers, Fellows or part of our members. And as for employment, the only employment decided on by the General Assembly is that of the Executive Director.

I’ve now committed a new revision of the self conception with some limited updates, bringing the self conception closer in line with the actual structure (it’s not perfectly aligned, nor will it be for some time). When discussing this in the core team (which includes our members), the comments I received were largely in favor of the changes, with just one proposal to remove the document completely, which seems like it would’ve been a drastic measure to take.

There’s a lot of work still to be done in cleaning up information about the organisation, as well as many formal documents which we’ll eventually need to update (our constitution is not exempt from this; as just one example, it still contains clauses that define national associations, of which we have none in practice).

Watch this space for more to come! :-)

Thursday, 14 April 2016

Course “On the Road to the Free Digital Society” is available in Moodle and IMS Common Cartridge formats now

Vitaly Repin. Software engineer's blog | 04:44, Thursday, 14 April 2016

If you are interested in launching your own instance of Stallman’s course “On the Road to the Free Digital Society”, I have good news for you: I have published the course in Moodle Backup and IMS Common Cartridge formats.

The files are available in the “Download” section of the course website.

I will be happy to hear your feedback about the course!

Don’t forget that we need your help in several areas:

Check the “Support” section of the course website for the details.

Original blog post


Monday, 11 April 2016

Structures and memberships

Weblog | 08:24, Monday, 11 April 2016

In January, I elaborated on the structure of the FSFE, owing to a request from our members to work out a plan for making the FSFE more inclusive and transparent. Since then, we’ve taken some actions towards this, including making a transparency commitment which is consistent with the guidelines of Transparency International Germany. We are waiting for final approval from them, but our understanding is that everything is now fine (aside from a logo we needed to add, which we added last week).

We still have other work to do, but before this, allow me to recap parts of the dialogue leading up to this point. At the FSFE General Assembly in 2015, we had an intensive discussion about the structure of the organisation, which led to Matthias, myself, Erik Albers and our Fellowship representative Nicholas Dietrich working out a proposal for changing the structure of the FSFE.

What we proposed then was to increase the number of members of the association, making it easier for active members of our community to become formal members, as well as setting up a separate Board of Directors.

From the feedback we received from our current members, we developed an understanding that what is critically missing is perhaps not only ways for people to become formal members, but clarity about how people can already become members today – something which we admittedly have not been good at communicating.

It’s also been difficult to see what teams exist in the FSFE, what agency the participants have to act within those teams, and how to get involved in the work of the teams.

We’ve started working on both of these topics, but have not completed them yet. Our transparency commitment is part of this work, as it gives details about our constitution, which elaborates on how to become a member.

This needs to be extended with more information about membership, such as how membership applications are evaluated and how to determine whether becoming a member is the right step for someone. The latter is tricky to formulate: we do not wish to exclude anyone from applying to be a member if they have an interest in shouldering such responsibility.

At the same time, I want to make it clear that formal membership in the FSFE depends on a deep commitment to Free Software. This is what Matthias and I have come up with so far as to what we feel makes a member of the FSFE:

“A member is someone who is strongly committed to Free Software and feels strongly connected with the FSFE. (S)he has the long-term goal to empower people to control technology, and can prove this with past activities. The person wants to take responsibility over the decades to come to make sure the FSFE’s work will benefit Free Software, and to participate in the long-term strategic decision making.

If someone applies, we prefer that the person has met other members before, so we can better assess the person’s motives.”

Our Fellows and volunteers support us in our day-to-day work, financially, through volunteer work or both. Their work, as well as the work of our staff, is made possible by the commitment of the members to secure the organisation’s long-term goals. Being a member is a responsibility, but an important one.

In the next months, I want to:

  • Improve further on the public information on our web pages and elsewhere about the structure of the FSFE, and how membership works,
  • Create a better overview for anyone as to what teams there are in the FSFE, what they work on, and where someone can engage.

Expect more on this in the months to follow.

Friday, 08 April 2016

Urgent - Help until 10 April to influence how 750 million Euro will be spent

Matthias Kirschner's Web log - fsfe | 02:30, Friday, 08 April 2016

We were notified of a very interesting consultation by the European Commission. The European Commission is about to allocate 750 million Euro over the next years to the "future internet", but the really important subjects (like: everything we learned from Edward Snowden) are not on its radar – yet.

However, if we bundle our efforts, that is something that is definitely within reach. At the moment we are told there are only a couple of dozen submissions, mostly from the usual suspects, so your response would (at least on paper) count towards influencing a few million Euro of this budget. It really makes a difference if you submit something, even if it is really short.

Power Infrastructure

For more background you can check out Michiel Leenaars' blog post.

What do you have to do?

Submit your ideas by Sunday 10 April on the European Commission's website.

Do not get distracted by the subtext of the questions. For the first question, it is no problem to just answer something like:

"The internet is very broken at the architecture level, which lies at the basis of mass surveillance and the current security and privacy problems. A significant investment from Europe in better standards and Free Software implementing those standards is needed to fix that."

The second question is trickier, but just add some quick notes – either your positive vision, or your negative one of how things will look if certain issues are not fixed. Possibly repeat that addressing the issues from the previous question is a prerequisite for any improvement of the security crisis between now and then.

The third and fourth questions are the important ones; here you can submit your ideas. You can start your submission by flagging that the top priority is to repair the fundamental design issues of the core of the internet in the post-Snowden era. Possibly add that new funding strategies are needed which are more agile and responsive to grassroots improvements than the large consortia used in other EC projects, in order to better profit from the deep expertise and strong motivation of the Free Software movement and the technical internet community.

Additionally, I suggest uploading papers, articles, former project proposals, analyses of problems, etc. that you have written as additional files and submitting them.

Do not hesitate to forward this information to other people and groups who might have good ideas on how to improve the internet of tomorrow.

Thursday, 07 April 2016

Public schools making MS Office mandatory

Being Fellow #952 of FSFE » English | 11:10, Thursday, 07 April 2016

There was an extensive debate on the German discussion list which addressed a lot of aspects that may be relevant to other European countries. I wanted to provide a summary to encourage exchange of information and experiences across borders.

The trigger was a letter that a school kid brought home, informing the parents that a Windows 10 device with MS Office 2013/2016 will be made mandatory to participate in class.

As outrageous as this sounds to Free Software supporters, I fear that this is becoming common practice throughout Europe and that most parents accept it with a shrug. I’ll be happy for any feedback dispelling or confirming this fear.

Is there a template letter to complain about it?

The original poster asked if there was a template letter for such cases that he could use to inform the school that this practice is not what he expects from a public body.

Wouldn’t it be nice to have such a template, or maybe even a booklet of templates? As English is the most commonly understood language in Europe, it would be best to start with an English version and move on with translations into other languages. In fact, creating a section with sample letters has been on our wish list for years already! Feel free to plunge in!

There are currently two versions of the draft: one and two, both German. (By the way: the FSFE maintains a public Etherpad you can use for such cases.)

As the last post in the discussion so far, Max shared some brief findings from the European Free Software Policy Meeting in Brussels: it is difficult to “convince” in a letter. It is important not to exaggerate, and to point out the benefits for the recipient.

Advocating Free Software or demanding our rights?

It was discussed whether the focus of the letter should be to convince the school that Free Software is a great thing, or rather that they are obliged to grant the minority the right to keep using the systems of their choice.

Some may argue that the majority is using Windows anyway and simply won’t care. Does that entitle a public school to force those who do care to give up their freedom and privacy?

Are we in such a weak position that we have to beg the institutions to let us use Free Software or is there any legal ground where we can claim the right to do so?

Use your right to participate!

Either way, we should make our voice heard more often. During the course of the discussion, Michael encouraged parents to use their right to participate in decision making processes in their kids’ schools.
This process is highly regulated in Germany, and what parents can actually do is limited; but still, they do have a say in school matters. How is this done elsewhere in Europe?

Is this practice even legal?

Public schools force their students/pupils to use a certain operating system with known backdoors, and a certain office suite using a certain cloud software with various kinds of privacy issues, e.g. storing personal data in a different jurisdiction.

Is this practice legal? The answer seems to vary depending on which German federal state you look at. How is it in your area? Do you know any rules or laws that would prohibit this kind of practice?

A while back in Switzerland, an expert group analysed Microsoft’s offer (called live@edu back then) and recommended using Free Software instead, due to privacy and lock-in concerns: data protection law would prohibit the data collection mentioned in the proposed contract.

Proposed analogies

To make the problem more transparent to the recipient of the letter, it was proposed to ask: “What would you say if a teacher forced the kids to come to the gym with a special model of sneakers?”

It was mentioned that a similar practice is accepted, and even the default, when it comes to school books. The schools decide what books will be used in class. Why should it be any different with Software?

“The Chains of Habit Are Too Light To Be Felt Until They Are Too Heavy To Be Broken.”

Source unknown, sometimes used by Warren Buffett

I am grateful to Bernd, who pointed out that these analogies miss a crucial aspect. What shoes I wear will not change the way I run, and I’ll be as fast with a similar pair of shoes as with the ones I was asked to buy for class. A certain schoolbook will not change the way I read, nor my ability to read or understand complex texts in other books.

Software is fundamentally different. Using a certain software program shapes a certain work flow and way of thinking. Learning a certain work flow and getting effective with it takes time and effort (with any software). Almost nobody has the motivation or resources to constantly change the way they get a routine task done, especially not if they are already comfortable with one. Just ask a vim user to use emacs!

The program I use to do my homework will probably be the same one I write my first job applications with. And the file format will most likely be the same, as well as the place where I save them “in the cloud”. Forcing pupils to use proprietary software will push them into the lock-in trap.

Equality of opportunity

or the widening “Rich-Poor Achievement Gap” may be another argument against such practices. What a burden it may be for a poor family to purchase a computer that meets the requirements of Windows 10! They have to buy that computer; there is no way around it. So they will have to relinquish something else, like healthy food or family time, as they have to spend more time at work.

Bad publicity or positive campaigning

One thesis in the discussion was that only bad publicity will make the school at hand reconsider its practice. The FSFE usually tries to follow a different approach. That doesn’t mean we’d ignore bad news and not deal with it. The question is: what will make people change their view? I think it is much more sustainable if people grasp the idea and benefits of Free Software instead of just “being forced to allow it”.

Point out the learning aspect of using Free Software

Geza suggested mentioning the pedagogical angle as well. Free Software offers diversity, allows experimentation with various alternatives (different editors, programming languages, desktop environments) and thus leads to a competent, self-determined and responsible handling of the opportunities available.

Part of the problem is that teachers usually don’t know anything other than MS products themselves, as they’ve been through the same creature-of-habit cycle they are about to push their students into.

Sample lesson with OneNote

Bernd pointed us to a tutorial video showing how OneNote can be used in class, and had to admit that it looks pretty impressive and that there is probably no Free Software alternative which would allow a similar work flow.

Bernd is missing an easy-to-use alternative. Without such alternatives, it is difficult to object (object in the sense of “successfully convincing others”).

Creating a video that starts a thinking process has been on our ToDo list for a while.

Wanted: Showcase of Free Software solutions that are actually being used

It was mentioned that the same thing could be achieved with a list of programs, but it is highly questionable whether this zoo of different applications will ever be used in class.

It is clear that a lot of good stuff can be done with Free Software, but we need to show the interested audience that it is practical as well. We need you! Do you know somebody using Free Software in class who is willing to create a presentation? Do you know of presentations that have been given before and were recorded (preferably under a free licence)?

Are you aware of any educational institution that teaches on/about Free Software?

Teachers-to-be need to see what is possible with Free Software. It needs to be proven that Free Software can deliver exactly what they need.

Not necessarily what they think they need. It’s not my goal to mimic OneNote or other proprietary products. In the end, the work flow in the tutorial wasn’t that smooth either.
DG said: “Pupils may not be nerds, but shouldn’t the school be the place to learn how to use digital tools creatively, without having a company make a product out of one particular use case? Until this is done in school – teaching how to use digital tools meaningfully and creatively – the perception that Free Software is only for nerds will stick.”

I’ll advertise this summary on the English mailing list. Please join the discussion there or drop me a note if you have anything to contribute. Thanks!


Friday, 01 April 2016

Website move

English― | 15:22, Friday, 01 April 2016

Photo of a truck on a road.

(photo Ikiwaner, CC-BY-SA)

Some regular visitors of this web site may be aware that the page used to run on the Jogger platform. Some will also be aware that the service is closing shop, an act which forced me to move to other hosting.

In moving the page, I’ve tried to keep old URLs working, so even though the canonical locations of posts have changed, the old links should result in a correct redirect.

This is also true for feeds, but while Jogger provided customisation options (RSS or Atom, excerpts only, no HTML, post count), currently only a full-content HTML Atom feed limited to the newest ten entries is provided.

If anything broke for you, please do let me know at

I have not yet figured out what to do with comments, which is why commenting is currently unavailable. Since I want my whole page to be completely static, I’m planning on using a third-party widget. So far I’ve narrowed the choice down to HTML Comment Box and the new hotness, Spot.IM. Any suggestions are also welcome.

Graph showing drop in response time from 300 ms to 60 ms

On the bright side, the page now loads five times faster! Jogger took its sweet time generating responses. A static page and the better-optimised infrastructure of my current provider drop the response time from 300 ms to 60 ms.

Saturday, 26 March 2016

Guake Terminal Improvement for Multi-Monitor Setups

English – Björn Schießle's Weblog | 19:03, Saturday, 26 March 2016

Guake Terminal

Guake is a top-down, “Quake-style” terminal. I use it on a daily basis on the Xfce desktop. The only drawback: Guake doesn’t work the way I want on a multi-monitor setup. On such a setup the terminal always opens on the main (left) monitor. But for many people, including myself, the left monitor is the small laptop screen, so they prefer to open the terminal on the secondary (right) monitor. If you search for “Guake multi-monitor” you can find many patches to achieve this behavior.

For me it is not enough that the terminal always starts on the right monitor. I want the terminal to always start on the currently active monitor, the one that contains the mouse pointer. Luckily Guake is written in Python, which makes it quite easy to patch without re-compiling and re-packaging it. Together with the patches already available on the Internet and a short look at the Gtk documentation, I found a solution. To always show the terminal on the currently active monitor, edit /usr/bin/guake and replace the method get_final_window_rect(self) with the following code:

    def get_final_window_rect(self):
        """Gets the final size of the main window of guake. The height
        is the window_height property, width is window_width and the
        horizontal alignment is given by window_alignment."""
        screen = self.window.get_screen()
        height = self.client.get_int(KEY('/general/window_height'))
        width = 100  # always use the full monitor width
        halignment = self.client.get_int(KEY('/general/window_halignment'))
        # get the rectangle from the currently active monitor
        x, y, mods = screen.get_root_window().get_pointer()
        monitor = screen.get_monitor_at_point(x, y)
        window_rect = screen.get_monitor_geometry(monitor)
        total_width = window_rect.width
        window_rect.height = window_rect.height * height / 100
        window_rect.width = window_rect.width * width / 100
        if width < total_width:
            if halignment == ALIGN_CENTER:
                window_rect.x = (total_width - window_rect.width) / 2
                # on a two-monitor setup, shift the rect onto the right monitor
                if monitor == 1:
                    right_window_rect = screen.get_monitor_geometry(0)
                    window_rect.x += right_window_rect.width
            elif halignment == ALIGN_LEFT:
                window_rect.x = 0
            elif halignment == ALIGN_RIGHT:
                window_rect.x = total_width - window_rect.width
        window_rect.y = 0
        return window_rect

This patch is based on Guake 0.4.4. The current stable version is already at 0.8.4 and no longer contains the method shown above. Still, version 0.4.4 is in use on the current Debian stable release (Jessie), so I thought it might be useful for more people than just me.

Friday, 25 March 2016

With Facebook, everybody can betray Jesus - fsfe | 17:43, Friday, 25 March 2016

It's Easter time again and many of those who are Christian will be familiar with the story of the Last Supper and the subsequent betrayal of Jesus by his friend Judas.

If Jesus was around today and didn't immediately die from a heart attack after hearing about the Bishop of Bling (who spent $500,000 just renovating his closet and flew first class to visit the poor in India), how many more of his disciples would betray him and each other by tagging him in selfies on Facebook? Why do people put the short-term indulgence of social media ahead of protecting privacy in the long term? Is this how you treat your friends?

More sandboxing

Colors of Noise - Entries tagged planetfsfe | 12:15, Friday, 25 March 2016

When working on untrusted code or data it's impossible to predict what happens when one does a:

bundle install --path=vendor

or a

npm install

Does this phone home with your private SSH and GPG keys? Does a

evince Downloads/justdownloaded.pdf

try to exploit the PDF viewer? While you can run stuff in separate virtual machines, this can get cumbersome. libvirt-sandbox to the rescue! It allows you to sandbox applications using libvirt's virtualization drivers. It took us a couple of years (the ITP is from 2012) but we finally have it in Debian's NEW queue. When libvirt-sandbox creates a sandbox it uses your root filesystem mounted read-only by default, so you have access to all installed programs (this can be changed with the --root option though). It can use either libvirt's QEMU or LXC drivers. We're using the latter in the examples below:

So in order to make sure the above bundler call has no access to your $HOME you can use:

sudo virt-sandbox \
   -m ram:/tmp=10M \
   -m ram:$HOME=10M \
   -m ram:/var/run/screen=1M \
   -m host-bind:/path/to/your/ruby-stuff=/path/to/your/ruby-stuff \
   -c lxc:/// \
   -S $USER \
   -n rubydev-sandbox \
   -N dhcp,source=default \

This will make your $HOME inaccessible by mounting a tmpfs over it, and it uses separate network, IPC, mount, PID and UTS namespaces, allowing you to invoke bundler with fewer worries. /path/to/your/ruby-stuff is bind-mounted read-write into the sandbox so you can change files there. Bundler can fetch new gems using libvirt's default network connection.

And for the PDF case:

sudo virt-sandbox \
  -m ram:$HOME=10M \
  -m ram:/dev/shm=10M \
  -m host-bind:$HOME/Downloads=$HOME/Downloads \
  -c lxc:/// \
  -S $USER \
  -n evince-sandbox \
  --env="DISPLAY=:0" \
  /usr/bin/evince Downloads/justdownloaded.pdf

Note that the above example shares /tmp with the sandbox in order to give it access to the X11 socket. Better isolation can probably be achieved using xpra or xvnc, but I haven't looked into this yet.

Besides the command line program virt-sandbox there's also the library libvirt-sandbox, which makes it simpler to build new sandboxing applications. We're not yet shipping virt-sandbox-service (a tool to provision sandboxed system services) in the Debian packages since it's RPM-distro specific. Help with porting this to Debian is greatly appreciated.

Wednesday, 23 March 2016

GSoC 2016 opportunities for Voice, Video and Chat Communication - fsfe | 03:55, Wednesday, 23 March 2016

I've advertised a GSoC project under Debian for improving voice, video and chat communication with free software.

Replacing Skype, Viber and WhatsApp is a big task, however, it is quite achievable by breaking it down into small chunks of work. I've been cataloguing many of the key improvements needed to make Free RTC products work together. Many of these chunks are within the scope of a GSoC project.

If you can refer any students, if you would like to help as a mentor or if you are a student, please come and introduce yourself on the FreeRTC mailing list. If additional mentors volunteer, there is a good chance we can have more than one student funded to work on this topic.

The deadline is Friday, 25 March 2016

The student application deadline is 25 March 2016 19:00 UTC. This is a hard deadline for students. Mentors can still join after the deadline, during the phase where student applications are evaluated.

The Google site can be very busy in the hours before the deadline so it is recommended to try and complete the application at least 8 hours before the final deadline.

Action items for students:

  • Register yourself on the Google Site and submit an application. You can submit applications to multiple organizations. For example, if you wish to focus on the DruCall module for Drupal, you can apply to both Debian and Drupal.
  • Join the FreeRTC mailing list and send a message introducing yourself. Tell us which topics you are interested in, which programming languages you are most confident with and which organizations you applied to through the Google site.
  • Create an application wiki page on the Debian wiki. You are permitted to edit the page after the 25 March deadline, so if you are applying at the last minute, just create a basic list of things you will work on and expand it over the following 2-3 days.

Introducing yourself and making a strong application

When completing the application form for Google, the wiki page and writing the email to introduce yourself, consider including the following details:

  • Link to any public profile you have on sites like Github or bug trackers
  • Tell us about your programming language skills, list the top three programming languages you are comfortable with and tell us how many years you have used each
  • other skills you have or courses you have completed
  • any talks you have given at conferences
  • any papers you have had published
  • any conferences you have attended or would like to attend
  • where you are located and where you study, including timezone
  • any work experience you already have
  • any courses, exams or employment commitments you have between 22 May and 24 August
  • anybody from your local free software community or university who may be willing to help as an additional mentor

Further reading

Please also see my other project idea for ham radio / SDR projects, and my blog post Want to be selected for Google Summer of Code 2016?

If you are not selected in 2016

We try to make contact with all students who apply and give some feedback, in particular, we will try to let you know what to do to increase your chances of selection in the next year, 2017. Applying for GSoC and being interviewed by mentors is a great way to practice for applying for other internships and jobs.

Chemnitzer Linuxtage - Misunderstandings, compulsory routers, and new black NoCloud t-shirts

Matthias Kirschner's Web log - fsfe | 00:51, Wednesday, 23 March 2016

Last weekend I was at the Chemnitzer Linuxtage giving a speech and talking with people at the FSFE booth. It was again a well organised event, I got encouraging feedback about my speech, and had interesting talks at the booth.

FSFE booth at Chemnitzer Linuxtage

On Friday I met Max Mehl in Leipzig and we continued on to Chemnitz together. The hotel owner already knew that there was an event about software at the university. During check-in he asked us not to break his wifi. When we came back to the hotel after setting up the FSFE booth at the university, the owner again reminded us that his wifi is "a delicate plantlet" (in German "ein zartes Pflänzchen").

My talk about "Clearing up Free Software misunderstandings" was the first in the morning, at 9am (I will link the German video recordings here as soon as they are online). Despite the early hour the room was almost full, and we had a good discussion after the talk as well as several longer follow-ups at the FSFE booth. People liked the way I resolved the misunderstandings, and the feedback I received encouraged me in my plan to write short articles about them. So you should read more about this talk in the coming weeks.

A lot of the speakers and people from other booth teams are long-time sustaining members of the FSFE. I spent a lot of time catching up with them and other dedicated Free Software contributors, discussing current topics, and hearing what they are currently up to.

While I was catching up with people, Katja, Max, and Reinhard handled lots of questions at the booth, as well as making sure that people could get our t-shirts. Especially our NoCloud t-shirts were quite popular, including the new black version.

NoCloud t-shirt at FSFE booth during Chemnitzer Linuxtage

In the afternoon Max gave a talk about FSFE's work on compulsory routers and what you can learn from it (I will link to the video as soon as it is online). Afterwards, people offered us their future support with technical expertise on router technology, especially DSL and cable. This will be important in the second half of the year, when the internet service providers have to comply with the new law.

At the booth Max also had to answer lots of questions about the EU radio directive. If you know people with knowledge in the radio field, please ask them to give us feedback on our summary.

Besides that, it was good to see that there was a workshop on flashing Libreboot as well as an F-Droid workshop. If you want to help promote F-Droid, feel free to order our F-Droid leaflets.

Thanks to Katja, Reinhard, Max, and Fabian for making our booth a success, to Rico and Peer for helping with the transportation, and a big thanks to the organisers, who did an amazing job organising a great event to educate people about software freedom.

Thank you card on empty table

PS: The wifi of the hotel still worked when we checked out.

Monday, 21 March 2016

Linux and POWER8 microprocessors

Seravo | 08:23, Monday, 21 March 2016


With the enormous amount of data being generated every day, POWER8 was designed specifically to keep up with today's data processing requirements on high-end servers.

POWER8 is a symmetric multiprocessor based on IBM's Power architecture. It is designed specifically for server environments, with faster execution times and a focus on performing well under high server workloads. POWER8 is a very scalable architecture, scaling from 1 to more than 100 CPU cores per server. Google was involved when POWER8 was designed, and it currently uses dual-socket POWER8 system boards internally.

Systems with POWER8 CPUs started shipping in late 2014. CPU clocks range from 2.5 GHz all the way up to 5.0 GHz. It has support for DDR3 and DDR4 memory controllers. Memory support is designed to be future-proof by being as generic as possible.

Open architecture

The design is available for licensing via the OpenPOWER Foundation, mainly to support custom-made processors for use in cloud computing and in applications that process large amounts of scientific data. POWER8 processor specifications and firmware are available under liberal licensing. A collaborative development model is encouraged, and it is already happening.

Linux has full support for POWER8

IBM began submitting patches to the Linux kernel in 2012 to support POWER8 features. Linux has had full support for POWER8 since kernel version 3.8.

Many big Linux distributions, including Debian, Fedora and openSUSE, have installable ISO images available for Power hardware. When it comes to applications, almost all software available for traditional CPU architectures is also available for POWER8. Packages built for it usually carry the architecture name ppc64el/ppc64le, or ppc64 when built for big-endian mode. There is plenty of prebuilt software available for Linux distributions; for example, thousands of Debian packages. Remember to limit the search results to packages for ppc64el to get a better picture of what's available.

While Power hardware is transitioning from big endian to little endian, POWER8 is actually a bi-endian architecture capable of accessing data in both modes. However, most Linux distributions concentrate on little-endian mode, as it has a much wider application ecosystem.
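
Bi-endianness matters to application code mainly when binary data is serialized. As a small illustration (generic Python, not POWER8-specific), a program can check the byte order it runs under and serialize data portably across both modes:

```python
import struct
import sys

# sys.byteorder reports the mode the interpreter runs in:
# "little" on ppc64el/ppc64le builds, "big" on ppc64 builds.
print("native byte order:", sys.byteorder)

# Explicit format prefixes in struct keep serialized data portable
# across both modes: "<" forces little endian, ">" forces big endian,
# regardless of the host CPU.
little = struct.pack("<I", 0x01020304)
big = struct.pack(">I", 0x01020304)
assert little == bytes([0x04, 0x03, 0x02, 0x01])
assert big == bytes([0x01, 0x02, 0x03, 0x04])
```

The same code runs unchanged on either endianness; only the unprefixed (native) struct formats differ between the two modes.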

Future of POWER8

Some years ago it seemed like ARM servers were going to be really popular, but as of today POWER8 seems to be the only viable alternative to the Intel Xeon architecture.

Friday, 18 March 2016

FOSSASIA 2016 at the Science Centre Singapore - fsfe | 07:37, Friday, 18 March 2016

FOSSASIA 2016 is now under way. Debian, Red Hat and Ring (Savoir-Faire Linux) teams are situated beside each other in the exhibit area.

Thanks to Hong Phuc from the FOSSASIA team for helping produce these new Debian banners.

Google Summer of Code 2016

If you are keen to participate in GSoC 2016, please feel free to discuss it at the Debian or Ring tables, or attend any of the sessions run by potential mentors at FOSSASIA this weekend. Please also see Getting selected for GSoC; attending a community event like FOSSASIA is a great way to distinguish yourself from the many applicants who apply each year. (Debian Outreach / GSoC information).

Real-time communications, WebRTC and mobile VoIP at FOSSASIA

There are a number of events involving real-time communications technology throughout FOSSASIA 2016, please come and join us at these:

Event                                                      Time
Hands-on workshop with WebRTC and mobile VoIP (1 hr)       Saturday, 14:30
Talk: Free Communications with Free Software (20 min)      Saturday, 17:40
Ring: a decentralized and secure communication platform    Sunday, 13:00

and the full program details, including locations, are in the schedule.

Wednesday, 16 March 2016

Qabel says themselves - not a Free Software/Open Source License

Matthias Kirschner's Web log - fsfe | 11:55, Wednesday, 16 March 2016

Yesterday I read some articles about Qabel which again talked about "Open Source" or "open licenses", although Qabel does not use a Free Software/Open Source Software license. Dear journalists: their license is a proprietary software license, and it is neither approved by the FSF as a Free Software license nor by the Open Source Initiative as an Open Source Software license.

Spanish Military

After some discussions with them in 2014 Qabel themselves now explains that on their website:

The software "Qabel" is licensed under the QaPL, a specially developed proprietary license. The QaPL can neither be classified according to the standards of the Free Software Foundation (FSF), nor to the standards of the Open Source Initiative (OSI) as "Free Software License" or "Open Source License" respectively.

We have decided to do it that way in order to keep the project going, while at the same time ensuring the greatest possible freedom.

The QaPL does not satisfy the standards of those "Open Source" or "Free Software" licenses, due to our non-commercialization and non-military clauses.

The license says that nobody is allowed to use the software commercially without Qabel's consent—meaning you will have vendor lock-in again—and it restricts who is allowed to use the software.

To explain the problem with the second restriction, Peter Hecko did a podcast with me about the "anti-military clause" (in German), and I wrote a short summary of the main arguments from the podcast:

Otherwise, my main arguments in a nutshell were: “military” is really difficult to define; it is questionable whether someone who kills people would stick to a copyright license, or whether it would help at all if the military were not allowed to use the Free Software. Furthermore I explained that in the Free Software movement—which is a worldwide movement—we have many different value systems. While some values are widely shared, there are others people disagree on. We would end up with hundreds of licenses or license additions. We already have far too many licenses, and such usage restrictions would make it almost impossible to develop software together. By using such restrictions you also make it hard for everybody who wants to do “good” things with software.

US government commits to publish publicly financed software under Free Software licenses

Matthias Kirschner's Web log - fsfe | 05:15, Wednesday, 16 March 2016

At the end of last week, the White House published a draft for a Source Code Policy. The policy requires every public agency to publish its custom-built software as Free Software, for other public agencies as well as the general public to use, study, share and improve. At the Free Software Foundation Europe (FSFE) we believe that the European Union and European member states should implement similar policies. Therefore we are interested in your feedback on the US draft.

The US White House

The Source Code Policy is intended to ensure efficient use of US taxpayers' money and the reuse of existing custom-made software across the public sector. It is expected to reduce vendor lock-in of the public sector and decrease duplicate costs for the same code, which in turn will increase the transparency of public agencies. The custom-built software will also be published to the general public, either as public domain or as Free Software, so others can improve and reuse it.

The policy in general does not require that already existing custom-developed software be retroactively made available as Free Software if it was developed by third-party developers (though this is strongly encouraged to the extent permissible under existing contracts). However, the policy is encouraged to be applied retroactively to existing custom-built software developed by agency employees in the course of their official duties.

As a result, the draft predicts that the policy will contribute to economic growth and innovation, as the public will be able to use and improve the software which was funded with public money.

Until 11 April 2016 the public can comment on the draft. If you have any comments, please send them to FSFE's English discussion list or directly to me. I would like to make sure that we can consider your feedback before talking with European politicians about this topic.

Monday, 14 March 2016

Federated Sharing – What’s new in ownCloud 9.0

English – Björn Schießle's Weblog | 16:03, Monday, 14 March 2016

Privacy, control and freedom have always been among the main reasons to run your own cloud instead of storing your data on a proprietary and centralized service. Only if you run your own cloud service do you know exactly where your data is stored and who can access it. You are in control of your data. But this also introduces a new challenge: if everyone runs their own cloud service, it inevitably becomes harder to share pictures with your friends or to work together on a document. That's the reason why we at ownCloud are working on a feature called Federated Cloud Sharing. The aim of Federated Cloud Sharing is to close this gap by allowing people to connect their clouds and easily share data across different ownCloud installations. For the user it should make no difference whether the recipient is on the same server or not.

What we already had

The first implementation of Federated Cloud Sharing was introduced with ownCloud 8.0. Back then it was mainly an extension of the already existing feature to share a file or folder via a public link. People can create a link and share it with their friends or colleagues. Once they open the link in a browser they will see a button called “Add to your ownCloud” which enables them to mount the share as a WebDAV resource in their own cloud.


With ownCloud 8.1 we moved on and added the Federated Cloud ID as an additional way to initiate a remote share. The nice thing is that it basically works like an email address. Every ownCloud user automatically gets an ID which looks similar to . Since ownCloud 8.2 the user's Federated Cloud ID is shown in the personal settings.


To share a file with a user on a different ownCloud you just need to know their Federated Cloud ID and enter it in the ownCloud share dialog. The next time the recipient logs in to their ownCloud they will get a notification that they have received a new share, and they can decide whether to accept or decline it. To make it easier to remember users' Federated Cloud IDs, the Contacts app allows you to add the ID to your contacts. The share dialog will automatically search the address books to auto-complete Federated Cloud IDs.
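
Since a Federated Cloud ID has the same user@server shape as an email address, splitting it into its parts is straightforward. Here is a minimal sketch (a hypothetical helper, not ownCloud's actual implementation, which also handles ports and sub-paths):

```python
def split_federated_cloud_id(cloud_id):
    """Split 'user@server' into its two parts.

    Splits at the last '@' so user names containing '@' still work.
    """
    user, sep, server = cloud_id.rpartition("@")
    if not sep or not user or not server:
        raise ValueError("not a federated cloud ID: %r" % cloud_id)
    return user, server

# e.g. sharing with bjoern@cloud.example.org (a made-up address)
print(split_federated_cloud_id("bjoern@cloud.example.org"))
# → ('bjoern', 'cloud.example.org')
```

Splitting at the last '@' is the important design choice: the server part of the ID can never contain '@', while the user part might.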

What’s new in ownCloud 9.0

With ownCloud 9.0 we made it even easier to exchange Federated Cloud IDs. Below you can see the administrator settings for the new Federation app, which is enabled by default.


The option “Add server automatically once a federated share was created successfully” is enabled by default. This means that as soon as a user creates a federated share with another ownCloud, either as recipient or sender, ownCloud will add the remote server to the list of trusted ownClouds. Additionally, you can predefine a list of trusted ownClouds. While technically it is possible to use plain HTTP, I want to point out that I really recommend using HTTPS for all federated share operations, to protect your users and their data.

What does it mean for two ownClouds to trust each other? ownCloud 9.0 automatically creates an internal address book which contains all user accounts. If two ownClouds trust each other, they will start to synchronize their system address books. To synchronize the system address books and keep them up to date we use the well-known and widespread CardDAV protocol. After a successful synchronization, ownCloud will know all users from the trusted remote servers, including their Federated Cloud IDs and display names. The share dialog will use this information for auto-completion. This allows you to share files across friendly ownClouds while knowing nothing more than the user's name. ownCloud will automatically find the corresponding Federated Cloud ID and suggest the user as a recipient of your share.

The screenshot of the new Federation app shows a status indicator for each server with three different states: green, yellow and red. Green means that both servers are connected and the address book was synced at least once; in this state auto-completion should work. Yellow means that the initial synchronization is still in progress. Creating a secure connection between two ownCloud servers and syncing the users happens in the background. This can take some time, depending on the background job settings of your ownCloud and the settings of the remote server. If the indicator turns red, something went wrong in a way that can't be fixed automatically, and ownCloud will not try to re-establish a connection to the given server. To reconnect to the remote server you have to remove the server and add it again.

If the auto-add option is enabled, the network of known and trusted ownClouds will expand every time a user on your server establishes a new federated share. The boundaries between local users and remote users will blur. Each user stays in control of their data, stored on their personal cloud, but from a collaborative point of view everything works as smoothly as if all users were on the same server.

What will come next? Of course we don't want to stop here. We will continue to make it as easy as possible to stay in control of your data and at the same time share your files with all the other users and clouds out there. Therefore we are working hard to document and standardize our protocols, and we invite other cloud initiatives to join us in creating a Federation of Clouds, not only across different ownCloud servers but also across otherwise completely different cloud solutions.

John Oliver explains encryption

Matthias Kirschner's Web log - fsfe | 11:56, Monday, 14 March 2016

It is difficult to explain to people outside the tech community why encryption is important, but John Oliver has mastered this challenge.

John Oliver

Photo by Maryanne Ventrice, CC-BY-2.0

Every time there is a brutal crime, politicians again demand government backdoors in encryption software. Realising the consequences of those demands often takes politicians and the public a long time. Encryption helps keep all of us safer from criminals. Government backdoors might benefit security in the short term, but they lead to security nightmares for all of us in the medium and long run.

Demonstrating the implications of backdoors is hard. But in his latest show John Oliver managed it, illustrating the importance of encryption (youtube). Those 18 minutes will most likely help you improve your arguments for encryption, so they are definitely worth your time.

Sunday, 13 March 2016

GoVPN: secure censorship resistant VPN daemon history and implementation decisions

stargrave's blog | 10:13, Sunday, 13 March 2016

This article is about the GoVPN free software daemon: why it was born, what tasks it aims to solve, and a technical overview.

Birth and aims.

There are plenty of protocols and implementations for securing data in transit. If you just want to connect two computers or two networks, you can use TLS, SSH, IPsec, MPPE, OpenVPN, tinc and many others. All of them can provide confidentiality and authenticity of transmitted data, and authentication of both sides.

But I, being an ordinary user, found the lack of strong password authentication very inconvenient. Without strong password-based authentication I always have to carry high-entropy private keys with me. But, being human, I am able to memorize long passphrases that have enough entropy for authenticating myself and establishing a secure channel.

Probably the best-known strong password authentication protocol is Secure Remote Password (SRP). Apart from various JavaScript-based implementations, I know only of the lsh SSHv2 daemon supporting SRP and of GnuTLS supporting TLS-SRP. Replacing OpenSSH with lsh is troublesome, and TLS-SRP must be supported by more than just the underlying library. So in practice SRP can hardly be used in most cases.

My first target: strong password authentication and state-of-art robust cryptography.

Moreover, the next problem is protocol and code complexity. It is the strongest enemy of security, especially of all cryptography-related solutions. TLS is badly designed (remember at least MAC-then-Encrypt) and the most popular library, OpenSSL, is loathed far and wide. OpenSSH gained -etm MAC modes not so long ago. IPsec is a good protocol, but its configuration is not so easy. OpenVPN is a working and relatively simple solution, but it is not aware of modern fast encryption and authentication algorithms. And the codebase of all those projects is big enough that one does not review it, but just trusts it and hopes that no serious bugs will be found. OpenSSL demonstrates that even a huge open-source community is not enough for finding critical bugs.

My second goal: KISS. A small codebase and simple, easily reviewable code and protocol, without unnecessary complexity and without explicit compatibility with previous solutions.

The next question I am concerned with: why are all those existing protocols so easy to distinguish from one another and to filter with DPI-level stateful firewalls? Basically I do not have much against censorship as such, because some of it is inevitable anyway, but DPI solutions, as a rule, are so crude and clumsy that they do great harm to innocent servers and users, destroying the Internet as such and leaving only huge Faceboogle corporations alive. I like all-or-nothing solutions: either I have a working, paid, routed data channel through the ISP, or I have nothing, because having only Facebook, YouTube, Gmail and VKontakte access is useless to me.

My third goal: a more or less censorship-resistant protocol that nobody can distinguish from, for example, cat /dev/urandom | nc remote.

And of course the zeroth goal: make it free software, without any doubt, so everyone can benefit from its existence.

Daemon overview.

GoVPN does not use any brand-new technologies or protocols, and it does not use poorly studied cryptographic solutions. I do not violate the rule "do not design and implement crypto by yourself". Well, more or less. All critical low-level algorithms, except for some simple ones, are included from code written by true crypto gurus. All cryptography must be proven by time.

I decided to use the Go programming language. It is mature enough for this kind of task, and very easy to read and maintain. Simplicity, reviewability and maintainability are easily achieved with it.

From the VPN daemon's point of view, here is its current state:

  • Works with layer 2 TAP virtual network interfaces.
  • A single server can serve multiple clients, each with its own configuration and optional up/down hooks.
  • Works over UDP, TCP, or HTTP proxies with the CONNECT method. Both IPv4 and IPv6 are supported.
  • The client is a single executable binary with a few command-line options. The server is a single executable binary with a single YAML configuration file.
  • Built-in rehandshaking and heartbeating.

Client authentication tokens.

All clients are identified by a 128-bit random number. It is never explicitly transmitted in the clear, so others cannot distinguish one client's session from another. Mutual client-server authentication is performed using a so-called pre-shared verifier. The client's identity, the verifier and a memorable passphrase are everything you need. An example client id with verifier is:


Transport protocol.

Let's dive deeper into the protocol. It consists of two parts: the transport protocol and the handshake protocol.

The transport protocol is very straightforward from a modern cryptographic point of view. Basically it is similar (but not identical) to Bernstein's NaCl construction:


The tag is a Poly1305 authenticator computed over the whole data packet. The nonce is an incrementing counter (odd values are the server's, even ones the client's). Encryption is done over the padded payload with the Salsa20 stream cipher.

The nonce is not secret information, so it could be sent in the clear. But then it would be easily detected and censored: one can see that this is some kind of nonce plus encrypted traffic. So I decided to obfuscate it using a PRP (pseudo-random permutation): the XTEA block cipher. It is very simple to implement and fast enough for short (8-byte) blocks. It does not add any security, but it randomizes the data, making DPI-based censorship a hard task. The nonce encryption key is derived from the session key after the handshake stage.

The nonce is used for replay attack detection and prevention: we memorize previous nonces and check whether they appear again. In TCP mode all messages have guaranteed delivery order, so any desynchronization leads to immediate disconnection. In UDP mode messages can arrive out of order, so we keep a small bucket of recently seen nonces.
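
The UDP replay window can be sketched as a bounded set of recently seen nonces. The structure and capacity below are illustrative assumptions, not GoVPN's actual data structure:

```go
package main

import "fmt"

// NonceBucket remembers the last `capacity` nonces seen on a UDP session
// and reports replays.
type NonceBucket struct {
	seen     map[uint64]struct{}
	order    []uint64 // insertion order, for evicting the oldest entry
	capacity int
}

func NewNonceBucket(capacity int) *NonceBucket {
	return &NonceBucket{seen: make(map[uint64]struct{}), capacity: capacity}
}

// Check returns false if the nonce was already seen (a replay); otherwise
// it records the nonce, evicting the oldest entry when the bucket is full.
func (b *NonceBucket) Check(nonce uint64) bool {
	if _, ok := b.seen[nonce]; ok {
		return false
	}
	if len(b.order) == b.capacity {
		oldest := b.order[0]
		b.order = b.order[1:]
		delete(b.seen, oldest)
	}
	b.seen[nonce] = struct{}{}
	b.order = append(b.order, nonce)
	return true
}

func main() {
	bucket := NewNonceBucket(128)
	fmt.Println(bucket.Check(7)) // true: first time seen
	fmt.Println(bucket.Check(7)) // false: replay detected
}
```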

Most protocols do not hide the lengths of the underlying messages. The data stays confidential, but its size and time of appearance can tell much about the traffic inside the VPN. For example, you can relatively easily tell that DHCP is passing through the tunnel. Moreover, you can correlate data transmission inside the tunnel with an external system's behaviour. This is a metadata leak.

Noise can be used to hide message lengths. GoVPN pads the payload before encryption by appending 0x80 and the necessary number of zero bytes. After encryption the padding looks like pseudo-random noise anyway. Heartbeat packets have zero payload length and consist only of padding. All packets have the same (maximal) size. Of course this consumes traffic, so it can be rather expensive.

PAYLOAD || 0x80 || 00 || ...
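
A sketch of this padding scheme in Go (the fixed packet size used here is arbitrary, chosen just for the example):

```go
package main

import (
	"errors"
	"fmt"
)

// pad appends 0x80 and zero bytes so the result has exactly size bytes;
// a zero-length payload yields pure padding, as in heartbeat packets.
func pad(payload []byte, size int) ([]byte, error) {
	if len(payload)+1 > size {
		return nil, errors.New("payload too long for packet size")
	}
	padded := make([]byte, size)
	copy(padded, payload)
	padded[len(payload)] = 0x80
	return padded, nil
}

// unpad strips the trailing zeros and the 0x80 marker to recover the payload.
func unpad(padded []byte) ([]byte, error) {
	i := len(padded) - 1
	for i >= 0 && padded[i] == 0x00 {
		i--
	}
	if i < 0 || padded[i] != 0x80 {
		return nil, errors.New("invalid padding")
	}
	return padded[:i], nil
}

func main() {
	padded, _ := pad([]byte("hello"), 16)
	fmt.Printf("% x\n", padded) // 68 65 6c 6c 6f 80 00 00 00 00 00 00 00 00 00 00
	payload, _ := unpad(padded)
	fmt.Println(string(payload)) // hello
}
```

Note that a payload ending in zero bytes is still recovered correctly: the scan from the end stops at the 0x80 marker, which always separates the payload's own trailing zeros from the padding.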

The authentication tag looks like noise and never repeats across sessions (the probability of a repeat is negligible), the nonce encrypted under the ephemeral session key likewise repeats only with negligible probability, and the encrypted payload also looks like noise. An adversary does not see any structure.

GoVPN can also hide message timestamps, that is, the times of their appearance. The idea is pretty simple and similar to the length-hiding noise: constant packet rate traffic. Your tunnel then has a fixed transmission speed: large amounts of data are transmitted slowly, while the absence of real payload is hidden with zero-length (but padded) packets. One cannot distinguish an empty channel from a loaded one.

Why is the nonce located at the end of the packet? Because in TCP mode, unlike UDP, messages are not already separated from one another: we have a stream of pseudo-random bytes. But TCP guarantees the order of delivery, so we can predict the next nonce value. As we know the nonce PRP encryption key, we can also predict its encrypted form. So we just wait for that expected value in the stream to determine the boundary of the transmitted message. We cannot add clearly visible structure, because it would also be visible to a DPI system and thus could be censored.

The Salsa20 encryption key is generated anew for every session during the handshake procedure. It is ephemeral, so a compromise of your passphrase cannot reveal past encryption and authentication keys. This property is called perfect forward secrecy (PFS). Poly1305 uses one-time authentication keys derived from Salsa20's output stream, similarly to NaCl. Unlike many block-cipher based modes and implementations, Salsa20+Poly1305 does not consume entropy for any kind of initialization vectors.

Handshake protocol.

The most complex part is the handshake procedure.

First of all, you need the Diffie-Hellman protocol. It is the simple, well-studied, de-facto standard protocol for establishing ephemeral session keys. Our choice is curve25519. It could be as trivial as:

┌─┐          ┌─┐
│C│          │S│
└┬┘          └┬┘
 │  CDHPub    │
 │            │
 │  SDHPub    │
 │            │

The peers send their curve25519 public keys and perform a computation that yields an identical result on both sides. That result is not random data ready to be used as a key, but an elliptic curve point. We can hash it, for example, to turn it into a uniform pseudo-random string: the session key.

SessionKey = H(curve25519(ourPrivate, remotePublic))

This scheme of course cannot be used as is, because it lacks peer authentication. We can use the encrypted key exchange (EKE) technique: encrypt the Diffie-Hellman packets with a pre-shared symmetric secret. That way we provide indirect authentication: if a peer does not know the shared symmetric secret, it will not decipher the public key correctly and will not derive the same session key. For the symmetric encryption we could use Salsa20:

┌─┐                     ┌─┐
│C│                     │S│
└┬┘                     └┬┘
 │enc(SharedKey, CDHPub) │
 │                       │
 │enc(SharedKey, SDHPub) │
 │                       │

Salsa20 is a stream cipher, so reusing its encryption parameters is fatal. Our shared secret is a constant value, so we have to provide a random nonce R each time. It is not secret information, so we can send it in the clear. The server's response packet can increment it to derive another usable nonce value:

┌─┐                           ┌─┐
│C│                           │S│
└┬┘                           └┬┘
 │R, enc(SharedKey, R, CDHPub) │
 │                             │
 │enc(SharedKey, R+1, SDHPub)  │
 │                             │

We cannot use low-entropy passwords for SharedKey in the scheme above. One could intercept our packets and brute-force the password (a dictionary attack), checking on each attempt whether the deciphered message contains a valid elliptic curve point. The problem here is that the adversary is able to tell whether a decryption attempt succeeded.

Thank goodness for the Elligator encoding algorithm! This encoding can map certain elliptic curve points to uniform strings and back. Not all points can be encoded (only about half on average), so we may have to generate ephemeral curve25519 keypairs more than once during a single session. By applying this encoding we remove the adversary's ability to distinguish a successful decryption from a failed one: any plaintext will look like a uniform pseudo-random string. This kind of solution is commonly called password-authenticated key agreement (PAKE).

┌─┐                              ┌─┐
│C│                              │S│
└┬┘                              └┬┘
 │R, enc(Password, R, El(CDHPub)) │
 │                                │
 │enc(Password, R+1, El(SDHPub))  │
 │                                │

But we still do not authenticate the peers explicitly. Of course, if our passwords are not equal, the derived session keys will differ and transport-layer authentication will fail immediately, but nobody guarantees that the transport layer will transmit packets immediately after the handshake is completed.

For that task we simply send a random number encrypted with the session key and wait for the same value in response from the remote side. Client authentication then looks like this (RS is the server's random number):

┌─┐                                            ┌─┐
│C│                                            │S│
└┬┘                                            └┬┘
 │       R, enc(Password, R, El(CDHPub))        │
 │                                              │
 │enc(Password, R+1, El(SDHPub)), enc(K, R, RS) │
 │                                              │
 │                                              ────┐
 │                                                  │ compare(RS)
 │                                              <───┘
 │                                              │

And to perform mutual authentication we do the same in the other direction (RC is the client's random number):

┌─┐                                            ┌─┐
│C│                                            │S│
└┬┘                                            └┬┘
 │       R, enc(Password, R, El(CDHPub))        │
 │                                              │
 │enc(Password, R+1, El(SDHPub)), enc(K, R, RS) │
 │                                              │
 │                                              ────┐
 │                                                  │ compare(RS)
 │                                              <───┘
 │                                              │
 │               enc(K, R+2, RC)                │
 │                                              │
 ────┐                                          │
     │ compare(RC)                              │
 <───┘                                          │

It is debatable whether this is needed, but some protocols provide explicit pre-master keys as sources of the master key. Diffie-Hellman derived keys may not contain enough entropy for long-term use. So we additionally transmit pre-master secrets from both sides (the terminology is taken from TLS): 256-bit random strings. The resulting master session key used by the transport protocol is simply the XOR of the two pre-master keys. Even if one communication party does not behave honestly and does not generate ephemeral keys every time, XORing its permanent key with the honest party's random one still gives you perfect forward secrecy. SC and SS are the pre-master keys of the client and the server respectively.

┌─┐                                               ┌─┐
│C│                                               │S│
└┬┘                                               └┬┘
 │        R, enc(Password, R, El(CDHPub))          │
 │                                                 │
 │enc(Password, R+1, El(SDHPub)), enc(K, R, RS+SS) │
 │                                                 │
 │                                                 ────┐
 │                                                     │ compare(RS)
 │                                                 <───┘
 │                                                 │
 │                enc(K, R+2, RC)                  │
 │                                                 │
 ────┐                                             │
     │ compare(RC)                                 │
 <───┘                                             │
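
The XOR combination itself is trivial; a minimal sketch (toy byte values, not a real key exchange):

```go
package main

import "fmt"

// masterKey XORs the client's and server's 256-bit pre-master secrets.
// If at least one party picked its secret freshly at random, the result
// is fresh and unpredictable too, which is why the honest party's
// randomness alone preserves forward secrecy.
func masterKey(sc, ss [32]byte) [32]byte {
	var key [32]byte
	for i := range key {
		key[i] = sc[i] ^ ss[i]
	}
	return key
}

func main() {
	var sc, ss [32]byte
	sc[0], ss[0] = 0xAA, 0x55 // toy values for illustration
	fmt.Printf("%02x\n", masterKey(sc, ss)[0]) // ff
}
```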

Augmented EKE.

Are we satisfied now? Not yet! Our password is known to both the client and the server. If the latter is compromised, the adversary gets our secret. This is what so-called augmented encrypted key exchange protocols address: the actual secret is kept only on the client's side, while the server keeps a so-called verifier, something that can confirm the client's knowledge of the secret.

This kind of proof can be achieved using asymmetric digital signatures. We use the passphrase as an entropy source for creating a digital signature keypair. Its public key is exactly the verifier that is stored on the server's side. For convenience we use a hash of that public key as the symmetric encryption key in the EKE protocol.

To prove knowledge of the secret key, we make a signature with it: we sign our ephemeral handshake symmetric key. H() is the hash function (the BLAKE2b algorithm), and DSAPub is the public key derived from the user's passphrase (the ed25519 algorithm).

┌─┐                                                ┌─┐
│C│                                                │S│
└┬┘                                                └┬┘
 │        R, enc(H(DSAPub), R, El(CDHPub))          │
 │                                                  │
 │enc(H(DSAPub), R+1, El(SDHPub)), enc(K, R, RS+SS) │
 │                                                  │
 │                                                  ────┐
 │                                                      │ compare(RS)
 │                                                  <───┘
 │                                                  │
 │                                                  ────┐
 │                                                      │ Verify(DSAPub, Sign(DSAPriv, K), K)
 │                                                  <───┘
 │                                                  │
 │                 enc(K, R+2, RC)                  │
 │                                                  │
 ────┐                                              │
     │ compare(RC)                                  │
 <───┘                                              │

I want to note again: R, El(...), and all transmitted ciphertexts look like random strings to a third party; they never repeat and have no visible structure. So a DPI system can hardly determine that these are GoVPN handshake messages.

Elligator encoding of curve25519 public keys gives us zero-knowledge strong password authentication that is immune to offline dictionary attacks. Even if our password is "1234", you cannot check offline whether that guess is correct, even with all the intercepted ciphertexts in hand.

The server does not know our cleartext secret passphrase; it knows only its derivative in the form of a public key. But that can still be dictionary-attacked: if the server's verifiers are compromised, one can quickly check whether a verifier (public key) corresponds to, for example, the password "1234".

We cannot fully protect ourselves from this kind of attack; strong passphrases remain important. But at least we can harden dictionary attacks by strengthening the password. This is well-known practice: PBKDF2, bcrypt, scrypt and similar technologies. As a rule they consist of some very slow function (to decrease the attack rate) and a salt that randomizes equal passwords.

We use the Password Hashing Competition winner: the Argon2 algorithm. The client's identity is used as the salt. The ed25519 keypair is generated from the strengthened password derivative. It is computed only once, during session initialization on the client side.

ClientId = random(128 bit)
PrivateKey, Verifier <- Argon2(Password, salt=ClientId)
Verifier -----> server storage

DPI resistant handshake packets.

And there is still another problem: we have not yet transmitted the client's identity, so the server does not know which verifier to use for processing the handshake. If we transmit it in the clear, a third party will see the same string repeated during each handshake. That does not harm confidentiality or security, but it is a leak of deanonymizing metadata.

Moreover, all handshake packets have fixed, recognizable sizes and behaviour: 48 bytes from client to server, an 80-byte response, 120 bytes again, a 16-byte response. Handshake behaviour thus still differs from transport behaviour.

Each handshake packet is padded similarly to transport messages:

HANDSHAKE MSG = [R] || enc(PAYLOAD || 0x80 || 0x00 || ...)

After encryption we get pseudo-random noise of maximal size, indistinguishable from the other packets.

Each handshake packet also has a so-called IDtag appended. This tag is the XTEA encryption of the first 8 bytes of the message, using the client's identity as the key. When the server receives a handshake message, it takes all known client identities, decrypts the last 8 bytes with each of them, and compares the result with the first 8 bytes of the message. Of course the search time grows linearly with the number of clients, but XTEA is pretty fast, and the search is needed only while processing handshake messages.

      HANDSHAKE MSG = [R] || enc(PAYLOAD || 0x80 || 0x00 || ...) ||
XTEA(ClientId, 8bytes([R] || enc(PAYLOAD || ...)))

This feature is also good at saving the server's resources: it will not try to participate in a handshake with unknown clients. An adversary can send any random data and will receive nothing in response.

But an adversary can intercept the client's first handshake message and replay it. It is valid from the server's point of view, so the server will respond. The adversary cannot finish that handshake session, but at least it learns that a GoVPN server is sitting on that port and that it knows that client identity.

To mitigate this kind of attack we can use synchronized clocks. Well, a dependency on time is an awful thing that complicates matters very much, so this is only an option. To randomize client identities we take the current time, round it down to a specified interval (for example, ten seconds) and XOR it with the client's identity, so the IDtag encryption key changes every ten seconds.

               HANDSHAKE MSG = [R] || enc(PAYLOAD || 0x80 || 0x00 ...) ||
XTEA(TIME XOR ClientId, 8bytes([R] || enc(PAYLOAD || ...)))

At last we are quite satisfied with the protocol. Of course you must use a strong passphrase and a high-quality entropy source for generating ephemeral keys and random numbers.

Additional remarks.

Not all operating systems provide a good PRNG out of the box. GoVPN is able to use entropy sources other than /dev/urandom through an Entropy Gathering Daemon compatible protocol.

GoVPN is a layer-2-only VPN daemon. It knows nothing about layer-3 IP addresses, routes or anything close to that subject. It uses layer-2 TAP interfaces, and you have to configure manually how your clients deal with routing and addresses. There is support for convenient up and down scripts, executed after session initialization and termination respectively.

I thought about making some kind of stunnel replacement out of it, for example tunnelling a single TCP connection or an externally executed command's stdin/stdout. But all of this is a much more complicated task than a VPN, and I decided that you should use specialized tools for it. Anyway, you can use GoVPN to create small IPv6 link-local-only networks in which all your socat, stunnel, SSH and so on will work.

Encryptionless mode.

GoVPN also includes a so-called encryptionless mode of operation. Its necessity is debatable and mainly theoretical.

Assume that you operate under a jurisdiction where the use of encryption functions is illegal. This mode (actually the XTEA PRP encryption of the nonce is still performed) uses only authentication functions. Unfortunately it is much more resource and traffic hungry.

This mode is based on Ronald L. Rivest's relatively old work on "chaffing and winnowing". Additionally it uses a well-known all-or-nothing transform (AONT): Optimal Asymmetric Encryption Padding (OAEP). Actually, the OAEP is slightly changed: the length field is replaced with hash-based checksumming taken from SAEP+.

The chaffing-and-winnowing idea is pretty simple in our context: instead of sending just the single bit of required data, you always send two bits, always a 0 and always a 1. But you also provide authentication information for each of them, so the receiver can distinguish the bit it really needs from the junk (the chaff).

For each input byte (8 bits) you therefore send 16 MACs. Odd ones stand for a 0 bit value, even ones for a 1 bit value. Only a single valid MAC is allowed in each pair.

   MAC00 || MAC01 || MAC02 || MAC03 || MAC04 || MAC05 || MAC06 || MAC07 ||

|| MAC08 || MAC09 || MAC10 || MAC11 || MAC12 || MAC13 || MAC14 || MAC15

In this example the valid MACs encode the bits 0, 1, 1, 1, 1, 0, 0, 0, giving the byte 01111000.
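
The encode/winnow cycle for one byte can be sketched as follows. HMAC-SHA256 truncated to 16 bytes stands in for the Poly1305 with XSalsa20-derived one-time keys used by the real protocol, and the key-derivation details are illustrative assumptions:

```go
package main

import (
	"crypto/hmac"
	"crypto/rand"
	"crypto/sha256"
	"errors"
	"fmt"
)

// macFor computes the 16-byte authenticator for slot 0..15 of one byte
// position (stand-in for Poly1305 with per-slot one-time keys).
func macFor(key []byte, byteNum, slot int) [16]byte {
	h := hmac.New(sha256.New, key)
	h.Write([]byte{byte(byteNum), byte(slot)})
	var out [16]byte
	copy(out[:], h.Sum(nil))
	return out
}

// encodeByte emits 16 MACs for one byte: for each bit, the slot matching
// the real bit value carries a valid MAC, the other carries random chaff.
func encodeByte(key []byte, byteNum int, b byte) ([16][16]byte, error) {
	var slots [16][16]byte
	for j := 0; j < 8; j++ {
		bit := int((b >> uint(7-j)) & 1)
		valid := 2*j + bit
		chaff := 2*j + (1 - bit)
		slots[valid] = macFor(key, byteNum, valid)
		if _, err := rand.Read(slots[chaff][:]); err != nil {
			return slots, err
		}
	}
	return slots, nil
}

// decodeByte winnows: for each bit it checks which of the two slots holds
// a valid MAC and reconstructs the byte.
func decodeByte(key []byte, byteNum int, slots [16][16]byte) (byte, error) {
	var b byte
	for j := 0; j < 8; j++ {
		switch {
		case slots[2*j] == macFor(key, byteNum, 2*j):
			// bit j is 0: nothing to set
		case slots[2*j+1] == macFor(key, byteNum, 2*j+1):
			b |= 1 << uint(7-j)
		default:
			return 0, errors.New("no valid MAC found for bit")
		}
	}
	return b, nil
}

func main() {
	key := []byte("per-packet one-time key material") // made-up key
	slots, _ := encodeByte(key, 0, 0x78)              // 01111000, as in the text
	decoded, _ := decodeByte(key, 0, slots)
	fmt.Printf("%08b\n", decoded) // 01111000
}
```

Without the key, a valid MAC and random chaff are indistinguishable, so an eavesdropper cannot winnow the stream.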

GoVPN uses Poly1305 as the MAC. So for transmitting a single byte we spend 256 bytes of actual traffic: 16 128-bit MACs. Each Poly1305 invocation requires a one-time authentication key; we take them from the XSalsa20 output stream. XSalsa20 differs from Salsa20 in that it uses longer, 192-bit nonces.

MAC00Key, MAC01Key, ... = XSalsa20(
    nonce=PacketNum || 0x00 ... || ByteNum,
    plaintext=0x00 ...)

As the session key is unique for each session and packet numbers do not repeat, we guarantee that the one-time authentication keys will not repeat either.

Sending 256 times more traffic is really very expensive, and here an AONT can help. Its idea is simple: either you have all the bits of the package and can retrieve the message, or you recover nothing from it at all. The main difference between an AONT and encryption is that an AONT is keyless: it is just a transformation.

An AONT takes a message M and a random number r. The AONT package consists of two parts, P1 and P2:

PKG = P1 || P2
 P1 = expand(r) XOR (M || H(r || M))
 P2 = H(P1) XOR r

|         M             | H(r || M) |
          |                  ^
          |                   \
          .                    \
         XOR <-- expand(r)  XOR
          |                         \
          |                          \
          .                           .
|        P1                         | P2 |

If any bit of either P1 or P2 is tampered with, you will detect it. We use BLAKE2b as the hash function H() and Salsa20 as the expander of the random number: r is used as the Salsa20 key.
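
The formulas above can be sketched in Go. SHA-256 stands in for BLAKE2b, and a simple counter-hash expander stands in for the Salsa20 keystream; both are stand-ins so the sketch runs with the standard library alone:

```go
package main

import (
	"bytes"
	"crypto/rand"
	"crypto/sha256"
	"errors"
	"fmt"
)

// expand stretches r into length pseudo-random bytes by counter hashing
// (stand-in for the Salsa20 keystream keyed by r).
func expand(r []byte, length int) []byte {
	out := make([]byte, 0, length)
	for counter := 0; len(out) < length; counter++ {
		h := sha256.Sum256(append([]byte{byte(counter)}, r...))
		out = append(out, h[:]...)
	}
	return out[:length]
}

func xorBytes(a, b []byte) []byte {
	out := make([]byte, len(a))
	for i := range a {
		out[i] = a[i] ^ b[i]
	}
	return out
}

// Package computes P1 || P2 with P1 = expand(r) XOR (M || H(r||M))
// and P2 = H(P1) XOR r, following the formulas above.
func Package(m []byte) ([]byte, error) {
	r := make([]byte, 32)
	if _, err := rand.Read(r); err != nil {
		return nil, err
	}
	checksum := sha256.Sum256(append(append([]byte{}, r...), m...))
	inner := append(append([]byte{}, m...), checksum[:]...)
	p1 := xorBytes(expand(r, len(inner)), inner)
	hp1 := sha256.Sum256(p1)
	p2 := xorBytes(hp1[:], r)
	return append(p1, p2...), nil
}

// Unpackage recovers M, failing if any bit of the package was changed.
func Unpackage(pkg []byte) ([]byte, error) {
	if len(pkg) < 64 {
		return nil, errors.New("package too short")
	}
	p1, p2 := pkg[:len(pkg)-32], pkg[len(pkg)-32:]
	hp1 := sha256.Sum256(p1)
	r := xorBytes(hp1[:], p2)
	inner := xorBytes(expand(r, len(p1)), p1)
	m, checksum := inner[:len(inner)-32], inner[len(inner)-32:]
	expected := sha256.Sum256(append(append([]byte{}, r...), m...))
	if !bytes.Equal(checksum, expected[:]) {
		return nil, errors.New("integrity check failed")
	}
	return m, nil
}

func main() {
	pkg, _ := Package([]byte("all or nothing"))
	m, _ := Unpackage(pkg)
	fmt.Println(string(m)) // all or nothing
	pkg[0] ^= 1            // flip a single bit anywhere in the package
	_, err := Unpackage(pkg)
	fmt.Println(err) // integrity check failed
}
```

Flipping any bit of P1 changes H(P1) and thus the recovered r, which scrambles the whole inner plaintext, so the checksum fails: the all-or-nothing property.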

Only 16 bytes (a 128-bit security margin) of this AONT package are chaffed-and-winnowed during transmission. We use a 256-bit random number during AONT packaging. So each transmitted packet carries 16 * 256 + 32 = 4128 bytes of overhead. Compared to a 1500-byte MTU, this is far less than the 256-fold expansion of plain chaffing-and-winnowing.

Summary.

  • We have a strong password-authenticated augmented key agreement protocol with zero-knowledge mutual peer authentication.
  • Authentication tokens are resistant to offline dictionary attacks even if the server's database/hard drive is compromised.
  • Replay attack protection, perfect forward secrecy.
  • DPI resistance: all transport and handshake messages look like random data without any repeating structure. Message lengths and timestamps can be hidden with noise.
  • Relatively small codebase:
    • 6 screens of transport protocol;
    • 7 screens of handshake protocol;
    • 2 screens of verifier related code;
    • 2 screens of chaffing-and-winnowing related code;
    • 1 screen of AONT related code;
    • 3+3 screens (UDP and TCP) of server related main code;
    • 2+2 screens (UDP and TCP) of client related main code.
  • Sufficient throughput: my Intel i5 notebook CPU under Go 1.5 gives 786 Mbps of UDP packet throughput.

Saturday, 12 March 2016

Google Summer of Code opportunities for ham radio and SDR - fsfe | 20:39, Saturday, 12 March 2016

I've started preparing some ideas for Google Summer of Code projects I'd be willing to help mentor this year and one of them is for ham radio, with a focus on software defined radio (SDR).

Can you help?

If you can help as a co-mentor or simply help refer students for this project please get in touch with me.

Under the terms of the program, students are paid a US$ 5,500 stipend by Google, and the source code is fully published under a genuine free software license.

Details about the project and how students can apply

Students applying for this project are invited to submit two applications, one under the GNU Radio project and another under the Debian Project.

The aim of this project is to make ready-to-run solutions for ham radio enthusiasts.

The typical use case is a ham who has a spare computer in his shack: he should be able to boot the computer from a DVD or USB stick using the Debian Ham Radio Live Blend or the GNU Radio Live SDR and have a functional transceiver within a few minutes.

A student may not be able to do everything required for this project in one summer. We are looking for a student who can make any incremental improvement to bring us closer to this goal.

Here are some of the tasks that may be involved:

  • survey the existing GNU Radio samples for ham radio, many are listed on the HamRadio page of the GNU Radio wiki.
  • design user interface improvements for the samples to make them more intuitive for new users and traditional radio operators. Consider how they can interact with hardware such as a VFO tuning knob, a PTT microphone switch and even a morse key.
  • look through the other packages in the Debian Ham Radio metapackage list and consider how they could interact with GNU Radio. In particular, we are interested in the use of message bus solutions, such as ZeroMQ or D-Bus - for example, GNU Radio could send alerts on the bus when incoming signals exceed the squelch threshold. GNU Radio could also receive events over a message bus, for example, patching GPredict to send Doppler shift information.
  • developing and packaging libraries needed to process digital voice transmissions
  • look at how one or more of the samples can be deployed as a Debian package so users can just install the package and have a working radio


The following experience is highly desirable:

  • ham radio license
  • GNU/Linux skills (Debian or Ubuntu or another distribution like Fedora)
  • use of version control systems (Git)
  • C++ or Python or both


  • Daniel Pocock (VK3TQR/M0GLR/HB9FZT)

Application process

To apply

  • please introduce yourself on both the GNU Radio mailing list and the Debian Hams mailing list
  • Fill in the formal application for both GNU Radio and Debian
  • Pick some items from the list above or feel free to suggest another piece of work relevant to this theme. Give us a detailed, week-by-week plan for completing the task over the summer.
  • find at least one other member of the GNU Radio or Debian community who is willing to be a co-mentor on the project. Please try communicating with us over IRC or email and give us examples of your existing work on Github or elsewhere.

Wednesday, 09 March 2016

Contacts, CardDAV, Calypso and the N900

Colors of Noise - Entries tagged planetfsfe | 07:38, Wednesday, 09 March 2016

As a follow-up to calendar synchronisation with calypso, syncevolution and the N900 running Maemo, I finally added contacts to the mix:

on the phone

When you have the calendar sync already running it's as simple as:

First start ssh on the n900 to ease typing:

apt-get install dropbear
echo /bin/sh >> /etc/shells
cd /etc/dropbear && ./run

SSH into the phone and configure contacts synchronization:

cat <<EOF > ~/.config/syncevolution/webdav/sources/addressbook/config.ini
backend = CardDAV
database =
EOF

And perform the initial sync:

syncevolution --sync slow webdav addressbook

From there on you can sync contacts and calendars in one go with:

syncevolution webdav

Looking at the calypso logs on the server, it seems that syncevolution does not always generate an FN entry, so the card gets skipped. This doesn't harm the overall sync, but I need to have a look at how to fix this.

on the laptop

In order to use the contacts in mutt, there's pycarddav packaged in Debian. This basically follows upstream's documentation.

sudo apt-get install pycarddav
mkdir -p ~/.config/pycard
cp /usr/share/doc/pycarddav/examples/pycard.conf.sample ~/.config/pycard/pycard.conf
# Edit file as needed

cat ~/.config/pycard/pycard.conf
[Account username]
user: username
write_support = YesPleaseIDoHaveABackupOfMyData

where: vcard


debug: False

To use the entries in mutt, just extend your .muttrc:

cat <<EOF >>~/.muttrc
set query_command="pc_query -m %s"
macro index,pager B "<pipe-message>pycard-import<enter>" "add sender address to pycardsyncer"
EOF

This allows you to query contacts using Q and add new contacts with CTRL-B in mutt's index and pager.

Calypso Changes

We recently moved calypso's git repository to Alioth and started to merge several out-of-tree patches. More will happen during this year's Debian Groupware Meeting, including a new upload to Debian.

Tuesday, 08 March 2016

Telekom router blocks mail servers

Matthias Kirschner's Web log - fsfe | 03:39, Tuesday, 08 March 2016

Yesterday a friend of mine had the problem that she was not able to send e-mails from her newly installed e-mail client. Everything had worked well at the weekend, when she was setting it up at my place. She tried changing the ports in the e-mail settings, but without success. The e-mail setup which had worked just fine two days earlier did not work anymore.

Luckily I remembered an e-mail from Michael Kappes, who wrote to me in January about routers from the German Telekom which were blocking outgoing e-mail after a software update. It turned out that this was also the case at my friend's place, where a Speedport W724V Typ C from German Telekom is in use.

If you find yourself behind such a router which does not allow you to send out e-mails, here is an easy way to solve the problem: log in at http://speedport.ip in your Web browser (or directly via the IP address, something like ...). If you or the person responsible for the router did not change the password, it is printed in the line below speedport.ip. Then go to Internet -> E-mail abuse detection and disable "use e-mail abuse detection".

Settings for Telekom router

My understanding is that Telekom wants to prevent spam from hijacked computers in their customers' networks. But their solution of whitelisting a few SMTP servers and blacklisting all the rest is also problematic, and might be seen as anti-competitive. (Does anyone know how you can add your own SMTP server to their whitelist, so that it is included in future updates?)

It was a good reminder of why it was important for the FSFE to work on the issue of compulsory routers in Germany. As several organisations, including the FSFE, were able to convince the German government on this, from summer 2016 onwards we can use routers of our choice. This makes me optimistic that in the future there will be routers running Free Software which allow us to send out our e-mails.

Included in Planet FSFE

Norbert Tretkowski | 00:30, Tuesday, 08 March 2016

Besides Planet Debian and Planet MySQL, this blog is now also included in Planet FSFE. Hi! :-)

Monday, 07 March 2016

Informing investigative journalists about Free Software at Logan CIJ

Matthias Kirschner's Web log - fsfe | 08:24, Monday, 07 March 2016

On Friday 11 and Saturday 12 March the Free Software Foundation Europe will have a booth at the Logan CIJ Symposium, organised by the Centre for Investigative Journalism (CIJ). We will be there to inform participants about e-mail encryption, Free Software on mobile phones, and in general why Free Software is important for their work.

Banner from Logan CIJ Symposium

On Friday Polina and Nicola will be there, and on Saturday I will join Nicola at the booth. I am already looking forward to meeting many interesting people at the Berlin Congress Center (BCC).
