Planet Fellowship (en)

Tuesday, 26 August 2014

GSoC talks at DebConf 14 today - fsfe | 16:33, Tuesday, 26 August 2014

This year I mentored two students doing work in support of Debian and free software (as well as those I mentored for Ganglia).

Both of them are presenting details about their work at DebConf 14 today.

While Juliana's work has been widely publicised already, mainly due to the fact it is accessible to every individual DD, Andrew's work is also quite significant and creates many possibilities to advance awareness of free software.

The Java project that is not just about Java

Andrew's project is about recursively building Java dependencies from third party repositories such as the Maven Central Repository. It matches up well with the wonderful new maven-debian-helper tool in Debian and will help us to fill out /usr/share/maven-repo on every Debian system.

Firstly, this is not just about Java. On a practical level, some aspects of the project are useful for many other purposes. One of them is the aim of scanning a repository for non-free artifacts, making a Git mirror or clone containing a dfsg branch for generating repackaged upstream source, and then testing whether it still builds.
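As a hypothetical sketch of that workflow, the following shell session builds a throwaway "upstream" repository containing a sourceless JAR, creates a dfsg branch that strips it, and repacks the source. The repository and artifact names (vendor-plugin.jar, app-1.0+dfsg) are made up for illustration:

```shell
set -e
# A local scratch repo stands in for the Git mirror of upstream.
git init -q upstream && cd upstream
git config user.email "demo@example.org"   # throwaway identity for the demo
git config user.name  "Demo User"
echo 'public class App {}' > App.java
printf 'binary blob'       > vendor-plugin.jar   # sourceless, non-free artifact
git add . && git commit -qm 'Import upstream 1.0'
# The dfsg branch removes the artifact whose source is unavailable...
git checkout -qb dfsg
git rm -q vendor-plugin.jar
git commit -qm 'Remove sourceless vendor-plugin.jar (dfsg)'
# ...and the repackaged tarball is generated from that branch.
git archive --format=tar --prefix=app-1.0+dfsg/ dfsg -o ../app-1.0+dfsg.tar
cd ..
tar -tf app-1.0+dfsg.tar
```

The final step (testing that the cleaned tree still builds) would then run the normal build against the repacked source.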

Then there is the principle of software freedom. The Maven Central repository now requires that people publish a sources JAR and license metadata with each binary artifact they upload. They do not, however, demand that the sources JAR be complete or that the binary can be built by somebody else using the published sources. The license data must be specified, but it does not appear to be verified in the same way as packages inspected by Debian's legendary FTP masters.

Thanks to the transitive dependency magic of Maven, it is quite possible that many Java applications that are officially promoted as free software can't trace the source code of every dependency or build plugin.

Many organizations are starting to become more alarmed about the risk that they are dependent on some rogue dependency. Maybe they will be hit with a lawsuit from a vendor claiming that their plugin was only free for the first three months. Maybe some binary dependency JAR contains a nasty trojan for harvesting data about their corporate network.

People familiar with the principles of software freedom are in the perfect position to address these concerns, and Andrew's work helps us build a cleaner alternative. It obviously can't rebuild every JAR, for the very reason that some of them are not really free; however, it does give the opportunity to build a heat-map of trouble spots and also create a fast track to packaging for those hierarchies of JARs that are truly free.

Making WebRTC accessible to more people

Juliana set out to update and extend JSCommunicator, the HTML5/JavaScript softphone based on WebRTC.

People attending the session today or participating remotely are advised to set up their RTC / VoIP password well in advance so the server will allow them to log in and try it during the session. It can take 30 minutes or so for passwords to be replicated to the SIP proxy and TURN server.

Please also check my previous comments about what works and what doesn't and in particular, please be aware that Iceweasel / Firefox 24 on wheezy is not suitable unless you are on the same LAN as the person you are calling.

Why I want to update the User Data Manifesto

Hugo - FSFE planet | 09:39, Tuesday, 26 August 2014

In late 2012, a new manifesto emerged from the free software community: the User Data Manifesto, written by Frank Karlitschek of ownCloud. Quite similar to the Franklin Street Statement on freedom and network services, the manifesto took a different approach, one which I think was good: identifying a new set of rights for users, or as the manifesto puts it: “defining basic rights for people to control their own data in the internet age.”

I applauded the approach, and I think the current manifesto is a good starting point – which is why I have started an effort to create a new, better version built on the first one. If you want to jump directly into discussing the new version, you can skip the first part of this article.


What’s wrong with the current version?

Right now, the manifesto consists of 8 points — and I think that’s probably too much. As you will see, some of these points overlap. Another thing that’s wrong with the current version is that it mixes several issues together with no hierarchy or context between these; for instance, some points are about user rights, some others are about implementation only (like point 8. Server software transparency).

So let me take some points separately:

1 - Own the data
The data that someone directly or indirectly creates belongs to the person who created it.

This one is very, very problematic. What does “belong” mean? What does “own” mean? Why is one used in the title and the other in the description? What happens when several people “create” data? What does “create [data]” even mean? I don’t create “data”; my computer generates data when I do things and make stuff.

This point could be read like a copyright provision and thus justify current copyright laws. That is probably not the intention behind it, so this point should be fixed. This reason alone makes it necessary to update the current manifesto.

But what was the intention behind this?

<aside class="pull-left">

1991, Sega’s Zero Wing
Zero Wing screenshot
</aside>

I think I understand it, and I agree with it. Maybe you know the meme “All your base are belong to us”, sometimes altered into “All Your Data Are Belong to Us” in reference to Google, the NSA, etc.

This is basically what we want to prevent. For a user data manifesto to be effective, it must mean that even if I use servers to store some of my data, the server admin does not get to treat that data as their own.

However, a careful note is needed here. As you will notice, I’m referring to data as “my data” or “their data.” This is very important to consider. If we want a good User Data Manifesto, we need to think clearly about what makes data, “User Data.”

The current version of the manifesto says that what makes User Data is data “created by the user.” But I think that’s misleading.

Usually, there are two ways in which one might refer to data as “their data” (i.e. “their own” data):

  1. Personal data, or personally-identifiable information, is often referred to by someone as “their data.” But in our case, that is not relevant: it is covered by laws such as data protection law in the European Union. It is not the scope of this manifesto because, in this case, the person is called the “data subject” and is not necessarily a “user.”

    However, it is users that we are concerned with in this manifesto, which leads to the second case in which one usually refers to data as their own:

  2. Data that is stored on my hard-drive or other storage apparatus. In this case, the meaning of ownership of data is an extension of the ownership of the physical layer on which it sits.

    For instance, when I refer to the books that are in my private library at home, I say that these are my books even though I have not written any of them. I own these books not because I have created them, but because I bought them.

So, for the purpose of the User Data Manifesto, how should we define User Data to convey the objective that server admins do not have the right to do as they wish with user data, i.e. our data?

I propose this:

“User data” means any data uploaded by a user and/or generated by a user, while using a service on the Internet.

This definition is aimed at replacing point 1 of the first version. It is consistent with our current way of referring to data as “our own data,” but it also includes the case where data is not generated by devices that we own, but instead by us, for us, on devices that somebody else owns.

2 - Know where the data is stored
Everybody should be able to know: where their personal data is physically stored, how long, on which server, in what country, and what laws apply.

I have tried to improve this. This is point 2 in my version of the manifesto.

3 - Choose the storage location
Everybody should always be able to migrate their personal data to a different provider, server or their own machine at any time without being locked in to a specific vendor.

This is point 3 in my version of the manifesto.

4 - Control access
Everybody should be able to know, choose and control who has access to their own data to see or modify it.

5 - Choose the conditions
If someone chooses to share their own data, then the owner of the data selects the sharing license and conditions.

These two points are now point 1 in my version. I have merged them together. However, I have modified the part about “choosing the conditions” and instead refer to “permissions” (as in, read-only, read-write, etc.). I think the “conditions” as in licensing conditions are out of scope of this manifesto.

6 - Invulnerability of data
Everybody should be able to protect their own data against surveillance and to federate their own data for backups to prevent data loss or for any other reason.

This point was redundant with point 4 and was drafted in a vague manner, so I have modified it and integrated it into point 1 of my version of the manifesto.

7 - Use it optimally
Everybody should be able to access and use their own data at all times with any device they choose and in the most convenient and easiest way for them.

I feel this is not within the scope of the manifesto, because it describes a feature, not a right, and also because it is a bit vague: what is the “most convenient and easiest way”? So I decided to leave this one out.

8 - Server software transparency
Server software should be free and open source software so that the source code of the software can be inspected to confirm that it works as specified.

This is an implementation detail of point 3 of the current version, which concerns the right to choose any location to store one's data and to move to another platform. So I have merged it into point 3 of my version of the manifesto, regarding the freedom to choose a platform.

That’s it. Overall, I think the manifesto was a good starting point and that it should be improved and updated. I think that we should reduce the number of points because 8 is too many; especially because some of them are redundant. We should also give more context after we lay out the rules.

This is what I have tried to do with my modifications. There is a pull request on Github pending. Feel free to give your impressions there.

Obviously, this is also a request for comments, criticism and improvement of my version of the manifesto.

Thanks to Jan-Christoph Borchardt, Maurice Verheesen, Okhin and Cryptie for their feedback and/or suggested improvements since April 2013.

My current proposal

User Data Manifesto v2, DRAFT as of August 26, 2014:

This manifesto aims at defining basic rights for people regarding their own data in the Internet age. People ought to be free and should not have to pay allegiance to service providers.

  1. “User data” means any data uploaded by a user and/or generated by a user, while using a service on the Internet.

Thus, users should have:

  1. Control over user data access

    Data explicitly and willingly uploaded by a user should always be under the ultimate control of the user. Users should be able to decide whom to grant (direct) access to their data and under which permissions such access should occur.

    Cryptography (e.g. a PKI) is necessary to enable this control.

    Data received, generated, collected and/or constructed from users’ online activity while using the service (e.g. metadata or social graph data) should be made accessible to these users and put under their control. If this control can’t be given, then this type of data should be anonymized and not stored for long periods.

  2. Knowledge of how the data is stored

    When the data is uploaded to a specific service provider, users should be able to know where that specific service provider stores the data, how long, in which jurisdiction the specific service provider operates, and which laws apply.

    One solution would be for all users to be free to choose to store their own data on devices (e.g. servers) in their vicinity and under their direct control. That way, users do not have to rely on centralised services. Peer-to-peer systems and unhosted apps are a means to that end.

  3. Freedom to choose a platform

    Users should always be able to extract their data from the service at any time without experiencing any vendor lock-in.

    Open standards for formats and protocols, as well as access to the program's source code under a Free Software license, are necessary to guarantee this.

If users have these rights, they are in control of their data rather than being subjugated by service providers.

Many services that deal with user data are gratis at the moment, but that does not mean they are free. Instead of paying with money, users pay with their allegiance to the service providers, who can then exploit user data (e.g. by selling it or building profiles for advertisers).

Surrendering privacy in this way may seem to many people a trivial thing and a small price to pay for the convenience that Internet services bring. This has made this kind of exchange commonplace.

Service providers have thus been compelled, sometimes unwittingly, to turn their valuable Internet services into massive and centralised surveillance systems. It is of grave importance that people realize this, since it poses a serious threat to the freedom of humanity.

When users control access to the data they upload (Right #1), it means that data intended to be privately shared should not be accessible to the service provider, nor shared with governments. Users should be the only ones with ultimate control over it and the only ones to grant access to it. Thus, a service should not force users to disclose private data (including private correspondence) to it.

That means the right to use cryptography[1] should never be denied. On the contrary, cryptography should be enabled by default and be put under the users’ control with Free Software that is easy to use.

Some services allow users to submit data with the intention to make it publicly available for all. Even in these cases, some amount of user data is kept private (e.g. metadata or social graph data). The user should also have control over this data, because metadata or logging information can be used for unfair surveillance. Service providers must commit to keeping these to a minimum, and only for the purpose of operating the service.

When users make data available to others, whether to a restricted group of people or to large groups, they should be able to decide under which permissions they grant access to this data. However, this right is not absolute and should not extend over others’ rights to use the data once it has been made available to them. What’s more, it does not mean that users should have the right to impose unfair restrictions on other people.

Ultimately, to ensure that user data is under the users’ control, the best technical designs include peer-to-peer or distributed systems, and unhosted applications. Legally, that means terms of service should respect users’ rights.

When users use centralised services that upload data to specific storage providers instead of relying on peer-to-peer systems, it is important to know where the providers might store the data, because they could be compelled by governments to turn over data in their possession (Right #2).

In the long term, all users should have their own server. Unfortunately, this is made very difficult by some Internet access providers that restrict their customers unfairly. Also, being your own service provider often means having to administer systems, which requires expertise and time that most people currently don't have or aren't willing to invest.

Users should not get stuck in a specific technical solution. This is why they should always be able to leave a platform and settle elsewhere (Right #3). It means users should be able to have their data in an open format and to exchange information with an open protocol. Open standards are standards that are free of copyright and patent constraints. Obviously, without the source code of the programs used to deal with user data, this is impractical. This is why programs should be distributed under a Free Software license like the GNU AGPL-3[2].

Thanks to Sam Tuke for his feedback on the post and the manifesto!

  1. We mean effective cryptography. If the service provider enables cryptography but controls the keys or encrypts the data with your password, it’s probably snake oil.

  2. The GNU AGPL-3 safeguards this right by making it a legal obligation to provide access to the modified program run by the service provider (§ 13. Remote Network Interaction).

Monday, 25 August 2014

How to contribute to the KDE Frameworks Cookbook

Creative Destruction & Me » FLOSS | 07:00, Monday, 25 August 2014

The annual KDE Randa Meeting, in itself already chock-full of awesome, this year hosted the KDE Frameworks Cookbook sprint. Valorie, Cornelius and I have already written a lot about it. Plenty of attention went into finding the place for the cookbook between the getting-started HOWTOs on KDE TechBase and the full-blown API documentation. Not surprisingly, there is a space and a good purpose for the book. Frameworks developers and maintainers have had to deal many times with the question of where to put introductions that segue newcomers into how to use the modules, and so far, the answers have been unsatisfactory. Newcomers only find the API documentation when they already know about a framework, and TechBase is a great resource for developers, but not necessarily a good introduction. What is missing is a good way to help people learn about what the KDE Frameworks have to offer. So that is the purpose of the KDE Frameworks Cookbook: to help developers find and learn about the right tools for the problems they need to solve (and also, to be consumable on an e-book reader by the pool). For developers and maintainers, this means they need to know how to add sections to the book that cover this information about their frameworks. These tools and workflows are explained in this post.

In a way, the book will partly provide an alternative way to consume the content provided by KDE TechBase. Because of that, the HTML version of the book will integrate and cross-link with TechBase. The preferences for what kind of documentation should be in the book or on TechBase are not yet written in stone, and will probably develop over time. The beauty of Free Software is that it also does not matter much: the content is there and may be mixed and remixed as needed.

Two principles were applied when setting up the tooling for the KDE Frameworks Cookbook. The first is that content should stay in the individual frameworks repositories as much as possible. The second is that content taken from the frameworks, such as code snippets, shall not be duplicated into the book, but rather referenced and extracted at build time.

KDE Frameworks Cookbook front cover

Keeping content that is part of the book in the frameworks repositories makes it easier for developers and maintainers to contribute to it. A book can grow to ginormous proportions, and keeping track of how its text relates to a specific framework or piece of code would be difficult if the two were separated into different places. However, content that is not specific to individual frameworks may as well be placed in the book repository. Considering that contributions of code and prose are voluntary and compete for the available time of the people working on them, it is important to keep the workflow simple, straightforward and integrated with that of development. Frameworks authors add sections to the book by placing Markdown formatted texts in the framework’s repository. The repository for the book (kde:kf5book) references the frameworks repositories that provide content as Git submodules, and defines the order in which the content is included using a CMake based build system. The target formats of the book, currently PDF, HTML and ePub, are generated using pandoc. Pandoc can also be used by contributors to preview the text they have written and check it for correct formatting. The book repository already contains various sections pulled in from the frameworks repositories this way. Interested contributors will probably find it easiest to follow the existing examples for the submodule setup and the build instructions in the CMakeLists.txt file to add their content. The ThreadWeaver repository (kde:threadweaver) contains Markdown files that are part of the cookbook in its examples/ folder, which can serve as a reference. See below for the naming convention used for these files.

Avoiding duplication by copy-pasting code into the book is achieved by using a special markup for code snippets and other examples, and a separate tool to extract them. In particular, example and unit test code that is shown in the book should always be part of the regular, CI tested build of the respective framework. This ensures that all code samples shown in the book actually compile and hopefully work for the reader. The snippetextractor tool processes input files that contain only references to the code samples and produces complete Markdown files that include the samples verbatim; the input files are distinguished by a special file name suffix. The conversion of the input files is handled by the build system of the kf5book repository, not in the individual frameworks repositories. It is however possible to add steps that produce the final Markdown files to the CMake build files of the individual repositories. This will catch markup errors in snippets during the framework's build, but does require the snippetextractor tool to be installed.

Setting up continuous builds for the book's target formats is currently being worked on. Producing the book will be integrated into KDE's Jenkins CI, and an up-to-date version of it will be available on KDE TechBase. Until then, curious readers can build the book themselves:

  • Install pandoc and the necessary Latex packages to produce PDF output.
  • Build and install snippetextractor using QMake and a recent (>5.2) Qt. Make sure it is in the path before running CMake in the later steps.
  • Clone kde:kf5book, and initialize the Git submodules as described in the README file.
  • In a build directory, run cmake <source directory> and make to produce the book.

Enjoy reading!


Saturday, 23 August 2014

Want to be selected for Google Summer of Code 2015? - fsfe | 11:37, Saturday, 23 August 2014

I've mentored a number of students in 2013 and 2014 for Debian and Ganglia and most of the companies I've worked with have run internships and graduate programs from time to time. GSoC 2014 has just finished and with all the excitement, many students are already asking what they can do to prepare and be selected in 2015.

My own observation is that the more time the organization has to get to know the student, the more confident they can be in selecting that student. Furthermore, the more time the student has spent getting to know the free software community, the more easily they can complete GSoC.

Here I present a list of things that students can do to maximize their chance of selection and career opportunities at the same time. These tips are useful for people applying for GSoC itself and related programs such as GNOME's Outreach Program for Women or graduate placements in companies.


There is no guarantee that Google will run the program again in 2015 or any future year.

There is no guarantee that any organization or mentor (including myself) will be involved until the official list of organizations is published by Google.

Do not follow the advice of web sites that invite you to send pizza or anything else of value to prospective mentors.

Following the steps on this page doesn't guarantee selection. That said, people who do follow them are much more likely to be considered and interviewed than somebody who hasn't done any of the things in this list.

Understand what free software really is

You may hear terms like free software and open source software used interchangeably.

They don't mean exactly the same thing and many people use the term free software for the wrong things. Not all open source projects meet the definition of free software. Those that don't, usually as a result of deficiencies in their licenses, are fundamentally incompatible with the majority of software that does use genuinely free licenses.

Google Summer of Code is about both writing and publishing your code and it is also about community. It is fundamental that you know the basics of licensing and how to choose a free license that empowers the community to collaborate on your code well after GSoC has finished.

Please review the definition of free software early on and come back and review it from time to time. The GNU Project / Free Software Foundation have excellent resources to help you understand what a free software license is and how it works to maximize community collaboration.

Don't look for shortcuts

There is no shortcut to GSoC selection and there is no shortcut to GSoC completion.

The student stipend (USD $5,500 in 2014) is not paid to students unless they complete a minimum amount of valid code. This means that even if a student did find some shortcut to selection, it is unlikely they would be paid without completing meaningful work.

If you are the right candidate for GSoC, you will not need a shortcut anyway. Are you the sort of person who can't leave a coding problem until you really feel it is fixed, even if you keep going all night? Have you ever woken up in the night with a dream about writing code still in your head? Do you become irritated by tedious or repetitive tasks and often think of ways to write code to eliminate such tasks? Does your family get cross with you because you take your laptop to Christmas dinner or some other significant occasion and start coding? If some of these statements summarize the way you think or feel you are probably a natural fit for GSoC.

An opportunity money can't buy

The GSoC stipend will not make you rich. It is intended to make sure you have enough money to survive through the summer and focus on your project. Professional developers make this much money in a week in leading business centers like New York, London and Singapore. When you get to that stage in 3-5 years, you will not even be thinking about exactly how much you made during internships.

GSoC gives you an edge over other internships because it involves publicly promoting your work. Many companies still try to hide the potential of their best recruits for fear they will be poached or that they will be able to demand higher salaries. Everything you complete in GSoC is intended to be published and you get full credit for it. Imagine a young musician getting the opportunity to perform on the main stage at a rock festival. This is how the free software community works. It is a meritocracy and there is nobody to hold you back.

Having a portfolio of free software that you have created or collaborated on and a wide network of professional contacts that you develop before, during and after GSoC will continue to pay you back for years to come. While other graduates are being screened through group interviews and testing days run by employers, people with a track record in a free software project often find they go straight to the final interview round.

Register your domain name and make a permanent email address

Free software is all about community and collaboration. Register your own domain name as this will become a focal point for your work and for people to get to know you as you become part of the community.

This is sound advice for anybody working in IT, not just programmers. It gives the impression that you are confident and have a long term interest in a technology career.

Choosing the provider: as a minimum, you want a provider that offers DNS management, static web site hosting, email forwarding and XMPP services all linked to your domain. You do not need to choose the provider that is linked to your internet connection at home and that is often not the best choice anyway. The XMPP foundation maintains a list of providers known to support XMPP.

Create an email address within your domain name. The most basic domain hosting providers will let you forward the email address to a webmail or university email account of your choice. Configure your webmail to send replies using your personalized email address in the From header.

Update your ~/.gitconfig file to use your personalized email address in your Git commits.
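For instance (you@example.org and "Your Name" are placeholders, and the HOME override only exists so the example doesn't touch your real configuration):

```shell
# Point Git at your personalized address; replace the placeholders with
# your own details and drop the HOME override for real use.
export HOME="$(mktemp -d)"
git config --global user.name  "Your Name"
git config --global user.email "you@example.org"
# Show the address that future commits will record:
git config --global user.email
```

The last command reads back the configured value, which is a quick way to confirm what will appear in your commits.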

Create a web site and blog

Start writing a blog. Host it using your domain name.

Some people blog every day, other people just blog once every two or three months.

Create links from your web site to your other profiles, such as a Github profile page. This helps reinforce the pages/profiles that are genuinely related to you and avoid confusion with the pages of other developers.

Many mentors are keen to see their students writing a weekly report on a blog during GSoC so starting a blog now gives you a head start. Mentors look at blogs during the selection process to try and gain insight into which topics a student is most suitable for.

Create a profile on Github

Github is one of the most widely used software development web sites. Github makes it quick and easy for you to publish your work and collaborate on the work of other people. Create an account today and get into the habit of forking other projects, improving them, committing your changes and pushing the work back into your Github account.
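That habit can be practised entirely offline; in this hypothetical sketch a local bare repository (fork.git) stands in for your Github fork:

```shell
set -e
# fork.git plays the role of your remote fork on Github.
git init -q --bare fork.git
git clone -q fork.git work
cd work
git config user.email "you@example.org"   # placeholder identity
git config user.name  "Your Name"
echo 'small improvement' > CHANGES
git add CHANGES
git commit -qm 'Improve the project'
git push -q origin HEAD    # with a real fork: git push origin <branchname>
cd ..
```

With a real project you would clone your fork's URL instead of the local path, but the branch-commit-push cycle is the same.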

Github will quickly build a profile of your commits and this allows mentors to see and understand your interests and your strengths.

In your Github profile, add a link to your web site/blog and make sure the email address you are using for Git commits (in the ~/.gitconfig file) is based on your personal domain.

Start using PGP

Pretty Good Privacy (PGP) is the industry standard in protecting your identity online. All serious free software projects use PGP to sign tags in Git, to sign official emails and to sign official release files.

The most common way to start using PGP is with the GnuPG (GNU Privacy Guard) utility. It is installed by the package manager on most Linux systems.

When you create your own PGP key, use the email address involving your domain name. This is the most permanent and stable solution.

Print your key fingerprint using the gpg-key2ps command; it is in the signing-party package on most Linux systems. Keep copies of the fingerprint slips with you.
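A minimal sketch of key creation and fingerprint printing, assuming GnuPG 2.1 or later; "Your Name" and you@example.org are placeholders, and GNUPGHOME is pointed at a temporary directory so the example does not touch your real keyring:

```shell
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
# Unattended key generation (for real use, run the interactive
# "gpg --gen-key" and protect the key with a passphrase).
gpg --batch --quiet --gen-key <<'EOF'
%no-protection
Key-Type: RSA
Key-Length: 2048
Name-Real: Your Name
Name-Email: you@example.org
Expire-Date: 0
%commit
EOF
# Print the key fingerprint:
gpg --fingerprint you@example.org
# For printable fingerprint slips (signing-party package):
#   gpg-key2ps -p a4 you@example.org > fingerprint-slips.ps
```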

This is what my own PGP fingerprint slip looks like. You can also print the key fingerprint on a business card for a more professional look.

Using PGP, it is recommended that you sign any important messages you send, but you do not have to encrypt them, especially if some of the people you send messages to (like family and friends) do not yet have the PGP software to decrypt them.

If using the Thunderbird (Icedove) email client from Mozilla, you can easily send signed messages and validate the messages you receive using the Enigmail plugin.

Get your PGP key signed

Once you have a PGP key, you will need to find other developers to sign it. For people I mentor personally in GSoC, I'm keen to see that you try and find another Debian Developer in your area to sign your key as early as possible.

Free software events

Try and find all the free software events in your area in the months between now and the end of the next Google Summer of Code season. Aim to attend at least two of them before GSoC.

Look closely at the schedules and find out about the individual speakers, the companies and the free software projects that are participating. For events that span more than one day, find out about the dinners, pub nights and other social parts of the event.

Try and identify people who will attend the event who have been GSoC mentors or who intend to be. Contact them before the event, if you are keen to work on something in their domain they may be able to make time to discuss it with you in person.

Take your PGP fingerprint slips. Even if you don't participate in a formal key-signing party at the event, you will still find some developers to sign your PGP key individually. You must take a photo ID document (such as your passport) for the other developer to check the name on your fingerprint but you do not give them a copy of the ID document.

Events come in all shapes and sizes. FOSDEM is an example of one of the bigger events in Europe; (Linux Australia's annual conference) is a similarly large event in Australia. There are many, many more local events, such as the Debian France mini-DebConf in Lyon, 2015. Many events are either free or free for students, but please check carefully whether you need to register before attending.

On your blog, discuss which events you are attending and which sessions interest you. Write a blog during or after the event too, including photos.

Quantcast generously hosted the Ganglia community meeting in San Francisco, October 2013. We had a wild time in their offices with mini-scooters, burgers, beers and the Ganglia book. That's me on the pink mini-scooter and Bernard Li, one of the other Ganglia GSoC 2014 admins is on the right.

Install Linux

GSoC is fundamentally about free software. Linux is to free software what a tree is to the forest. Using Linux every day on your personal computer dramatically increases your ability to interact with the free software community and increases the number of potential GSoC projects that you can participate in.

This is not to say that people using Mac OS or Windows are unwelcome. I have worked with some great developers who were not Linux users. Linux gives you an edge though and the best time to gain that edge is now, while you are a student and well before you apply for GSoC.

If you must run Windows for some applications used in your course, it will run just fine in a virtual machine using Virtual Box, a free software solution for desktop virtualization. Use Linux as the primary operating system.

Here are links to download ISO DVD (and CD) images for some of the main Linux distributions:

If you are nervous about getting started with Linux, install it on a spare PC or in a virtual machine before you install it on your main PC or laptop. Linux is much less demanding on the hardware than Windows so you can easily run it on a machine that is 5-10 years old. Having just 4GB of RAM and 20GB of hard disk is usually more than enough for a basic graphical desktop environment although having better hardware makes it faster.

Your experiences installing and running Linux, especially if it requires some special effort to make it work with some of your hardware, make interesting topics for your blog.

Decide which technologies you know best

Personally, I have mentored students working with C, C++, Java, Python and JavaScript/HTML5.

In a GSoC program, you will typically do most of your work in just one of these languages.

From the outset, decide which language you will focus on and do everything you can to improve your competence with that language. For example, if you have already used Java in most of your course, plan on using Java in GSoC and make sure you read Effective Java (2nd Edition) by Joshua Bloch.

Decide which themes appeal to you

Find a topic that has long-term appeal for you. Maybe the topic relates to your course or maybe you already know what type of company you would like to work in.

Here is a list of some topics and some of the relevant software projects:

  • System administration, servers and networking: consider projects involving monitoring, automation, packaging. Ganglia is a great community to get involved with and you will encounter the Ganglia software in many large companies and academic/research networks. Contributing to packaging in a Linux distribution like Debian or Fedora is another great way to get into system administration.
  • Desktop and user interface: consider projects involving window managers and desktop tools or adding to the user interface of just about any other software.
  • Big data and data science: this can apply to just about any other theme. For example, data science techniques are frequently used now to improve system administration.
  • Business and accounting: consider accounting, CRM and ERP software.
  • Finance and trading: consider projects like R, market data software like OpenMAMA and connectivity software (Apache Camel)
  • Real-time communication (RTC), VoIP, webcam and chat: look at the JSCommunicator or the Jitsi project
  • Web (JavaScript, HTML5): look at the JSCommunicator

Before the GSoC application process begins, you should aim to learn as much as possible about the theme you prefer and also gain practical experience using the software relating to that theme. For example, if you are attracted to the business and accounting theme, install the PostBooks suite and get to know it. Maybe you know somebody who runs a small business: help them to upgrade to PostBooks and use it to prepare some reports.

Make something

Make a small project, less than two weeks' work, to demonstrate your skills. It is important to make something that somebody will use for a practical purpose; this will help you gain experience communicating with other users through Github.

For an example, see the servlet Juliana Louback created for fixing phone numbers in December 2013. It has since been used as part of the Lumicall web site and Juliana was selected for a GSoC 2014 project with Debian.

There is no better way to demonstrate to a prospective mentor that you are ready for GSoC than by completing and publishing some small project like this yourself. If you don't have any immediate project ideas, many developers will also be able to give you tips on small projects like this that you can attempt, just come and ask us on one of the mailing lists.

Ideally, the project will be something that you would use anyway even if you do not end up participating in GSoC. Such projects are the most motivating and rewarding and usually end up becoming an example of your best work. To continue the example of somebody with a preference for business and accounting software, a small project you might create is a plugin or extension for PostBooks.

Getting to know prospective mentors

Many web sites provide useful information about the developers who contribute to free software projects. Some of these developers may be willing to be a GSoC mentor.

For example, look through some of the following:

Getting on the mentor's shortlist

Once you have identified projects that are interesting to you and developers who work on those projects, it is important to get yourself on the developer's shortlist.

Basically, the shortlist is a list of all students who the developer believes can complete the project. If I feel that a student is unlikely to complete a project or if I don't have enough information to judge a student's probability of success, that student will not be on my shortlist.

If I don't have any student on my shortlist, then a project will not go ahead at all. If there are multiple students on the shortlist, then I will be looking more closely at each of them to try and work out who is the best match.

One way to get a developer's attention is to look at bug reports they have created. Github makes it easy to see complaints or bug reports they have made about their own projects or other projects they depend on. Another way to do this is to search through their code for strings like FIXME and TODO. Projects with standalone bug trackers like the Debian bug tracker also provide an easy way to search for bug reports that a specific person has created or commented on.
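For example, after cloning a project (the repository URL here is hypothetical), a simple grep turns up those markers:

```shell
# Clone the project you want to contribute to (hypothetical URL):
git clone
cd example

# List every FIXME and TODO note the developers left in the code:
grep -rn -e "FIXME" -e "TODO" .
```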

Once you find some relevant bug reports, email the developer. Ask if anybody else is working on those issues. Try and start with an issue that is particularly easy and where the solution is interesting for you. This will help you learn to compile and test the program before you try to fix any more complicated bugs. It may even be something you can work on as part of your academic program.

Find successful projects from the previous year

Contact organizations and ask them which GSoC projects were most successful. In many organizations, you can find the past students' project plans and their final reports published on the web. Read through the plans submitted by the students who were chosen. Then read through the final reports by the same students and see how they compare to the original plans.

Start building your project proposal now

Don't wait for the application period to begin. Start writing a project proposal now.

When writing a proposal, it is important to include several things:

  • Think big: what is the goal at the end of the project? Does your work help the greater good in some way, such as increasing the market share of Linux on the desktop?
  • Details: what are specific challenges? What tools will you use?
  • Time management: what will you do each week? Are there weeks where you will not work on GSoC due to vacation or other events? These things are permitted but they must be in your plan if you know them in advance. If an accident or death in the family cut a week out of your GSoC project, which work would you skip and would your project still be useful without that? Having two weeks of flexible time in your plan makes it more resilient against interruptions.
  • Communication: are you on mailing lists, IRC and XMPP chat? Will you make a weekly report on your blog?
  • Users: who will benefit from your work?
  • Testing: who will test and validate your work throughout the project? Ideally, this should involve more than just the mentor.

If your project plan is good enough, could you put it on Kickstarter or another crowdfunding site? This is a good test of whether or not a project is going to be supported by a GSoC mentor.

Learn about packaging and distributing software

Packaging is a vital part of the free software lifecycle. It is very easy to upload a project to Github but it takes more effort to have it become an official package in systems like Debian, Fedora and Ubuntu.

Packaging and the communities around Linux distributions help you reach out to users of your software and get valuable feedback and new contributors. This boosts the impact of your work.

To start with, you may want to help the maintainer of an existing package. Debian packaging teams are established communities that welcome new contributors. The Debian Mentors initiative is another great starting place. In the Fedora world, the place to start may be one of the Special Interest Groups (SIGs).

Think from the mentor's perspective

After the application deadline, mentors have just 2 or 3 weeks to choose the students. This is actually not a lot of time to be certain if a particular student is capable of completing a project. If the student has a published history of free software activity, the mentor feels a lot more confident about choosing the student.

Some mentors have more than one good student while other mentors receive no applications from capable students. In this situation, it is very common for mentors to send each other details of students who may be suitable. Once again, if a student has a good Github profile and a blog, it is much easier for mentors to try and match that student with another project.



Getting into the world of software engineering is much like joining any other profession or even joining a new hobby or sporting activity. If you run, you probably have various types of shoe and a running watch and you may even spend a couple of nights at the track each week. If you enjoy playing a musical instrument, you probably have a collection of sheet music, accessories for your instrument and you may even aspire to build a recording studio in your garage (or you probably know somebody else who already did that).

The things listed on this page will not just help you walk the walk and talk the talk of a software developer, they will put you on a track to being one of the leaders. If you look over the profiles of other software developers on the Internet, you will find they are doing most of the things on this page already. Even if you are not selected for GSoC at all or decide not to apply, working through the steps on this page will help you clarify your own ideas about your career and help you make new friends in the software engineering community.

Tuesday, 19 August 2014

How to generate a password with special characters

André on Free Software » English | 21:53, Tuesday, 19 August 2014

After Matthias showed me how to generate a password, I wanted to be able to create one with special characters. This way you don't have to add them to the password yourself later.

I started with pwgen, which you can install from the command line with:

apt-get install pwgen

If you run pwgen, you generate random passwords like:


The passwords are all made up of 8 characters and you get 160 of them at once.

If you want to enforce numbers in the passwords, you add -n on the command line:

pwgen -n

If you want to have at least a capital letter in every password, you type -c:

pwgen -c

If you want to generate completely random passwords, you type -s:

pwgen -s

And if you want to have one with at least one special symbol in every password you choose -y:

pwgen -y

Putting it all together, if you type:

pwgen -c -n -s -y

you can make your own password from a result like this:

7?|%Wr0! \xXJk7Mp OY=CD@2i !0;I.,\a j2%aFf5: {GIBK4nZ O'_73K>8 P.1@Nm2e
9y(<bG{Z B)db4(H# /iYy"?0) Yc6/OJ/& 5It&=,>8 \n6#F)%N 8+@nljiF M*H5?<nq
#9I4LEk\ S'h-e0Ax 8lEw'v?y w3n,y,iQ FBk$w7or $p^W9[/| #7eA|D8f 2[ACJDv+
q\s.70Do 7!)]}QmA rU!RdIA8 7p@K?3cD 7=/~'Rhe ;{2OCqYn z9>+"N]l 6UYz!]q[
/3UB_{)@ ]_P]8M#4 P]1t0t?# xT~3HzOh :c~A9RA{ %S?X?2cf E7{>uO(_ T)^=1>AX
Q3.Ez!N\ m0M`m3x$ 6WY=z-x2 H'W)$_98 9"V2.+$S xEAv5`~n GR$Zz:|7 |W3AqXHe
gE`rSMU4 fKa*HLs8 &kQ~s0<i eq)U_8.W %|rD6n+S 5XBV'38k ;9(*AMzB ,u'6IU@2
U`G2F`y. hd05BEg? [26T&U#+ dj1&7tdX V+i>:a32 c^.RGNh1 ^U"3XVR[ eZnc!a56
otr2J*0U VQO;{^S4 rd)E\z7I pMn,xHx6 H%`dUu1? 0_<LKv<# f8&Y)U=\ 1D~d|#er
`ylKnp@8 c+CL`/57 sI7J|}_[ z46Vm>9Z !."rT?q1 {0.}"!Hf i]<-=0Rz Sw&0pB<V
RT4snTs- WEl,35uS g<u?UM7/ :g1c.F"q "&JrK1Cb 88%2#'xJ }Z;"."2E "b`LI3Q(
,3QH8LW' 7zU_+\i- 1~!Be-sH nr6!J$R| vp65Ha/H 8Q?&uy#| 'K/m1NU% 4CB5)(Ln
z@=0tAr# 5S={\|(Z g?o4A<3| =7Azd:AQ Qsl8HE`d Z%-;0Ob3 .N8'fcoI Og_5JB4J
>6feWQ%$ SP1D,7c" N~6N}jmm )q|w7pvI Mpo5=R%X }F2|bnPC q0rN{pAx ;&',.6iL
~rZ5,,'u alg=Z1y@ +],jk4B3 L+"nfQ5i ]p`7l_O% "-L<6rvz AJ+'3!uW 1bj>Ps}6
[}o1Y8)& cZ12'+={ 4YN[}lKN BjyO\X5l 4rxw|s/U (>QnS&7i 0z(3n>U0 4|=!XC=B
^9v+mP]# >4NzKd8Q x/S3u5G. '[I|2*Uw C}2'1Z9r !An2F;s0 `;8ZZ.6R 8!mx>aDH
<N!4un!+ ihg<o2TB 1~'^n@yQ T5WY2kO` A(9|_\iJ C7e_GZLm :0sYS_=s D=nUJ3PN
~72J#}>7 /W!_ZB2\ xZ5E>aA= =F!k\@!0 ./kDOPZ2 ]M|g3x>; )%|,&AY4 HP}V2bg{
_B"^)H)8 (r+{Q&5x 65Tq/*`f q)Ci722] !QATS'$4 zIMIAN7\ 7"7Py93s "xdYe5@<
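pwgen also accepts an optional length and count on the command line (pwgen [options] [length] [count]), so you can ask for a single longer password instead of a whole screenful:

```shell
# One 16-character password with capitals, numbers, full randomness
# and at least one special symbol:
pwgen -c -n -s -y 16 1
```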

Monday, 18 August 2014

Is WebRTC private? - fsfe | 19:55, Monday, 18 August 2014

With the latest exciting developments, many people are starting to look more closely at browser-based real-time communications.

Some have dared to ask: does it solve the privacy problems of existing solutions?

Privacy is a relative term

Perfect privacy and its technical manifestations are hard to define. I had a go at it in a blog post on the Gold Standard for free communications technology on 5 June 2013. By pure coincidence, a few hours later, the first Snowden leaks appeared and this particular human right was suddenly thrust into the spotlight.

WebRTC and ICE privacy risk

WebRTC does not give you perfect privacy.

At least one astute observer at my session at Paris mini-DebConf 2014 questioned the privacy of Interactive Connectivity Establishment (ICE, RFC 5245).

In its most basic form, ICE scans all the local IP addresses on your machine and NAT gateway and sends them to the person calling you so that their phone can find the optimal path to contact you. This clearly has privacy implications as a caller can work out which ISP you are connected to and some rough details of your network topology at any given moment in time.

What WebRTC does bring to the table

Some of this can be mitigated, though: an ICE implementation can be tuned so that it only advertises the IP address of a dedicated relay host. If you can afford a little latency, your privacy is safe again. This privacy-protecting initiative could be taken by a browser vendor such as Mozilla, or it can be done in JavaScript by a softphone such as JSCommunicator.

Many individuals now use a proprietary softphone to talk to family and friends around the world. The softphone in question behaves rather like a virus, siphoning away your private information. This proprietary softphone is also an insidious threat to open source and free operating systems on the desktop. WebRTC is a positive step back from the brink. It gives people a choice.

WebRTC is a particularly relevant choice for business. Can you imagine going to a business and asking them to run all their email communication through Hotmail? When a business starts using a particular proprietary softphone, how is that any different? WebRTC offers a solution that is actually easier for the user and can be secured back to the business network using TLS.

WebRTC is based on open standards, particularly HTML5. Leading implementations, such as the SIP over WebSocket support in reSIProcate, JSCommunicator and the DruCall module for Drupal are fully open source. Not only is it great to be free, it is possible to extend and customize any of these components.

What is missing

There are some things that are not quite there yet and require a serious effort from the browser vendors. At the top of the list for privacy:

  • ZRTP support - browsers currently support DTLS-SRTP, which is based on X.509. ZRTP is more like PGP, a democratic and distributed peer-to-peer privacy solution without needing to trust some central certificate authority.
  • TLS with PGP - the TLS protocol used to secure the WebSocket signalling channel is also based on X.509 with the risk of a central certificate authority. There is increasing chatter about the need for TLS to use PGP instead of X.509 and WebRTC would be a big winner if this were to eventuate and be combined with ZRTP.

You may think "I'll believe it when I see it". Each of these features, including WebRTC itself, is a piece of the puzzle and even solving one piece at a time brings people further out of danger from the proprietary mess the world lives with today.

To find out more about practical integration of WebRTC into free software solutions, consider coming to my talk at xTupleCon in October.

Interesting new development in Slovakia: Fight for Open Standards continues on new battleground

Matej's blog » FSFE | 13:05, Monday, 18 August 2014

Readers who have been following FSFE's work for some time will be familiar with our EURA vs. Slovak Tax Authority campaign. Today, it seems, Slovakia still doesn't care about Open Standards. While other EU Member States are establishing a practice of using Open Standards and encouraging the use of Free Software (see the latest news about Great Britain or Spain), Slovakia is, more than two years after the EURA scandal began, still struggling with this issue.

Land of the (un)free

The present problem was created by the adoption of new legislation concerning the transfer of agricultural land. Under the new act (see §4(3), only in Slovak), people who own agricultural land and want to sell it must first publish an offer on the web page of the Ministry of Agriculture (there are some exceptions, but the majority of owners must follow this procedure). This is where the trouble begins. In order to submit an offer to the Ministry's web page, you need to use additional software. And, as you might have guessed, this software is only available for Windows. The problem is even more severe when you realize that this is the only way to submit the offer. There is no paper form to fill in; owners of land must do this electronically. This effectively means that users of other operating systems (such as GNU/Linux, but also other proprietary OSes) cannot comply with their legal obligation. The only way is to get a computer with the single supported OS. Sounds familiar? Yes, this is the EURA situation all over again: the state supporting technological lock-in for one kind of product.

And more than that: the Ministry's behavior (and not only the Ministry's, as this problem extends throughout the whole government, as you'll see below) is not just unacceptable but outright unlawful. Since 2008 there has been a binding regulation, issued by the Ministry of Finance, that sets technical standards for the government's information systems. The current version of this regulation states in its annex, point 8.11:

"Neither code nor content of the web page shall presume or request, that user must be using specific operating system, specific browser, active sound output or similar measure [author's translation]"

The Ministry's website clearly does not comply with this provision, and it is also in contradiction with the European Interoperability Framework.

Luckily, thanks to the dedicated work of our former colleague Martin Husovec, currently a Legal Counsel for the Slovak non-profit organization European Information Society Institute (EISi), this should soon be history. EISi's legal team decided to address this issue and is now calling upon the Ministry to end this practice. A few days ago they sent a letter to the Ministry explaining the situation and demanding a remedy. The Ministry now has until the end of October: if it does not comply with the letter and provide an interoperable solution, EISi is not afraid to go to court. I'm quite happy to say that EISi's effort didn't go unnoticed: it got the attention of the media, coverage in a national newspaper and support from the public.

But wait! There’s more!

After a bit of research, I found out that the problem might be even more substantial. In December 2013, Slovakia introduced a new form of ID card, the so-called "eID card". This card serves as a standard ID with one improvement: it contains a chip, thanks to which citizens can communicate electronically with the authorities and use other "e-Government" services. All you need is a USB cable and a computer. And where's the catch? Not hard to guess: in order to use your eID card, you need to install a client and drivers which are only available for one platform, Windows. This is basically the same issue as described in the previous paragraphs; the website of the Ministry of Agriculture requires this software in order to submit an offer for selling land.

This means that users of other OSes like GNU/Linux are not only excluded from the possibility of selling agricultural land; they also cannot use any of the other services offered by the eID card. The eID card is useless for this group of users: they cannot enjoy the advantages of e-Government. Frankly, users of Free Software are usually among the most technically savvy, which makes them perfect candidates to be active "testers" of new technology like eID cards.

The Slovak authorities are very well aware of this issue. According to Slovak news sources, the eID system was supposed to be interoperable in the first quarter of 2014. The latest explanation was offered by the agency NASES, which is responsible for making the whole system interoperable. They say they need more time because the complexity, safety and non-homogeneous nature of GNU/Linux operating systems require more testing. The eID card project was launched in December 2013... how much more time do they need? Why have other OSes been left behind for more than 8 months?

Stay tuned for more! There is also an update scheduled on the status of the EURA vs. Slovak Tax Authorities campaign.

More info here (Only in Slovak, sorry, no English sources available):

- Press Release by EISi – EISi žiada Ministerstvo pôdohospodárstva aby prestalo porušovať predpisy o štandardoch
- National news coverage (paywall) – – Jahnátkovi a spol. hrozí žaloba
- – Právnici sa chystajú zažalovať ministerstvo kvôli vyžadovaniu Windows, 13.8.2014
- – Softvér pre využitie občianskych s čipom na Linuxe a OS X sa opäť posúva, 2.5.2014

Saturday, 16 August 2014

WebRTC: what works, what doesn't - fsfe | 15:49, Saturday, 16 August 2014

With the release of the latest portal update, there are numerous improvements but there are still some known problems too.

The good news is that if you have a web browser, you can probably make successful WebRTC calls from one developer to another without any need to install or configure anything else.

The bad news is that not every permutation of browser and client will work. Here I list some of the limitations so people won't waste time on them.

The SIP proxy supports any SIP client

Just about any SIP client can connect to the proxy server and register. This does not mean that every client will be able to call each other. Generally speaking, modern WebRTC clients will be able to call each other. Standalone softphones or deskphones will call each other. Calling from a normal softphone or deskphone to a WebRTC browser, or vice-versa, will not work though.

Some softphones, like Jitsi, have implemented most of the protocols needed to communicate with WebRTC but have yet to put the finishing touches on them.

Chat should just work for any combination of clients

The new WebRTC frontend supports SIP chat messaging.

There is no presence or buddy list support yet.

You can even use a tool like sipsak to accept or send SIP chats from a script.

Chat works for any client new or old. Although a WebRTC user can't call a softphone user, for example, they can send chats to each other.

WebRTC support in Iceweasel 24 on wheezy systems is very limited

On a wheezy system, the most recent Iceweasel update is version 24.7.

This version supports most of WebRTC but does not support TURN relay servers to help you out of a NAT network.

If you call between two wheezy machines on the same NAT network it will work. If the call has to traverse a NAT boundary it will not work.

Wheezy users need to either download a newer Firefox version or use Chromium.

JsSIP doesn't handle ICE elegantly

Interactive Connectivity Establishment (ICE, RFC 5245) is meant to prevent calls from being answered with missing audio or video streams.

ICE is a mandatory part of WebRTC.

When correctly implemented, the JavaScript application will exchange ICE candidates and run the connectivity checks before alerting anybody that a call is ringing. If the checks fail (for example, with Iceweasel 24 and NAT), the caller should be told the call can't be made and the callee shouldn't be disturbed at all.

JsSIP is not operating in this manner though. It alerts the callee before telling the browser to start the connectivity checks. Then it even waits for the callee to answer. Only then does it tell the browser to start checking connectivity. This is not a fault with the ICE standard or the browser, it is an implementation problem.

Therefore, until this is fully fixed, people may still see some calls that appear to answer but don't have any media stream. After this is fixed, such calls really will be a thing of the past.

Debian RTC testing is more than just a pipe dream

Although these glitches are not ideal for end users, there is a clear roadmap to resolve them.

There is also a growing collection of workarounds to minimize the inconvenience. For example, JSCommunicator has a hack to detect when somebody is using Iceweasel 24 and simply refuses to make the call. See the option require_relay_candidate in the config.js settings file. This also ensures that it will refuse to make a call if the TURN server is offline. Better to give the user a clear error than a call without any audio or video stream.
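For illustration, this is roughly how that setting appears in a JSCommunicator config.js. Only the option name comes from the JSCommunicator documentation; the surrounding structure is a simplified sketch of the settings file:

```javascript
// config.js (JSCommunicator) - simplified sketch, other settings omitted.
var config = {
  // Refuse to attempt a call unless at least one TURN relay candidate
  // was gathered, so users get a clear error instead of a silent
  // audio/video failure:
  require_relay_candidate: true
};
```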

require_relay_candidate is enabled on the public portal because it makes life easier for end users. It is not enabled on the Debian developers' portal because some DDs may be willing to tolerate this issue when testing on a local LAN.

To find out more about practical integration of WebRTC into free software solutions, consider coming to my talk at xTupleCon in October.

The KDE Randa Meeting 2014 in retrospect

Creative Destruction & Me » FLOSS | 13:00, Saturday, 16 August 2014

Leaving Randa after spending a week there at the KDE Randa Meeting 2014 raises mixed feelings. I am really looking forward to coming home and seeing my family, but at the same time the week was so full of action, great collaboration and awesome people that it passed by in an instant and was over so soon. Carving a work week out of the schedule for a hackfest is not an easy feat, especially during summer school break, so the expectations were high. And they have been exceeded in all aspects. A lot of the credit for that goes to the organizer, Mario Fux, and his team of local supporters. The rest goes to the awesome family of KDE contributors that make spending a week on contributing to Free Software so much fun. And of course to the sponsors of the event.

Randa is a place that grows on you. As a big city dweller, I take pride in organizing my time like clockwork, and in fitting a gazillion things into one day. I sometimes pause when crossing the bridge to Friedrichstrasse station to enjoy the view, but only for a couple of seconds, because I have stuff to do. As soon as I boarded the Glacier Express train from Visp to Randa, the last leg of the long journey from Berlin by rail, it became obvious that I was in a different place. The train travels slowly, so slowly that hikers can sometimes keep pace alongside it. Later I learned that it is known as the slowest fast train in the world. It makes up for the relaxed pace with the absolutely magnificent view of the Matter valley. The view of the mountains was so breathtaking it almost made me miss the Randa stop. I arrived at the guest house, moved into a small, spartan room and then joined the group of KDE folks who had already arrived. At first, there was still this nagging feeling that whenever I was idle for 5 minutes, it meant a lack of efficiency, and something had to be done about it. And then everything changed.

The Randa panorama

One day provided enough immersion into the monastery-like setting to make the feeling of time ticking and the conveyor belt constantly advancing go away. That is the moment when I became aware again of the amazing group of people that had gathered around me to work on KDE. Not just the fact that fifty people travelled halfway around the world to create software that is free, but also what wonderful people they are. The attendees were a mirror image of the KDE community at large: young and old, women and men, from dozens of different countries, with all sorts of beliefs, and a unifying passion to contribute to a common good. At a time when not a day passes without news about atrocities committed over mundane details like whose prophet is more prophet-like, imagine such a diverse group not just spending a week together without a minute of conflict, but devoting that time to building something, and then giving it away freely without discriminating by use or user. That is the spirit of Free Software for me, and it may explain why it means more than getting stuff for free.

Two year old news

So we went to work. The air was fresh, and there were no distractions (not even the internet, because it broke :-) ), and we spent our days alternating between coding, talking, eating, taking walks and sleeping. A number of “special interest groups” formed rather organically to work on educational software, the KDE SDK, porting to KDE Frameworks 5, Gluon, Kdenlive, Amarok and the KDE Frameworks Cookbook. Every day at lunch, the groups reported on their progress. As it turns out, the velocity of the team was quite impressive, even though there were no managers. Or because of it, who knows. There are plenty of blog posts and details about how the work progressed on the sprint page.

Swiss slow food. Delicious.

Speaking of lunch and food in general – a small band of local supporters catered to our every whim, like coffee 24 hours a day and a fridge full of FreeBeer. With incredible supportiveness and delicious Swiss cuisine, they helped to make this meeting possible. While they received multiple rounds of applause, I do not think we can thank them enough. Just like the work that Mario Fux does to organize these meetings is priceless. Personally, I am hugely grateful for their commitment, which made this meeting and the previous ones possible. And I very much hope that it will make the next one possible too – I will do my best to be there again. See you in Randa next year!

Filed under: Coding, CreativeDestruction, English, FLOSS, KDE, OSS, Qt

Thursday, 14 August 2014

Bug tracker or trouble ticket system? - fsfe | 05:04, Thursday, 14 August 2014

One of the issues that comes up from time to time in many organizations and projects (both community and commercial ventures) is the question of how to manage bug reports, feature requests and support requests.

There are a number of open source solutions and proprietary solutions too. I've never seen a proprietary solution that offers any significant benefit over the free and open solutions, so this blog only looks at those that are free and open.

Support request or bug?

One common point of contention is the distinction between support requests and bugs. Users do not always know the difference.

Some systems, like the GitHub issue tracker, gather all the requests together in a single list. Calling them "Issues" invites people to submit just about anything, such as "I forgot my password".

At the other extreme, some organisations are so keen to keep support requests away from their developers that they operate two systems and a designated support team copies genuine bugs from the customer-facing trouble-ticket/CRM system to the bug tracker. This reduces the amount of spam that hits the development team but there is overhead in running multiple systems and having staff doing cut and paste.

Will people use it?

Another common problem is that a full bug report template is overkill for some issues. If a user is asking for help with some trivial task and the tool asks them to answer twenty questions about their system and application version, submit log files and meet other requirements, then they won't use it at all and may just revert to sending emails or making phone calls.

Ideally, it should be possible to demand such details only when necessary. For example, if a support engineer routes a request to a queue for developers, then the system may guide the support engineer to make sure the ticket includes attributes that a ticket in the developers' queue should have.
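
As a sketch of that idea (the queue names and ticket fields here are hypothetical, not taken from any particular tracker), the per-queue requirements could look like this:

```python
# Hypothetical sketch: each queue declares the attributes a ticket
# must carry before it may be routed there. Field names are made up.
REQUIRED_FIELDS = {
    "support": {"summary"},
    "developers": {"summary", "app_version", "log_file"},
}

def missing_fields(ticket, queue):
    """Return the fields that are still empty but that `queue` demands."""
    present = {name for name, value in ticket.items() if value}
    return sorted(REQUIRED_FIELDS[queue] - present)

ticket = {"summary": "I lost my password", "app_version": "", "log_file": ""}
print(missing_fields(ticket, "support"))     # -> []
print(missing_fields(ticket, "developers"))  # -> ['app_version', 'log_file']
```

A real system would prompt the support engineer for exactly those missing attributes at routing time, rather than demanding everything up front from every user.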

Beyond Perl

Some of the most well known systems in this space are Bugzilla, Request Tracker and OTRS. All of these solutions are developed in Perl.

These days, Python, JavaScript/Node.js and Java have taken more market share and Perl is chosen less frequently for new projects. Perl skills are declining and younger developers have usually encountered Python as their main scripting language at university.

My personal perspective is that this hinders the ability of Perl projects to attract new blood or leverage the benefits of new Python modules that don't exist in Perl at all.

Bugzilla has fallen out of the Debian and Ubuntu distributions after squeeze due to its complexity. In contrast, Fedora carries the Bugzilla packages and also uses it as their main bug tracker.


I recently started having a look at the range of options in the Wikipedia list of bug tracking systems.

Some of the trends that appear:

  • Many appear to be bug tracking systems rather than issue tracking / general-purpose support systems. How well do they accept non-development issues and keep them from spamming the developers, while still providing useful features for the subset of users who are doing development?
  • A number of them try to bundle other technologies, like wiki or FAQ systems: but how well do they work with existing wikis? This trend towards monolithic products is slightly dangerous. In my own view, a wiki embedded in some other product may not be as well supported as one of the leading purpose-built wikis.
  • Some of them also appear to offer various levels of project management. For development tasks, it is just about essential for dependencies and a roadmap to be tightly integrated with the bug/feature tracker but does it make the system more cumbersome for people dealing with support requests? Many support requests, like "I've lost my password", don't really have any relationship with project management or a project roadmap.
  • Not all appear to handle incoming requests by email. Bug tracking systems can be purely web/form-based, but email is useful for helpdesk systems.


This leaves me with some of the following questions:

  • Which of these systems can be used as a general purpose help-desk / CRM / trouble-ticket system while also being a full bug and project management tool for developers?
  • For those systems that don't work well for both use cases, which combinations of trouble-ticket system + bug manager are most effective, preferably with some automated integration?
  • Which are more extendable with modern programming practices, such as Python scripting and using Git?
  • Which are more future proof, with choice of database backend, easy upgrades, packages in official distributions like Debian, Ubuntu and Fedora, scalability, IPv6 support?
  • Which of them are suitable for the public internet and which are only considered suitable for private access?

Wednesday, 13 August 2014

WebRTC in CRM/ERP solutions at xTupleCon 2014 - fsfe | 18:29, Wednesday, 13 August 2014

In October this year I'll be visiting the US and Canada for some conferences and a wedding. The first event will be xTupleCon 2014 in Norfolk, Virginia. xTuple make the popular open source accounting and CRM suite PostBooks. The event kicks off with a keynote from Apple co-founder Steve Wozniak on the evening of October 14. On October 16 I'll be making a presentation about how JSCommunicator makes it easy to add click-to-call real-time communications (RTC) to any other web-based product without requiring any browser plugins or third party softphones.

Juliana Louback has been busy extending JSCommunicator as part of her Google Summer of Code project. When finished, we hope to quickly roll out the latest version of JSCommunicator to other sites, including the WebRTC portal for the Debian Developer community. Juliana has also started working on wrapping JSCommunicator into a module for the new xTuple / PostBooks web-based CRM. Versatility is one of the main goals of the JSCommunicator project and it will be exciting to demonstrate this in action at xTupleCon.

xTupleCon discounts for developers

xTuple has advised that they will offer a discount to other open source developers and contributors who wish to attend any part of their event. For details, please contact xTuple directly through this form. Please note it is getting close to their deadline for registration and discounted hotel bookings.

Potential WebRTC / JavaScript meet-up in Norfolk area

For those who don't or can't attend xTupleCon there has been some informal discussion about a small WebRTC-hacking event at some time on 15 or 16 October. Please email me privately if you may be interested.

Fedora Flock 2014

nikos.roussos | 13:51, Wednesday, 13 August 2014

Fedora Flock took place last week and this is a log entry with my personal highlights.

Flock 2014

Overall, Flock was awesome. The quality of all the technical presentations/workshops was really high. It's amazing how many things are currently going on in the Fedora community, not just related to our Operating System (the distribution), but also innovative things that we develop or lead that in the long run benefit the whole Free Software community. As always, I had the chance to meet, talk and collaborate in person with many Fedorians, and that's always motivating for my contribution to the project.

So here it goes... This is currently the most important thing happening regarding the distribution. We are about to release Fedora 21 in three different products (Workstation, Cloud, Server) that will make it possible to offer a better user experience to each of these user groups. Some features that pop into my mind: the Server product will implement "server roles" right in the Anaconda installer, so you can quickly deploy (for instance) a mail server. Cockpit, an awesome server management tool, will also land in 21. The Cloud product will focus even more on containers (yes, that means Docker), open source infrastructure (e.g. OpenStack) and cloud services (e.g. AWS). The Workstation product is intended for desktop users and will focus on developers and makers. DevAssistant will play a key role in this. It always surprises me to see developers struggle to set up their work environment on operating systems where it takes hours, for things that are a few minutes' work on Fedora.


Docker, docker, docker: Aditya did a great introductory workshop during Flock. Fedora is definitely the leading platform for Docker. The next release will improve Docker integration even further.

Ansible: Another DevOps area where the Fedora community has given a lot of time and effort. Again, Aditya did an introductory talk, since the Infrastructure team recently migrated everything from Puppet to Ansible. On the last day, Praveen did a workshop demonstrating in practice how Ansible can be combined with Jenkins for Continuous Integration.

Packaging: One way that I contribute to the project is through RPM packaging, so I tried to participate in most of the relevant talks/workshops. Amita Sharma walked us through the Fedora QA workflow, Jan Zeleny presented the roadmap for RPM and DNF (the yum replacement), Haïkel Guémar coordinated a package review hackfest and Cole Robinson showed how packagers can utilize virtualization tools for testing things.

Communications: New communication and collaboration tools are on the way. This is not directly related to the project, but it's Fedora people who drive the development of these. HyperKitty will be the web interface for the upcoming Mailman 3, Waartaa is a web app for IRC/WebRTC and Glitter Gallery is a collaboration platform for designers which uses git as the backend and SparkleShare as the sync client (I maintain the package for Fedora, so I'm really interested to see how this will go).

Novena: Sean Cross gave a keynote speech about the Novena project, the fully Open Source laptop. It's still early days, but it seems really promising.


KDE Frameworks Book Sprint at the Randa Meeting 2014

Creative Destruction & Me » FLOSS | 00:00, Wednesday, 13 August 2014

A couple of weeks before the KDE Randa Meeting of 2014, the meeting’s organizer Mario Fux suggested holding a book sprint to help developers adopt the newly released KDE Frameworks 5. In the Open Source spirit, the idea was not to start from scratch, but rather to collect the various bits and pieces of documentation that exist in different places into one location that is more approachable for developers than before. Valorie Zimmerman stepped up to organize it (many thanks!), and a number of people volunteered to take part. Within a week, the project completely changed its orientation, struggled, and finally found and redefined its place as a part of KDE’s documentation efforts. And it produced an initial version of the book, which is currently circulating on people’s ebook readers around here.

The first question to clarify and find consensus about was that of the audience of the book, and that also made an inherent conflict apparent. The target audience was defined as readers with programming experience in C++ and some Qt. The book is also supposed to be useful to experienced KDE programmers as an easy way to jump into frameworks and programming mechanisms that they are not familiar with. The inherent conflict was that the intended contributors to the book are those experienced KDE programmers, ideally the authors of the frameworks, and they do not have an itch to scratch by spending their time writing for newcomers. A rather convincing compromise for everybody is that the book will fulfill a specific role, and that is to complete the spectrum of development documentation going from very basic questions (of which many today are better answered in the Qt Project forum) to cookbook-style feature stories (this is the part that was missing) all the way to the already quite good API documentation.

Hack sprint fuel, the Swiss way
Hack sprint fuel, the Swiss way

The second question was that of tools. The flaky network was just the final blow to the expectation that the en vogue authoring tools (Booktype, Flossmanuals & co) would do the trick. They simply do not integrate well with the workflows of developers and a Free Software community in general, which are almost exclusively based on revision control and infrastructure that ties the pieces together. Information like text articles and code snippets is simply never supposed to be copy-pasted from one place to the next, as this would only create hard-to-spot errors and duplicate work. So while it sounded like a compromise to fall back to using a git repository of Markdown files in combination with pulling in the actual frameworks as submodules, it turned out to be the saving grace. There are still open questions, but the general approach is serving us well so far.

The last big question was that of how to integrate the book into the existing infrastructure. We decided to table a lot of these details in order to focus on producing content. In the end, the content is simple to convert and integrate into all kinds of formats. The vision, however, is to have the production of the book in HTML, PDF and ePub formats integrated into CI, with the current version of the book available all the time. This also includes deciding how to version and “release” the book. An option is to polish and copy-edit a new edition in the same cadence as the KDE software releases. This way, the common spirit of “let’s get it done and out there” could also help keep the cookbook up to date. Let’s see.

If you are interested in jumping in to help or want to see what the current state of the book looks like, Valorie has all the details in her post.

Filed under: Coding, CreativeDestruction, English, FLOSS, KDE, OSS, Qt Tagged: Creative Destruction, defensive publications, FLOSS, free software communities, KDE, kde community, OIN

Monday, 11 August 2014

The student

André on Free Software » English | 18:49, Monday, 11 August 2014

If you live in a university city you sometimes have contact with a student.

Monday night. An athletic man stands in front of my door.

Good day. Our company of students is giving you the subscription card for your computer.

I say I didn’t know about this.

It’s only € 15,- a month. We will take care of your computer problems. Your whole neighborhood is invited to join as well.

I object and say I do not have computer problems.

You’re not having computer problems? What version of Windows are you on?

I say I run the free Debian GNU/Linux operating system.

The student becomes angry and goes away. I close the door and make a backup of my computer.


Adding gigabit ethernet to a WiFi-only laptop

the_unconventional's blog » English | 17:25, Monday, 11 August 2014

As many people will probably have read, I’ve been using an Acer C720 Chromebook with Debian for the last couple of months. The C720 is a great compact Free Software machine, but due to its small dimensions, there is one often-needed feature missing: an ethernet port.

At first, I thought that I would manage with WiFi only, but as the weeks passed, it became more and more obvious that there are many situations in which one could really use an ethernet connection.

Obviously, I wouldn’t go with anything less than gigabit. After all, there’s a USB3 port available, so why would I settle for a 240~260Mbps bandwidth cap?

I came across the LevelOne USB-0401, a USB3 gigabit ethernet controller, which did exactly what I was looking for: turn the USB3 port into a gigabit ethernet port whenever I needed it.

The USB-0401 incorporates an ASIX AX88179 chipset, using the ax88179_178a Linux kernel driver. It works out of the box on Debian jessie with the Linux 3.14 kernel. Just plug it in and you’ll get an eth0 interface that you can configure with Network Manager.

usb 2-1: new SuperSpeed USB device number 3 using xhci_hcd
usb 2-1: New USB device found, idVendor=0b95, idProduct=1790
usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 2-1: Product: AX88179
usb 2-1: Manufacturer: ASIX Elec. Corp.
ax88179_178a 2-1:1.0 eth0: register 'ax88179_178a' at usb-0000:00:14.0-1, ASIX AX88179 USB 3.0 Gigabit Ethernet
IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
ax88179_178a 2-1:1.0 eth0: ax88179 - Link status is: 1
IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
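
Besides the kernel log, you can confirm which driver a running interface is bound to by reading the driver symlink that the kernel exposes under sysfs. A minimal sketch (the interface name depends on your system; eth0 is assumed here):

```python
import os

def interface_driver(ifname, sysfs="/sys/class/net"):
    """Return the name of the kernel driver bound to a network
    interface, or None if no driver symlink is present (as is the
    case for purely virtual interfaces like lo)."""
    link = os.path.join(sysfs, ifname, "device", "driver")
    if not os.path.islink(link):
        return None
    # The symlink points at the driver's directory in the module tree;
    # its basename is the driver name, e.g. "ax88179_178a".
    return os.path.basename(os.readlink(link))

# With the adapter plugged in, interface_driver("eth0") should
# report "ax88179_178a".
```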

I’ve tested the connection for over two hours with my C720 connected directly to the Cisco gigabit switch upstairs, and the connection is very stable. Pings to my friend’s VPS are always around 10ms, and pings to my Raspberry Pi connected to the same switch easily stay below 1ms, even after hundreds of tries.

Various speed tests confirm that the adapter has no problems maxing out my 50/5Mbps DOCSIS connection, and pulling in large files over SFTP reaches speeds of around 60MB/s (480Mbps), which is probably bottlenecked by the little SSD’s maximum sustained write throughput rather than the ethernet controller.

There’s not really much more to say. If you’re looking for a GNU/Linux-compatible USB3 gigabit ethernet adapter, the LevelOne USB-0401 is a nice little box that does exactly what it’s told, without requiring any proprietary software to do so, as the ax88179_178a driver is GPL’d.

kevin@vanadium:~$ sudo modinfo ax88179_178a
filename:       /lib/modules/3.14-2-amd64/kernel/drivers/net/usb/ax88179_178a.ko
license:        GPL
description:    ASIX AX88179/178A based USB 3.0/2.0 Gigabit Ethernet Devices
alias:          usb:v17EFp304Bd*dc*dsc*dp*ic*isc*ip*in*
alias:          usb:v04E8pA100d*dc*dsc*dp*ic*isc*ip*in*
alias:          usb:v0DF6p0072d*dc*dsc*dp*ic*isc*ip*in*
alias:          usb:v2001p4A00d*dc*dsc*dp*ic*isc*ip*in*
alias:          usb:v0B95p178Ad*dc*dsc*dp*ic*isc*ip*in*
alias:          usb:v0B95p1790d*dc*dsc*dp*ic*isc*ip*in*
depends:        usbnet,usbcore,mii
intree:         Y
vermagic:       3.14-2-amd64 SMP mod_unload modversions

TTIP & CETA: a few reasons for free software advocates to get angry

Thinking out loud » English | 12:19, Monday, 11 August 2014

In the last months of my internship at FSFE, I started following the debates around TTIP (the Transatlantic Trade and Investment Partnership). TTIP is a huge “deep and comprehensive” free trade agreement between the US and the European Union, which has been under negotiation since July 2013. Well prepared by the great ACTA scandal of 2012 and by the November 2013 leak of the Trans-Pacific Partnership (TPP)’s Intellectual Property chapter (thanks, Wikileaks), we immediately looked for the impact of TTIP on software patents, copyright enforcement and DRM (digital rights management) circumvention. After a few months working full time on the topic, I now believe that the problem is much deeper than any of these specific issues.

Policy Laundering: democracy isn’t competitive

This week Tyler Snell wrote an outstanding article about the “deep and comprehensive” trade agreements. He develops the useful concept of policy laundering. Here is what he says:

A growing pernicious trend that is greatly affecting digital policy around the world is called “policy laundering” – the use of secretive international trade agreements to pressure countries to commit to restrictive or overly broad laws that would not ordinarily pass a transparent, democratic process.

Not only is the behind-closed-doors procedure questionable, many of the representatives negotiating such agreements are not elected representatives but rather trade appointees and powerful multinational corporate lobbyists. Policy laundering deprives each jurisdiction, and most important their citizens, the chance to engage in a legitimate legislative debate.

Provisions that failed to pass democratically elected parliaments, brought back to life through international treaties: it sounds all too familiar. A few reasons why the trick works:

  • The agreements are negotiated for years, in secret.

1) Access to a limited number of negotiating documents is only granted to the few MPs who are members of trade committees (INTA in the case of the European Parliament). This access is provided via “secure reading rooms”. They can take no notes and bring no devices. They cannot come with staff members specialized in trade policy. Given the complexity of trade deals’ texts (often over 1000 pages of legalese) and the fact that the devil is always in the details, they are highly unlikely to manage anything useful with this kind of “access”.

2) Even if they do spot something harmful in the negotiating text, no elected representative sits at the negotiating table. They have very little leverage to influence the negotiating process whatsoever.

3) Secrecy around the negotiations makes campaigns very difficult. Activists are often forced to fight without knowing what they are fighting, until it is too late (building a campaign takes time). They either restrain themselves so as not to say anything untrue, or base their work on guesses. It is then easy for the deal’s promoters to dismiss the critics as “lies” or “misleading”. Reversing the burden of proof this way is highly undemocratic in itself.

Fortunately, there are still great people working in the institutions. Documents are leaked surprisingly often, enabling us to criticize secrecy AND to know what we are talking about when discussing the content. Thank you sources.


  •  The ratification process is hardly democratic

After years of negotiation, the deals finally arrive in front of parliaments for ratification. MPs are then supposed to say a simple “yes” or “no” to a thousand pages of text, likely to include both good and terrible things. In the case of bad wording, loopholes, blurry definitions or harmful provisions, MPs cannot amend; they can only reject the whole agreement. The pressure to ratify, after “so much effort put into the negotiating process”, is enormous, and trade deals are therefore usually overwhelmingly ratified.

Fortunately, sometimes things do not go as planned, like in the ACTA case.


  •  The deals can be put in application before complete ratification

In Europe, trade and investment are EU competences. However, “deep and comprehensive agreements” cover more, including sectors that are still competences of the Member States. After ratification by the European Parliament, such deals must therefore also be ratified by national parliaments. Because democratic processes take too long, the deals can enter into force provisionally, which means before ratification – the vote of elected representatives. The provisional entry into force exerts pressure on national parliaments to vote in favor of the agreement. When ratification might be difficult, or even end in rejection in a country or region (for federal states), the ratification is simply not put on the parliaments’ agenda and the provisional implementation stays in force.

Worse, states that did not ratify a trade or investment agreement, but have it in force because of provisional application, can still be attacked via the dispute settlement mechanisms included in the deals.

Another great article recently analyzed the $50.02 billion (yes, yes) Yukos ISDS case (Investor-State Dispute Settlement) under the Energy Charter Treaty. Independently from the rest of the story, the author notes that:

[it] needs to be emphasized here that Russia only accepted the provisional application of the Energy Charter Treaty (pending ratification) in 1994 meaning that the country will only apply the Treaty “to the extent that such provisional application is not inconsistent with its constitution, law or regulations.” Same was the approach adopted by Belarus, Iceland, Norway and Australia.

Russia never ratified the ECT and announced its decision to not become a Contracting Party to it on August 20, 2009. As per the procedures laid down in the Treaty, Russia officially withdrew from the ECT with effect from October 19, 2009.

Nevertheless, Russia is bound by its commitments under the ECT till October 19, 2029 because of Article 45 (3) (b) which states that “In the event that a signatory terminates provisional application…any Investments made in its Area during such provisional application by Investors of other signatories shall nevertheless remain in effect with respect to those Investments for twenty years following the effective date of termination.”

I strongly encourage you to read the whole analysis. What it means is that a government can sign any treaty, with far-reaching consequences for its population, economy, environment and ability to legislate, and the state can face all the consequences of the agreement without ever having asked for the parliament’s approval.


What does it all mean for free software

Free software is important, but it is only one of many crucial policy issues. Trade deals like TTIP or CETA can have an impact on free software – and I will describe concretely how in a future post – just like on everything else you care about, be it pesticide control, financial regulation or animal welfare.

More importantly, they modify policy making as a whole, making it less transparent, less democratic and harder to reform for the best.

However, if enough pressure is put on members of the European Parliament, 2014 might see the second strong rejection of a dangerous secret deal, after ACTA. CETA (the Comprehensive Economic and Trade Agreement), the EU-Canada deep and comprehensive free trade agreement, was concluded last week. It will be initialled in September and its ratification process in the European Parliament will start this autumn. A good moment to send a strong message to the European Commission and to our governments: policy laundering is not a legitimate way to legislate, and it never should be.

Sunday, 10 August 2014

Moscow’s Crypto InstallFest

stargrave's blog | 09:20, Sunday, 10 August 2014

There was a 3-in-1 event in Moscow a week ago: a PGP keysigning party, a cryptoparty and an installfest, called Crypto InstallFest, organized by Russia’s Pirate Party. Several dozen people came. There were various speeches and lectures, a video link and chat with Runa Sandvik, and finally workshops. Some people were helped with installing Ubuntu, some with PGP (GnuPG, of course). There were also many discussions about cryptocurrencies and Bitcoin (which I cannot call a cryptocurrency). There are some discussions and photographs on social networks: Vkontakte, Facebook.

Friday, 08 August 2014

The Jamendo experiment – “week” 1

Hook’s Humble Homepage | 22:00, Friday, 08 August 2014

As forecast in a previous blog post, this is the first "weekly" report from my Jamendo experiment. In the first part I will talk a bit about the player that I use (Amarok), after that comes a short report on where I get my music fix now and how it fares, and in the end I will introduce some artists and albums that I found on Jamendo and like.

Amarok 2.0.2 sadly has a bug that makes it miss some Jamendo albums. This makes searching for and playing Jamendo albums directly from Amarok a bit less than perfect and forces me to still use Firefox (and Adobe Flash) to browse music on Jamendo. Otherwise, Amarok in its 2.x versions has become an amazing application, or even platform if you will, not only for playing and organising, but also for discovering new music. You can even mix your local collection with tracks from web services and even streams in the same playlist.

Most of the music I got directly from Jamendo, a bit less I listened to online from Magnatune, and the rest were streams from Last.FM (mostly from my recommendations). As for the music on Jamendo and Magnatune – both offer almost exclusively CC-licensed music – I honestly found it just as good, if not better, than what conservative record labels and stations offer. This could in part be because of my music taste, but even so, I am rather picky with music. As far as the quality of the sound is concerned, being able to download music in Ogg/Vorbis (quality 7) made me smile, and my ears as well. If only I had a better set of headphones!

Now here's the list of artists that I absolutely must share:


Jimmy the Hideous Penguin – Jimmy Penguin is by far my absolute favorite artist right now! His experimental scratching style over piano music is just godly to my ears – the disrhythmia that his scratching brings over the standard hip hop beats, piano and/or electronica is just genius! The first album that made me fall in love was Jimmy Penguin's New Ideas – it starts with six tracks called ff1 to ff6, with already the first one (ff1) showing a nice melange of broken sampling layered with a melody, and even over that lies some well-placed scratching. The whole album is amazing! From the previously mentioned ff* tracks, I would especially like to put into the limelight, apart from ff1, also ff3 and ff4. ff6 (A Long Way to Go) and Polish Jazz Thing bear some jazz elements as well, while Fucking ABBA feels like flirting with R&B/UK garage. On the other hand, the album Split Decisions has more electronic elements in it and feels a bit more meditative, if you will. The last of his albums that I looked at was Summer Time, which I have not listened to thoroughly enough, but so far I like it a lot and it's nice to see Jimmy Penguin take on even more styles, as the track Jimmy Didn't Name It has some unmistakable Asian influences.


No Hair on Head – very enjoyable lounge/chillout electronica. Walking on Light is the artist's first album and is a collection of some of his tracks made in the past 5 years. It's great to see that outside the mainstream, artists are still trying to make albums that make sense – consistent in style, but still diverse enough – and this album is just such. The first track Please! is not a bad start to the album, Inducio is also a nice lively track, but what I think could be hits are the tracks Anywhere You Want and Fiesta en Bogotá – the first one starts rather conventionally, but then develops into a very nice pop-ish, almost house-like summery electronic song with tongue-in-cheek lyrics; the latter features an accordion and to me feels somehow like driving through Provence or Karst (although Bogotá actually lies in Colombia).

Electronoid – great breakbeat! If you like Daft Punk's album Homework or less popular tracks by the Chemical Brothers, you will most probably enjoy Electronoid (album) as well.

Morning Boy – great mix of post punk with pop-ish elements. On their album For us, the drifters. For them, the Bench, the song Maryland reminds me of Dinosaur Jr., while Whatever reminds me of Joan of Arc with added pop. All Your Sorrows, though, is probably the track I like best so far – it just bursts with positive attitude while still being somewhat mellow.

Bilk (archived) – fast German pop punk with female vocals that borders on the Neue Deutsche Welle movement of the '80s. Their album Ich will hier raus (archived) is not bad and might even compare to better-known contemporary artists like Wir sind Helden. Update: Sadly they removed themselves from Jamendo; they have their own website now, but unfortunately there is no licensing info available about the music.

Ben Othman – so far I have listened to two of his albums – namely Lounge Café Tunis "Intellectuel" and Lounge Café Tunis "Sahria" – they consist of good lounge/chillout music with at times very present Arabic influences.

Silence – this seems like a very popular artist, but so far I only managed to skim through the album L'autre endroit. It seems like a decent mix of trip-hop with occasional electric guitars and other instruments. Sometimes it bears elements of IDM and/or dark or industrial influences. I feel it is too early for me to judge whether it conforms to my taste, but it looks like an artist to keep an eye on.

Project Divinity – enjoyable, very calm ambient new age music. The mellowness and openness of the album Divinity is very easy on the ears and cannot be anything other than calming.

SoLaRis – decent goatrance, sometimes wading even into the dark psytrance waters.

Team9 – after listening to some of their tracks on Jamendo, I decided to download their full album We Don't Disco (for free, under the CC-BY-SA license) from their (archived) homepage. Team9 is better known for their inventive remixes of other artists' songs, but their own work is at least as amazing! They describe themselves as "melodic, ambient and twisted" and compare themselves to "Vangelis and Jean Michel Jarre taking Royksopp and Fad Gadget out the back of the kebab shop for a smoke" – both descriptions suit them very well. The whole album is great; maybe the title track We Don't Disco Like We Used To and the track Aesthetic Athletics stand out a bit more because they feel a bit more oldskool and disco-ish than the rest, but quality-wise the rest of the tracks are just as amazing!

As you can see, listening only to free (as in speech, not only as in beer) music is not only possible, but quite enjoyable! There is a real alternative out there! Tons of great artists are just waiting to be listened to – and that, ultimately, is what music is all about!

hook out → going to bed…

Help needed reviewing Ganglia GSoC changes - fsfe | 20:14, Friday, 08 August 2014

The Ganglia project has been delighted to have Google's support for 5 students in Google Summer of Code 2014. The program officially finishes in ten more days, on 18 August.

If you are a user of Ganglia, Nagios, RRDtool or R or just an enthusiastic C or Python developer, you may be able to use and provide feedback for the students while benefitting from the cool new features they have been working on.

  • Chandrika Parimoo (Python, Nagios and some syslog): Chandrika generalized some of my ganglia-nagios-bridge code into the PyNag library. I then used it as the basis for syslog-nagios-bridge. Chandrika has also done some work on improving the ganglia-nagios-bridge configuration file format.
  • Oliver Hamm (C): Oliver has been working on metrics about Ganglia infrastructure. If you have a large and dynamic Ganglia cloud, this is for you.
  • Plamen Dimitrov (R, RRDtool): Plamen has been building an R plugin for inspecting RRD files from Ganglia or any other type of RRD.
  • Rana (NVIDIA, C): Rana has been working on improvements to Ganglia monitoring of NVIDIA GPUs, especially in HPC clusters.
  • Zhi An (Java, JMX): Zhi An has been extending the JMXetric and gmetric4j projects to provide more convenient monitoring of Java server processes.

If you have any feedback or questions, please feel free to discuss on the Ganglia-general mailing list and CC the student and their mentor.

Wednesday, 06 August 2014

How to write your Pelican-powered blog using ownCloud and WebDAV

Hook’s Humble Homepage | 22:00, Wednesday, 06 August 2014

Originally this HowTo was part of my last post – a lengthy piece about how I migrated my blog to Pelican. As this specific modification might be more interesting than reading the whole thing, I decided to fork and extend it.

What and why?

What I wanted was to be able to add, edit and delete Pelican content from anywhere, so that whenever inspiration strikes I can simply take out my phone or open up a web browser and create a rough draft. Basically a makeshift mobile and desktop blogging app.

I decided that the easiest way to do this was to access my content via WebDAV through the ownCloud instance that runs on the same server.

Why not Git and hooks?

The answer is quite simple: because I do not need it and it adds another layer of complication.

I know many use Git and its hooks to keep track of changes as well as for backups and for pushing from remote machines onto the server. And that is a very fine way of running it, especially if there are several users committing to it.

But for the following reasons, I do not need it:

  • I already include this page with its MarkDown sources, settings and the HTML output in my standard RSnapshot backup scheme of this server, so no need for that;
  • I want to sometimes draft my posts on my mobile and Git and Vim on a touch-screen are just annoying to use;
  • this is a personal blog, so the distributed VCS side of Git is just an overhead really;
  • there is no added benefit to sharing the MarkDown sources on-line, if all the HTML sources are public anyway.

Setting up the server

Pairing up Pelican and ownCloud

In ownCloud it is very easy to mount external storage, and a folder local to the server is still considered “external”, as it lies outside of ownCloud. Needless to say, there is a nice GUI for that.

Once you open up the Admin page in ownCloud, you will see the External Storage settings. For security reasons only admins can mount a local folder, so if you aren’t one, you will not see Local as an option and you will have to ask your friendly ownCloud sysAdmin to add the folder from his Admin page for you.

If that is not an option, on a GNU/Linux server there is an easy, yet hackish solution as well: just link Pelican’s content folder into your ownCloud user’s file system – e.g:

ln -s /var/www/ /var/www/owncloud/htdocs/data/hook/files/Blog

In order to have the files writable over WebDAV, they need to be writable by the user that PHP and the web server run under – e.g.:

chown -R nginx:nginx /var/www/owncloud/htdocs/data/hook/files/Blog/

Automating page generation and ownership

To have pages regenerated continuously, there is the option of calling pelican --autoreload, and I did consider turning it into an init script, but decided against it for two reasons:

  • it consumes too much CPU power just to check for changes;
  • as on my poor ARM server a full (re-)generation of this blog takes about 6 minutes², I did not want to hammer my system every time I save a minor change.

What I did instead was to create an fcronjob to (re-)generate the website every night at 3 in the morning (and send a mail to root’s default address), under the condition that blog posts have been changed or added since the previous day:

%nightly,mail * 3 cd /var/www/ && posts=(content/**/*.markdown(Nm-1)); if (( $#posts )) LC_ALL="en_GB.utf8" make html

Update: the above command is changed to use Zsh; for the old sh version, use:

%nightly,mail * 3 cd /var/www/ && [[ `find content -iname "*.markdown" -mtime -1` != "" ]] && LC_ALL="en_GB.utf8" make html
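The condition in those one-liners can also be sketched in Python – a hypothetical helper (the function name and the assumption that the Markdown sources live under the given directory are mine, for illustration) mirroring what the zsh glob qualifier (Nm-1) and the find -mtime -1 test check:

```python
import time
from pathlib import Path

def has_recent_changes(content_dir, max_age_hours=24):
    """True if any Markdown source below content_dir changed recently.

    Mirrors the fcron condition: regenerate the site only when a post
    was added or modified during the last day.
    """
    cutoff = time.time() - max_age_hours * 3600
    return any(p.stat().st_mtime >= cutoff
               for p in Path(content_dir).rglob("*.markdown"))
```

If this returns True, the job would go on to run the equivalent of LC_ALL="en_GB.utf8" make html.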

In order to have the file permissions on the content directory always correct for ownCloud (see above), I changed the Makefile a bit. The relevant changes can be seen below:

    chown -R nginx:nginx $(INPUTDIR)

    [ ! -d $(OUTPUTDIR) ] || rm -rf $(OUTPUTDIR)

    chown -R nginx:nginx $(INPUTDIR)

E-mail draft reminder

Not directly relevant, but still useful.

In order not to forget any drafts unattended, I have also set up an FCron job to send me an e-mail with a list of all unfinished drafts to my private address.

It is a very easy hack really, but I find it quite useful to keep track of things – find the said fcronjob below:

%midweekly,mailto( * * cd /var/www/ && ack "Status: draft"
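The draft scan amounts to grepping the Markdown sources for the draft status line; a minimal Python sketch of the same idea (find_drafts is a hypothetical name; the Status: draft marker is taken from the fcronjob above):

```python
from pathlib import Path

def find_drafts(content_dir):
    """Return the paths of all posts still marked `Status: draft`."""
    return [p for p in Path(content_dir).rglob("*.markdown")
            if "Status: draft" in p.read_text(encoding="utf-8")]
```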

Client software

ownNotes

As a mobile client I plan to use ownNotes, because it runs on my Nokia N9 and supports MarkDown highlighting out-of-the-box.

All I needed to do in ownNotes is to provide it with my ownCloud log-in credentials and state Blog as the "Remote Folder Name" in the preferences.

But before I can really make use of ownNotes, I have to wait for it to start properly managing file-name extensions.

ownCloud web interface

Since ownCloud includes a webGUI text editor with MarkDown highlighting out of the box, I sometimes use that as well.

An added bonus is that the Activity feed of ownCloud keeps a log of when which file changed or was added.

It does not seem possible yet to collaboratively edit files other than ODT in ownCloud’s webGUI, but I imagine that might be the case in the future.

Kate via WebDAV

In many other desktop environments it is child’s play to add a WebDAV remote folder — just adding a link to the file manager should be enough, e.g.: webdavs://

KDE’s Dolphin makes it easier for you, because all you have to do is select Remote → Add remote folder, and if you already have a connection to your ownCloud with some other service (e.g. Zanshin and KOrganizer for WebCal), it will suggest all the details to you if you choose Recent connection.

Once you have the remote folder added, you can use it transparently all over KDE. So when you open up Kate, you can simply navigate the remote WebDAV folders, open up the files, edit and save them as if they were local files. It really is as easy as that! ☺

Note: I probably could have also used the more efficient KIO FISH, but I have not bothered with setting up a more complex permission set-up for such a small task. For security reasons it is not possible to log in via SSH using the same user the web server runs under.

SSH and Vim

Of course, it is also possible to ssh to the web server, su to the correct user, edit the files with Vim and let FCron and the Makefile make sure the ownership is set appropriately.

hook out → back to studying Arbitration law

  1. Yes, I am well aware you can run Vim and Git on MeeGo Harmattan and I do use it. But Vim on a touch-screen keyboard is not very fun to use for brainstorming. 

  2. At the time of writing this blog includes 343 articles and 2 pages, which took Pelican 440 seconds to generate on my poor little ARM server (on a normal load). 

A quick overview of the Free Software I’m using

the_unconventional's blog » English | 17:04, Wednesday, 06 August 2014

Because I’ve met quite a lot of newcomers to the Free Software world lately, I thought it might be a good idea to write an overview of the Free Software applications I’m using, and to which end.

Also, writing a piece like this is a great exercise to improve my Dvorak typing skills. :)

Operating system

I’m using the amd64-version of Debian jessie. I started off with a minimal expert install, and then pulled in the packages I wanted using a custom-tailored Bash script.

Desktop environment

I’m currently using GNOME Shell 3.12 pretty much stock. The only changes I’ve made are the Faience icon theme and three shell extensions: AlternateTab, Impatience, and Topicons.

The basic desktop environment tools are even more boring. I use Nautilus as a file manager, File Roller as an archive manager, gedit as a plain text editor, Eye of GNOME as an image viewer, and some other core GNOME utilities such as GNOME Calculator, GNOME Character Map, GNOME Disk Usage Analyzer, GNOME System Monitor, GNOME Terminal, and Seahorse.

My only real problem with Nautilus is that it lacks a bulk renamer tool, so I use pyRenamer for that.

Web browsers

My main browser is Iceweasel, which is Debian’s re-branded version of Firefox. Aside from the artwork and some missing user tracking tools, Firefox and Iceweasel are identical.

I only use one Iceweasel extension: AdBlock Edge.

I also have Chromium installed, which is Google Chrome without its proprietary parts. (Pepper Flash, PDF reader, and user tracking.)

I use two Chromium extensions: AdBlock and PDF.js.

E-mail, address book, calendar, and instant messaging

I’m using Evolution as my groupware suite in conjunction with my own IMAP/SMTP server and my self-hosted ownCloud server.

For instant messaging, I use Gajim in conjunction with my own XMPP/Jabber server, as well as the FSFE’s chat service. I use both GPG and OTR to encrypt my messages.

Gajim only supports Jabber/XMPP, so if you want to use other protocols, Pidgin or Empathy might be better suited, although the latter lacks any kind of encryption support other than TLS to the server.

For encrypted phone calls, I use Jitsi, also with my own XMPP server.
Jitsi seems to be the only VoIP client supporting ZRTP, which is the main reason I’m using it. Aside from that, it’s a buggy, bloated Java application that does not integrate at all with any DE, and its configuration files and debug output are only appealing to masochists.

Office suite

For documents, spreadsheets, and slideshows, I use LibreOffice Writer, Calc, and Impress respectively. I also have LibreOffice Draw and Math installed, although I rarely use those applications.

In order to read PDF documents, I’m using Evince with MuPDF as a backup in case Evince has trouble rendering a document properly. (Although that hasn’t happened in years.)

In order to view eBook formats such as ePUB, I use Calibre, but I have to note that I strongly dislike the entire concept of electronic books; especially when they contain DRM, which can take away the elementary human right to educate yourself.

Audio and Video

I use Aqualung to manage and play my tens of thousands of ripped tracks. It may not be the most user-friendly application, but its gapless playback is unmatched, as is the ease of using LADSPA plugins.

If you’re looking for a more basic music player with internet radio capabilities and Podcast support, you’d probably be better off with Rhythmbox. If you’re looking for a no-nonsense music player, have a look at Audacious.

For any kind of video playback, including watching live TV, I use VLC.

Sometimes I record some of the bass lines I play or do some amateur remastering of poorly-recorded audio tracks (usually bootlegs). I use Audacity for that.

I rarely work with video materials, but when I do, I use OpenShot, which is a pretty basic but stable non-linear video editor. Other options include Pitivi and Kdenlive.

Graphics and photography

For raster image editing, I use GIMP, and for vector image editing, I use Inkscape. For simple drawing/editing I can also recommend Pinta, although I personally don’t have it installed because I seriously hate C#/Mono.

For photo editing, I use Darktable, and for designing printed media (such as flyers), I use Scribus.

For 3D modeling, I use Blender. But then again, who am I kidding.. I hardly ever do anything 3D.

Optical media

Although most people barely use them any more, as a CD collector I often work with optical discs.

I use Asunder to rip CDs, HandBrake to rip DVDs, and Brasero to burn them both.

In order to edit tags and metadata of my ripped CDs, I use EasyTAG.

Maps and GPX

To avoid cycling into the abyss, I have a Garmin eTrex GPS device, which I use with OpenFietsMap maps. In order to create and analyze GPS tracks, waypoints and routes, I use QLandkarte GT.

I also contribute to OpenStreetMap, but I only use a browser for that nowadays.

Decompressing RAR

As much as I dislike the proprietary RAR algorithm, sometimes you really have to open such a file. There even is a “kind of open source” unrar application for GNU/Linux, but it is not Free Software, as the license forbids one from using the code in any field of endeavour, which is unacceptable to me.

There is a way to have RAR support with a truly Free Software solution though: The Unarchiver. Even though it’s primarily a Mac application, it runs fine on GNUstep and integrates seamlessly into File Roller.

Network analysis

In order to keep an eye on my LAN, I’m using a bunch of tools to check the data transmissions. The most familiar would be Wireshark, but I also regularly fire up iftop and nethogs.

Fonts

Fonts are easily overlooked, but the availability of fonts – especially those that are metrically compatible to “popular” proprietary fonts – can make a big difference in terms of interoperability. I have the following fonts installed:

100% Free Software

In case you were wondering; yes, my SSD contains nothing but Free Software. I don’t even have the contrib and non-free repositories enabled. There’s even a program to list all the proprietary packages you have installed: vrms.

Monday, 04 August 2014

Off-The-Record (OTR) Messaging

Bela's Internship Blog | 15:58, Monday, 04 August 2014

Table of Contents:

  1. Introduction
  2. What is OTR?
  3. How does OTR solve the issues of GnuPG?
  4. Installation Example of OTR messaging in Pidgin running on Debian
  5. Some disadvantages of OTR
  6. Available distros, applications, and plugins

1. Introduction:

The purpose of this post is to clarify the meaning and technicalities of ‘off-the-record’ (OTR) messaging and to give insight into the possibilities of implementing it on various devices and ecosystems. Furthermore, some evaluative remarks on the advantages and drawbacks of OTR in comparison to GnuPG will be added.

2. What is OTR?

2.1 “Off the record”

OTR messaging is derived from the original concept of a private “off-the-record” conversation. Two people meet in private and have a conversation, without any observers or recording devices. The content of the information exchanged is known only to the two parties involved in the conversation, so that the only record kept is their own ‘mental record’. In terms of technology, such a level of privacy translates into two properties: ‘perfect forward secrecy’ (PFS) and repudiability.

2.2 Perfect Forward Secrecy

Wikipedia defines the former term as “a property of key-agreement protocols ensuring that a session key derived from a set of long-term keys cannot be compromised if one of the long-term keys is compromised in the future”, and states that the “key used to protect transmission of data must not be used to derive any additional keys, and if the key used to protect transmission of data is derived from some other keying material, then that material must not be used to derive any more keys”. Thus, the “compromise of a single key permits access only to data protected by that single key” [1].

2.3 Repudiability / Deniable Authentication

Repudiability (or: deniable authentication) refers “to authentication between a set of participants where the participants themselves can be confident in the authenticity of the messages, but it cannot be proved to a third party after the event” [2], thus creating a state of secrecy that allows the two participants in a conversation to identify each other, but makes it impossible for them or a third party to do so after the conversation.

2.4 OTR vs. GnuPG

In a paper [3] written by two Berkeley scientists and the creator of the most widely used OTR software, published in the “Workshop on Privacy in the Electronic Society”, OTR is presented and compared against GnuPG. While the authors agree that GnuPG-encrypted conversations appear to be a secure way of communicating, they criticize that an eavesdropping third party could theoretically store all the encrypted conversations and attempt to obtain the private key of one of the participants at some point in the future. If the third party were successful, it could not only read all past and future communication but also reveal the signatures featured within the messages, thus giving away the conversation partners’ identities. Thus, according to the authors, GnuPG lacks PFS due to the longevity of its keys.

3. How does OTR solve the issues of GnuPG?

3.1 Some Remarks

First of all it should be noted that, while OTR solves some of the issues of GnuPG, it cannot replace it in the email context due to its technical characteristics. OTR is largely a technology made for instant messaging, so the following comments are aimed exclusively at that domain.

3.2 PFS through short-lived keys

In terms of PFS, OTR uses short-lived keys that are discarded after use. To do so, it uses the Diffie-Hellman key agreement protocol, which (without diving into too much mathematical detail) allows two parties to agree on a shared secret over a public channel: each party contributes a private value, and the secret that both derive from the exchanged public values is then used to encrypt and decrypt the conversation.
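As a toy illustration of the Diffie-Hellman principle (the tiny constants are chosen for readability; real implementations use large primes or elliptic curves), each side keeps a private exponent and only the public values cross the wire:

```python
# Toy Diffie-Hellman key agreement; tiny numbers, illustrative only.
P = 23  # public prime modulus
G = 5   # public generator

def public_value(private):
    """What each party sends over the wire."""
    return pow(G, private, P)

def shared_secret(their_public, my_private):
    """What each party computes locally; never transmitted."""
    return pow(their_public, my_private, P)

a, b = 6, 15                             # private values, never shared
A, B = public_value(a), public_value(b)  # exchanged in the clear

# Both sides arrive at the same short-lived session secret.
assert shared_secret(B, a) == shared_secret(A, b)
```

Discarding a and b after the session is what gives forward secrecy: there is no long-term key whose later compromise would reveal the session secret.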

3.3 Repudiability by lack of private keys

In terms of the issue of identification, OTR wants to avoid another pitfall of GnuPG, which is the public key each individual spreads and attempts to keep alive as long as possible. Through the process known as key-signing, users of GnuPG obtain increasing credibility of identity through increasing numbers of key signatures. The downside of this is that the signatures of messages can later be used to prove the identity of a certain individual, whether or not that person consents. Put differently, one might not want to be connected to a secret one has sent at some point in the past, even if the recipient of the secret is or was trustworthy at the time.

3.4 Identification despite lack of private keys

Thus, the issue is that the sender needs to be able to prove their identity to the recipient without the recipient or a third party later being able to do so. This is achieved through message authentication code (MAC) keys, which are calculated from a secret MAC key that is itself exchanged secretly, using a method similar to the Diffie-Hellman principle. This means that only the individual message can be attributed to the sender, but no identity is revealed.
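A minimal sketch with Python's standard hmac module shows why a shared MAC key gives repudiable authentication: either party can compute a valid tag, so a tag convinces the recipient during the session but proves nothing to anyone else. (The key and messages here are made up for illustration.)

```python
import hashlib
import hmac

def tag(mac_key, message):
    """Authenticate a message with a shared, short-lived MAC key."""
    return hmac.new(mac_key, message, hashlib.sha256).hexdigest()

# Both parties hold the same session MAC key (derived during key exchange).
mac_key = b"session-mac-key"  # made-up value for illustration
msg = b"meet me at noon"

# The recipient verifies the tag to authenticate the sender...
assert hmac.compare_digest(tag(mac_key, msg), tag(mac_key, msg))

# ...but because the key is symmetric, the recipient could have produced
# exactly the same tag, so it proves nothing to a third party.
recipient_forgery = tag(mac_key, b"any message the recipient invents")
```

Contrast this with a GnuPG signature, which only the holder of the private key can produce and which therefore binds the sender's identity to the message forever.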

3.5 Deniability through forgeability

To go even further, OTR messaging applies the principle of forgeability through malleable encryption. The idea is to encrypt in such a way that it is very “easy to alter the ciphertext in such a way as to make meaningful changes in the plaintext, even when you don’t know the key”. To give an example, imagine the following scenario: you have sent a message with the content “I want to meet you tomorrow to give you flowers”. An FBI agent has intercepted your message. The fact that the message is malleable means that some slight changes in the encrypted data could easily result in the message saying “I want to blow up the train station and kill myself”. In other words, since the message is encrypted in such a way that it is easy to make meaningful changes to its content by simple manipulations of the code, arguing for any particular meaning of the decrypted message is pointless. Put even more simply, even if the content of the message were decrypted, an argument can be made that the content could easily have been changed by a third party.

In more technical terms: OTR “encrypts the plaintext by masking it with a keystream using the exclusive-OR operation; to decrypt, the same exclusive-OR is used to remove the keystream and reveal the plaintext. This encryption is malleable, as a change to any bit in the ciphertext will correspond to a change in the corresponding bit in the plaintext”. If an interceptor “can guess the plaintext of a message, she can then change the ciphertext to decrypt to any other message of the same length, without knowing the key. Therefore, a message encrypted with a stream cipher does not prove integrity or authenticity in any way”. The participants in the conversations “can still use a MAC to prove [...] that [their] messages are indeed [theirs]” [3].
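The XOR malleability described above is easy to demonstrate in a few lines of Python – a toy sketch with a made-up keystream: an attacker who guesses the plaintext can retarget the ciphertext to any other message of the same length, without ever knowing the key.

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

keystream = bytes(range(12))  # stand-in for the cipher's keystream
plaintext = b"send flowers"
ciphertext = xor_bytes(plaintext, keystream)

# The attacker knows (or guesses) the plaintext, but not the keystream.
desired = b"send dollars"
tampered = xor_bytes(ciphertext, xor_bytes(plaintext, desired))

# The recipient decrypts the tampered ciphertext to the attacker's message.
assert xor_bytes(tampered, keystream) == desired
```

This is exactly why a stream-cipher ciphertext alone proves neither integrity nor authenticity, and why the MAC is needed within the conversation.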

4. Installation example for Pidgin running on Debian

In this example, the process of enabling OTR for pidgin on Linux is shown.

1. First of all, we need to install pidgin. This is achieved by using the standard package management tool or the “Add/Remove Software” tool.

2. Once Pidgin is installed, the OTR plugin needs to be installed from the terminal; on Debian the plugin is packaged as pidgin-otr, so the following command does the job:

apt-get install pidgin-otr

3. Once the installation of the OTR plugin has succeeded, OTR messaging can be enabled in the pidgin settings (“Tools/Plugins” in the top menu bar):

4. The last step is configuring a fingerprint:

5. Now OTR for Pidgin is set up and running, and can be enabled in the messaging window – provided the communication partner has it enabled as well:

6. When both parties have OTR enabled, there still is the process of identification that has to take place. The options to do so are “Question and Answer, Shared Secret, or Manual Fingerprint Verification”:

7. Finally, OTR is enabled and private messaging can commence!

Note that the above procedure can be applied to a range of messaging services including Facebook chat (see below), ICQ, AIM, IRC, Jabber, MSN Messenger, etc.!

5. Some disadvantages of OTR:

With the introduction to OTR given and the installation procedure explained, it is time to critically reflect on the technology from a user standpoint. Please note that some of the remarks below are valid for several types of encryption services and not merely for OTR.

The first and most obvious issue is one that is shared by other encryption procedures as well: it only works if both parties use it. Since this is an inherent issue of message encryption services, the question that remains is how difficult it is to use in a real-life scenario. Whereas for instance Apple’s iMessage encryption is implemented into the device and operates ‘invisibly’ to the user, the use of OTR requires some significant effort and some degree of technical expertise. On a related note, OTR isn’t supported by many of the most commonly used messaging services such as WhatsApp or Skype. This means that the user in question would first need to switch to a messaging platform that supports OTR.

The mere process of grasping the encryption principles of OTR, which is arguably a necessary step to convince someone to begin using it over another channel of communication, does take some time and effort. Thirdly, platform fragmentation, especially in the mobile segment, poses additional challenges to the adoption of OTR. For instance, the process of fingerprint validation between iOS and Android devices needs to be optimized.

Moreover, the OTR level of security comes at a price: messages are typically inaccessible once the connection has been closed, which may be unappealing to users who are used to being able to browse their message history. Obviously this is more an issue of education than one of features, but it needs to be accepted as such.

As for the level of privacy OTR provides, an important thing to note is that the higher-level IM protocol can give your identity away, depending on how you use it. Since this is a question of implementation, it should be taken with a grain of salt. Nonetheless, as Matthew Green put it [4]: “In fact [OTR is] such a good idea that if you really care about secrecy it’s probably one of the best options out there.”

6. Available distros, applications, and plugins:

OS Distributions which ship with OTR software

IM clients which support Off-the-Record Messaging “out of the box”

Third-party plugins for IM clients

Libraries that support OTR

* Software developed by the OTR Development Team


Free Software in Education News – July

Being Fellow #952 of FSFE » English | 00:06, Monday, 04 August 2014

Here’s what we collected in July. If you come across anything that might be worth mentioning in this series, please drop me a note, dump it in this pad or drop it on the edu-eu mailing list!

FSFE Edu-Team activities

  • As in the previous months, we worked on the edu pages


Edu software

Other news

Future events

Thanks to all contributors!

flattr this!

Friday, 01 August 2014

Photo of the Month — 2014-08

emergency exit | 04:00, Friday, 01 August 2014

Originally I had wanted to post a different picture, but sadly the war against the Palestinian people has seen yet another level of escalation, so here’s a picture in solidarity with the Palestinians and all other people suffering from war and oppression. I shot it in Ramallah, in the West Bank, in 2011.

Many things have been said about the situation and I don’t want to engage in political debate on this blog, so I will not say more than the solidarity expressed above. If you haven’t yet thought about the whole conflict, here’s a good video by the Jewish Voice for Peace. If you agree, support groups and actions in your area.

Concerning the technical background of the picture, this is the original shot taken, walking down a street, without stopping to get the angle right and better exposure:

It was edited with Darktable using base curve and color curve corrections (weak sigmoidal curves for both), local contrast enhancement, lens correction, cropping and angle correction. The latter is not to my full satisfaction, but the best I could get from the picture, I think.

Thursday, 31 July 2014

Donating to Brown Dogs and Barbers: Small update

Computer Floss | 09:01, Thursday, 31 July 2014

Some readers have recently reported to me that the PayPal donate button (which in theory should allow you to donate to the publication of my book Brown Dogs and Barbers) isn’t working.

I’m not certain yet, but I think PayPal have recently made changes that have stopped the button from working… I’m currently looking into getting it working again.

In the meantime, if you’re interested in using PayPal to donate to the production of my book, you can send contributions to my email address:

Thank you for your continued support.

Wednesday, 30 July 2014

Minimalist simple high performance secure VPN daemon

stargrave's blog | 07:00, Wednesday, 30 July 2014

Several days ago I decided to make an alternative to OpenVPN: GoVPN. OpenVPN uses a rather slow HMAC for message authentication and no zero-knowledge password-authenticated key exchange. It is pretty simple, but does not offer a very high security margin or performance.

I have already written a working daemon (though possibly with many bugs) in the Go programming language. It uses some of the fastest crypto algorithms available today and achieves a zero-knowledge, mutually authenticated key exchange based on a pre-shared key. All derived keys are per-session, so even if the PSK is compromised there is no way to decrypt captured traffic (the perfect forward secrecy property).

It does no interface, IP-address or routing management: that is the task of the underlying OS facilities. Currently it can work with only a single client, but I am planning to fix that so it can serve many clients simultaneously. Moreover, Secure Remote Password could be a better choice, allowing humans to use memorable passwords instead of 256-bit keys.

I think that the main comparative advantage is the small code size, which can be easily analyzed, audited and fixed. From a technical point of view, it uses Salsa20, Poly1305, Curve25519 and DH-EKE with a PSK.
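As a rough illustration of the per-session key property described above, here is a toy sketch (in Python, not GoVPN's actual Go code) of a Diffie-Hellman-style exchange mixed with a PSK: fresh ephemeral secrets make every session key different, so a leaked PSK alone cannot decrypt recorded traffic. The group parameters are toys chosen for brevity; real systems such as GoVPN use Curve25519.

```python
import hashlib
import secrets

# Toy DH group: a Mersenne prime, for illustration only.
# These parameters are NOT secure; GoVPN itself uses Curve25519.
P = 2**127 - 1
G = 3

def dh_session(psk: bytes) -> bytes:
    # Each side picks a fresh ephemeral secret for this session.
    a = secrets.randbelow(P - 3) + 2
    b = secrets.randbelow(P - 3) + 2
    A = pow(G, a, P)          # sent by one side
    B = pow(G, b, P)          # sent by the other
    shared = pow(B, a, P)     # both sides compute the same value
    assert shared == pow(A, b, P)
    # Mix the DH shared secret with the PSK to bind authentication to it.
    return hashlib.sha256(psk + shared.to_bytes(16, "big")).digest()

k1 = dh_session(b"same-psk")
k2 = dh_session(b"same-psk")
# Same PSK, yet independent per-session keys.
```

An attacker who later learns the PSK still cannot recompute `shared` for a recorded session, because the ephemerals `a` and `b` were discarded; that is the forward-secrecy property the post describes.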

Tuesday, 29 July 2014

Pruning Syslog entries from MongoDB - fsfe | 18:27, Tuesday, 29 July 2014

I previously announced the availability of rsyslog+MongoDB+LogAnalyzer in Debian wheezy-backports. This latest rsyslog with MongoDB storage support is also available for Ubuntu and Fedora users in one way or another.

Just one thing was missing: a flexible way to prune the database. LogAnalyzer provides a very basic pruning script that simply purges all records over a certain age. The script hasn't been adapted to work within the package layout. It is written in PHP, which may not be ideal for people who don't actually want LogAnalyzer on their Syslog/MongoDB host.

Now there is a convenient solution: I've just contributed a very trivial Python script for selectively pruning the records.

Thanks to Python syntax and the PyMongo client, it is extremely concise: in fact, here is the full script:


import syslog
import datetime
from pymongo import Connection

# It assumes we use the default database name 'logs' and collection 'syslog'
# in the rsyslog configuration.

with Connection() as client:
    db = client.logs
    table = db.syslog
    #print "Initial count: %d" % table.count()
    today = datetime.datetime.now()

    # remove ANY record older than 5 weeks, except mail records
    t = today - datetime.timedelta(weeks=5)
    table.remove({"time":{ "$lt": t }, "syslog_fac": { "$ne" : syslog.LOG_MAIL }})

    # remove any debug record older than 7 days
    t = today - datetime.timedelta(days=7)
    table.remove({"time":{ "$lt": t }, "syslog_sever": syslog.LOG_DEBUG})

    #print "Final count: %d" % table.count()

Just put it in /usr/local/bin and run it daily from cron.
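For example, a system cron entry could look like the following (the script name /usr/local/bin/prune-syslog-mongo and the schedule are purely illustrative; use whatever name you saved the script under):

```shell
# /etc/cron.d/prune-syslog-mongo (filename and path are illustrative)
# m  h  dom mon dow  user  command
30   2  *   *   *    root  /usr/bin/python /usr/local/bin/prune-syslog-mongo
```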


Just adapt the table.remove statements as required. See the PyMongo tutorial for a very basic introduction to the query syntax, and the MongoDB query operator reference for full details on creating more elaborate pruning rules.
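To sketch how such rules can be composed, here is a small hypothetical helper (the function name and parameters are mine, not part of the packaged script) that builds the same kind of MongoDB filter documents used above:

```python
import datetime
import syslog

def prune_query(older_than, facility_ne=None, severity=None):
    """Build a MongoDB filter document for table.remove().

    older_than  -- a datetime.timedelta; match records older than this
    facility_ne -- if set, exclude records from this syslog facility
    severity    -- if set, only match records of this severity
    """
    q = {"time": {"$lt": datetime.datetime.now() - older_than}}
    if facility_ne is not None:
        q["syslog_fac"] = {"$ne": facility_ne}
    if severity is not None:
        q["syslog_sever"] = severity
    return q

# Equivalent to the two rules in the script above:
weekly = prune_query(datetime.timedelta(weeks=5),
                     facility_ne=syslog.LOG_MAIL)
debug = prune_query(datetime.timedelta(days=7),
                    severity=syslog.LOG_DEBUG)
```

Keeping the filters in one helper makes it easy to add new rules without duplicating the date arithmetic.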

Potential improvements

  • Indexing the columns used in the queries
  • Logging progress and stats to Syslog

LogAnalyzer with a database backend such as MongoDB is very easy to set up and much faster than working with text-based log files.

Monday, 28 July 2014

Free Software and a pension fund

André on Free Software » English | 20:12, Monday, 28 July 2014

The pension fund I’m with requires non-free Flash software on its website. Members face a choice: install Flash or stay uninformed.

In the Netherlands, membership of a pension fund is mandatory for many people. You don’t have the freedom to choose which one.

My pension fund’s website has a module with my personal information. I need non-free Flash software to use it.

I filled in a reaction form to the pension fund (e-mailing them is not possible) and pointed out that without non-free Flash software installed you cannot see anything after you log in. Months later, there is still no reaction from the pension fund.

“That’s why in 2014, we will make our website even better” – Pension fund in its yearly leaflet, as translated by me

Am I the only one who is going through this experience?

Welcome on my blog

André on Free Software » English | 17:00, Monday, 28 July 2014

Hello. My name is André and I’m from the Netherlands. I translate for the Free Software Foundation Europe. Welcome to my blog.

Intro to Open Invention Network’s defensive publications

Hugo - FSFE planet | 10:37, Monday, 28 July 2014

Three weeks ago, I started working for Open Invention Network as an intern1. Open Invention Network, or OIN for short, aims at creating a safe environment in which Linux and Linux-based systems can thrive in spite of all the threats that patents pose to software developers.

<aside class="dyk-patents2009 sidenote right"> Did you know? In 2009-2011 in the US, $20 billion was spent on patent litigation and patent purchases. In 2011, for Apple and Google, this spending exceeded spending on research and development. (source) </aside>

As one of my activities with OIN in the Linux Defenders program, I am helping Free Software (aka Open Source) projects submit “defensive publications.”

Defensive publications are sort of anti-patents:

  • while patents are used to exclude others from implementing something,
  • defensive publications prevent anyone from excluding others from implementing something.

They’re called defensive because they can be used against further patent applications, or used a posteriori to defend oneself against patent infringement claims. Indeed, if the software is already accessible to the public before a patent on it is filed, there’s no way you or anyone else would be infringing that patent; in fact, in that situation the patent should be invalidated. Then you might ask: why do I need to write defensive publications if I have already published my source code? Unfortunately, just releasing source code is not an effective protection against patents.

In theory, it is true that you are immune from infringement of subsequent patents as soon as you’ve made your software’s source code publicly accessible online, for instance using a public version control host like GitHub.

In practice, it’s not really effective. Here’s why:

  1. the life of a patent begins at the patent office, where patent applications are submitted and then reviewed by patent office staff:

    Patent examiners have a strong sense of the technology that is patented, but they’re missing an understanding of what has been and is currently being developed in the open source world. As shocking as it may seem, the result is the examiner formulating an inaccurate sense of what is innovative. As the final arbiter of a very significant monopoly grant, they are often grossly uninformed in terms of what lies beyond their narrowly scoped search. This is not wholly their fault as they have limited resources and time. However, it is a strong indication of a faulty system that is so entrenched in the archaic methods under which patent offices have been operating.

    As Andrea pointed out, patent office staff will usually not go to software repositories and read source code in order to find prior art. That’s why making it easy for them to read about what you’ve done in software is necessary. That’s what defensive publications are supposed to do.

  2. The life of a patent ends in one of several ways, whichever comes first:

    1. the patent was filed more than 20 years ago, or the patent holders have not paid their yearly renewal fees, so it has fallen into the public domain
    2. an authoritative court decision has struck down the patent as invalid (and no appeal is pending)
    3. the patent office reverses its decision to grant the patent

    The problem is that in each of these cases the process can be quite long. Litigation can go on for several years, especially since a patent holder will probably try to appeal a decision that invalidates its patent.

    As for the patent office procedures, they can take a decade. For instance, it took more than 15 years to strike down a single very broad Amazon patent application2.

    Meanwhile, the patent will constitute a potential threat that will effectively encumber the use and distribution of your software.

Basically, a defensive publication documents one aspect of a software project that solves a challenge in a new, innovative way. The document gives some context about the state of the art and then describes in more detail how the system works, usually with meaningful diagrams, flowcharts and other figures.

Not like this one:

Parody of a software patent figure, created by Libby Levi

And who is going to read defensive publications? At OIN, we maintain a website listing defensive publications. We then submit them to the databases that patent office examiners use for prior-art searches. So the target audience for these defensive publications is the patent office that reviews patent applications. A good defensive publication should therefore use generic terms that can be understood even by someone who does not program in the same language as the one used for the project.

Defensive publications may be no more than a rearrangement of what’s already written on the project’s blog or in its documentation. They can also be useful for explaining how your program works to other programmers. In some respects, they look like a (short!) scientific publication.

For software that works in areas heavily encumbered with patents like media codecs, actively submitting defensive publications can safeguard the project’s rights against patent holders. For instance, consider that patent trolls now account for 67% of all new patent lawsuits and as shown in a 2012 study, startups are not immune to patent threats.

So part of my job is to work with Free Software projects to help them submit defensive publications. I have been working with Pablo Joubert on a defensive publication about search engines that make use of distributed hash tables (DHTs). Pablo was involved in the Seeks project and has now started a new project building upon Seeks. It was very interesting for me to learn more about how DHTs are used in peer-to-peer networks and how we can use them for awesome new applications like social search. Now Pablo also has a document that explains concisely what the project is and how it works. This could be the preamble to the documentation 😉

I’ve also worked on a guide to defensive publications and I am starting to think about what a tutorial might look like. I hope you will find it useful. I’ll write more about that next time!

If you are interested, don’t hesitate to join #linuxdefenders on the freenode IRC server.

  1. Since I passed the bar exam in December last year, I now have to fulfil two 6-month internships. ↩

  2. It’s patent EP0927945 The patent’s abstract begins like this: “A method and system for placing an order to purchase an item via the Internet.” This patent was filed at the European Patent Office in 1998. ↩
