Planet Fellowship (en)

Thursday, 02 July 2015

Continuous integration testing for WordPress plugins on Github using Travis CI

Seravo | 12:04, Thursday, 02 July 2015



We have open sourced some of our WordPress plugins, and we do their development on Github. Our goal is to keep them simple but effective. Quite a few people use them actively, and some have contributed back by creating additional features or fixing bugs and docs. It’s super nice to have contributions from someone else, but it’s hard to see whether those changes break your existing features. We all make mistakes from time to time, and it’s easier to recover if you have good test coverage. Automated integration tests can help you out in these situations.

Choosing Travis CI

As we use Github for hosting our code, we wanted a tool which integrates really well with it. Travis works seamlessly with Github, and it’s free to use for open source projects. Travis gives you the ability to run your tests in controlled environments which you can modify to your preferences.


You need a Github account in order to set up Travis for your projects.

How to use

1. Sign up for free Travis account

Just click the link on the page and enter your Github credentials

2. Activate testing in Travis. Go to your account page from the top right corner.


Then go to your Organisation page (or choose a project of your own) and activate the projects you want to be tested in Travis.


3. Add a .travis.yml file to the root of your project repository. You can use the samples from the next section.


After you have pushed to Github, just wait a couple of seconds and your tests should start automatically.
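From the repository side, enabling a project then boils down to committing the configuration and pushing (a minimal sketch with a made-up repository name; the push line is the step that triggers Travis via its webhook):

```shell
#!/bin/sh
# Minimal sketch: create a repository, commit a .travis.yml, push.
set -e
git init -q demo-plugin && cd demo-plugin

printf 'language: php\n' > .travis.yml

git add .travis.yml
git -c user.email=you@example.org -c user.name=you \
    commit -qm 'Enable Travis CI'
# git push origin master   # this push would start the first build
```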

Configuring your tests

I think the hardest part of Travis testing is just getting started. That’s why I created a testing template for WordPress projects. You can find it in our Github repository. Next I’m going to show you a few different ways to use Travis. We are going to split the tests into unit tests with PHPUnit and integration tests with RSpec, Poltergeist and PhantomJS.

#1 Example .travis.yml: use RSpec integration tests to make sure your plugin won’t break anything else

This is the easiest way to use Travis with your WordPress plugin. It installs the latest WP and activates your plugin, then checks that your front page is working and that you can log into the admin panel. Just drop this .travis.yml into your project and start testing! :)

sudo: false
language: php

notifications:
  email:
    on_success: never
    on_failure: change

php:
  - nightly # PHP 7.0
  - 5.6
  - 5.5
  - 5.4

env:
  - WP_PROJECT_TYPE=plugin WP_VERSION=latest WP_MULTISITE=0 WP_TEST_URL=http://localhost:12000 WP_TEST_USER=test WP_TEST_USER_PASS=test

matrix:
  allow_failures:
    - php: nightly

before_script:
  - git clone wp-tests
  - bash wp-tests/bin/ test root '' localhost $WP_VERSION

script:
  - cd wp-tests/spec && bundle exec rspec test.rb

#2 Example .travis.yml, which uses PHPUnit and RSpec integration tests

  1. Copy phpunit.xml and tests folder from: into your project

  2. Edit tests/bootstrap.php line containing PLUGIN_NAME according to your plugin:

  3. Add the .travis.yml file:

sudo: false
language: php

notifications:
  email:
    on_success: never
    on_failure: change

php:
  - nightly # PHP 7.0
  - 5.6
  - 5.5
  - 5.4

env:
  - WP_PROJECT_TYPE=plugin WP_VERSION=latest WP_MULTISITE=0 WP_TEST_URL=http://localhost:12000 WP_TEST_USER=test WP_TEST_USER_PASS=test

matrix:
  allow_failures:
    - php: nightly

before_script:
  # Install composer packages before trying to activate themes or plugins
  # - composer install
  - git clone wp-tests
  - bash wp-tests/bin/ test root '' localhost $WP_VERSION

script:
  - phpunit
  - cd wp-tests/spec && bundle exec rspec test.rb

For this to be useful you need to adapt the tests to your plugin.

To get you started see how I did it for our plugins HTTPS Domain Alias & WP-Dashboard-Log-Monitor.


If you want to contribute to better WordPress testing, open an issue or a pull request in our WordPress testing template.

Seravo can help you use PHPUnit, RSpec and Travis in your projects.
Please feel free to ask us about our WordPress testing via email or in the comment section below.


Applying the most important lesson for non-developers in Free Software through Roundcube Next

freedom bits | 08:01, Thursday, 02 July 2015

Software is a social endeavour. The most important advantage of Free Software is its community, because the best Open Source is built by a community of contributors. Contribution is the single most important currency and the differentiation between users and community. You want to be part of that community, at least by proxy, because like any community, its members spend time together, exchange ideas, and create cohesion that translates into innovation, features and best practices.

We create nothing less than a common vision of the future.

By the rules of our community, anyone can take our software and use it, extend it, distribute it. A lot of value can be created this way and not everyone has the capabilities to contribute. Others choose not to contribute in order to maximise their personal profits. Short of actively harming others, egoism, even in its most extreme forms, is to be accepted. That is not to say it is necessarily a good idea for you to put the safeguarding of your own personal interests into the hands of an extreme egoist. Or that you should trust in their going the extra mile for you in all the places that you cannot verify.

That is why the most important lesson for non-developers is this: Choose providers based on community participation. Not only are they more likely to know early about problems, putting them in a much better position to provide you with the security you require. They will also ensure you will have a future you like.

Developers know all this already, of course, and typically apply it at least subconsciously.

Growing that kind of community has been one of the key motives to launch Roundcube Next, which is now coming close to closing its phase of bringing together its key contributors. Naturally everyone had good reasons to get involved, as recently covered on Venturebeat.

Last night became the single greatest contributor to the campaign in order to build that better future together, for everyone. Over the past weeks, many other companies, some big, some small, have done the same.

Together, we will be that community that will build the future.

Monday, 29 June 2015

The buildscript

Told to blog - Entries tagged fsfe | 17:20, Monday, 29 June 2015

At the start of this month I deployed the new build script on the running test instance of
I’d like to give an overview of its features and limitations. By the end you should be able to understand the build logs on our web server and to test web site changes on your own computer.

General Concept

The new build script (let’s call it the 2015 revision) emulates the basic behaviour of the old build script (the circa-2002 revision). The rough idea is that the web site starts as a collection of xhtml files, which get turned into html files.

The Main build

An xhtml file on the input side contains a page text, usually a news article or informational text. When it is turned into its corresponding html output, it is enriched with menu headers, the site footer, tag-based cross-links, etc. In essence, however, it will still be the same article, and one xhtml input file normally corresponds to one html output file. The rules for the transition are described in the xslt language. The build script finds the transition rules for each xhtml file in an xsl file. Each xsl file will normally provide rules for a number of pages.

Some xhtml files contain special references which cause the output to include data from other xhtml and xml files. For example, the news page contains headings from all news articles, and the front page has some quotes rolling through, which are loaded from a different file.

The build script coordinates the tools which perform the build process. It selects xsl rules for each file, handles different language versions, and the fallback for non-existing translations, collects external files for inclusion into a page, and calls the XSLT processor, RSS generator, etc.
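The translation fallback mentioned above can be sketched in a few lines of shell (a simplified illustration with made-up file names, not the actual build script; cp stands in for the XSLT call):

```shell
#!/bin/sh
# For each language, use the translated source file if it exists,
# otherwise fall back to the English version.
set -e
mkdir -p out
echo en-text > about.en.xhtml   # English original
echo de-text > about.de.xhtml   # German translation exists
                                # no French translation exists

for lang in en de fr; do
  src=about.$lang.xhtml
  [ -f "$src" ] || src=about.en.xhtml   # fallback to English
  cp "$src" out/about.$lang.html        # stand-in for the XSLT step
done
```

Here the French page ends up containing the English text, which mirrors how missing translations are served on the site.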

Files like PNG images and PDF documents simply get copied to the output tree.

The pre build

Aside from committing images, changing XML/XHTML code, and altering inclusion rules, authors have the option to have dynamic content generated at build time. This is mostly used for our PDF leaflets but occasionally comes in handy for other things as well. At different places the source directory contains files called Makefile. Those are instruction files for the GNU make program, a system used for running compilers and converters to generate output files from source code. A key feature of make is that it regenerates output files only if their source code has changed since the last generator run.
GNU make is called by the build script prior to its own build run. This goes for both the 2002 and the 2015 revision of the build script. Make itself runs some xslt-based conversions and PDF generators to set up news items for later processing and to build PDF leaflets. The output goes to the website’s source tree for later processing by the build script. When building locally you must be careful not to commit generated files to the SVN repository.
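The make property described above is easy to see in a tiny stand-alone experiment (nothing FSFE-specific; the file names are made up):

```shell
#!/bin/sh
# Demonstrate that make rebuilds the output only when the source
# file has changed since the last run.
set -e
printf 'page.html: page.xhtml\n\tcp page.xhtml page.html\n' > Makefile.demo
echo '<p>hello</p>' > page.xhtml

make -f Makefile.demo   # first run: builds page.html
make -f Makefile.demo   # second run: nothing to do ("up to date")
sleep 1                 # ensure a newer timestamp on the source
touch page.xhtml        # pretend the source was edited
make -f Makefile.demo   # third run: rebuilds page.html
```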

Build times

My development machine "Vulcan" uses relatively lightweight hardware by contemporary standards: an Intel Celeron 2955U with two Haswell CPU cores at 1.4 GHz and an SSD for mass storage.
I measured the time for some build runs on this machine; however, our web server "Ekeberg", despite running older hardware, seems to perform slightly faster. Our future web server "Claus", which isn’t yet productively deployed, seems to work a little slower. The script performs most tasks multi-threaded and can profit greatly from multiple CPU cores.

Pre build

The above-mentioned pre build takes a long time when it is first run. However, once its output files are set up, they will hardly ever be altered.

Initial pre build on Vulcan: ~38 minutes
Subsequent pre build on Vulcan: < 1 minute

2002 Build script

When the build script is called it first runs the pre build. All timing tests listed here were performed after an initial pre build. This way, as in normal operation, the time required for the pre build has an almost negligible impact on the total build time.

Page rebuild on Vulcan: ~17 minutes
Page rebuild on Ekeberg: ~13 minutes

2015 Build script

The 2015 revision of the build script is written in shell script, while the 2002 implementation was in Perl. The Perl script used to call the XSLT processor as a library and passed a pre-parsed XML tree to the converter. This way it was able to keep a pre-parsed version of all source files in cache, which was advantageous as it saved re-parsing a file which would be included repeatedly. For example, news collections are included in different places on the site, and missing language versions of an article are usually all filled with the same English text version while retaining menu links and page footers in their respective translation.
The shell script never parses an XML tree itself; instead it uses quicker shortcuts for the few XML modifications it has to perform. This means, however, that it has to pass raw XML input to the XSLT program, which then has to perform the parsing over and over again. On the plus side, this makes operations more atomic from the script’s point of view, and aids in implementing a dependency-based build which can save it completely from rebuilding most of the files.

For performing a build, the shell script first calculates a dependency vector in the form of a set of make rules. It then uses make to perform the build. This dependency-based build is the basic mode of operation for the 2015 build script.
This can be tweaked further: when the build script updates the source tree from our version control system, it can use the list of changes to update the dependency rules generated in a previous build run. In this differential build even the dependency calculation is limited to a minimum, with the resulting build time being mostly dependent on the actual content changes.
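Conceptually, the dependency build boils down to two phases, which can be sketched roughly like this (hypothetical file layout; the real script generates far more elaborate rules, and the rule body would invoke the XSLT processor rather than cp):

```shell
#!/bin/sh
# Phase 1: generate one make rule per source page.
# Phase 2: hand the whole build over to make, which only
#          rebuilds targets whose sources have changed.
set -e
mkdir -p source output
echo '<p>demo</p>' > source/index.en.xhtml

for f in source/*.xhtml; do
  base=$(basename "$f" .xhtml)
  # cp stands in for the real XSLT invocation here.
  printf 'output/%s.html: %s\n\tcp %s output/%s.html\n' \
    "$base" "$f" "$f" "$base"
done > Makefile.generated

make -f Makefile.generated output/index.en.html
```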

Timings taken on the development machine Vulcan:
Dependency build, initial run: 60+ minutes
Dependency build, subsequent run: ~9 to ~40 minutes
Differential build: ~2 to ~40 minutes

Local builds

In the simplest case you check out the web page from subversion, choose a target directory for the output, and build from the source directory directly into the target. Note that in the process the build script will create some additional files in the source directory. Ideally all of those files should be ignored by SVN, so they cannot be accidentally committed.

There are two options that will result in additional directories being set up beside the output.

  1. You can set up a status directory, where log files for the build will be placed. If you are running a full build this is recommended, because it allows you to inspect the build process. The status directory is also required for running differential builds. If you do not provide a status directory, some temporary files will be created in your /tmp directory, and differential builds will behave identically to the regular dependency builds.
  2. You can set up a stage directory. You will not normally need this feature unless you are building a live web site. When you specify a stage directory, updates to the website are first generated there, and only after the full build is the stage directory synchronised into the target folder. This way you avoid having a website online that is half outdated and half updated. Note, though, that even the chosen synchronisation method (rsync) is not fully atomic.

Full build

The full build is best tested with a local web server. You can easily set one up using lighttpd. Set up a config file, e.g. ~/lighttpd.conf:

server.modules = ( "mod_access" )
$HTTP["remoteip"] !~ "" {
  url.access-deny = ("")    # prevent hosting to the network
}

# change port and document-root accordingly
server.port                     = 5080
server.document-root            = "/home/fsfe/"
server.errorlog                 = "/dev/stdout"
server.dir-listing              = "enable"
dir-listing.encoding            = "utf-8"
index-file.names                = ("index.html", "index.en.html")

include_shell "/usr/share/lighttpd/"

Start the server (I like to run it in foreground mode, so I can watch the error output):

/usr/sbin/lighttpd -Df ~/lighttpd.conf

...and point your browser to http://localhost:5080

Of course you can configure the server to run on any other port. Unless you want to use a port number below 1024 (e.g. port 80, the standard for HTTP), you do not need to start the server as the root user.

Finally build the site:

~/fsfe-trunk/build/ -statusdir ~/status/ build_into /home/fsfe/

Testing single pages

Unless you are interested in browsing the entire FSFE website locally, there is a much quicker way to test changes you make to one particular page, or even to .xsl files. You can build each page individually, exactly as it would be generated during the complete site update:

~/fsfe-trunk/build/ process_file ~/fsfe-trunk/some-document.en.xhtml > ~/some-document.html

The resulting file can of course be opened directly. However, since it will contain references to images and style sheets, it may be useful to test it on a local web server providing the referenced files (mostly the look/ and graphics/ directories).

The status directory

There is no elaborate status page yet. Instead we log different parts of the build output to different files. This log output is visible on

File Description
Make_copy Part of the generated make rules; contains all rules for files that are just copied as they are. The file may be reused in the differential build.
Make_globs Part of the generated make rules. The file contains rules for preprocessing XML file inclusions. It may be reused during the differential build.
Make_sourcecopy Part of the generated make rules. Responsible for copying xhtml files to the source/ directory of the website. May be reused in differential build runs.
Make_xhtml Part of the generated make rules. Contains the main rules for XHTML to HTML transitions. May be reused in differential builds.
Make_xslt Part of generated make rules. Contains rules for tracking interdependencies between XSL files. May be reused in differential builds.
Makefile All make rules. This file is a concatenation of all rule files above. While the differential build regenerates the other Make_ files selectively, this one is always assembled from the input files, which may or may not have been reused in the process. The make program which builds the site uses this file. Note the time stamp: this file is the last one to be written to before make takes over the build.
SVNchanges List of changes pulled in with the latest SVN update. Unfortunately it gets overwritten with every update attempt, even unsuccessful ones (normally every minute).
SVNerrors SVN error output. Should not contain anything ;-)
buildlog Output of the make program performing the build. Possibly the most valuable source of information when investigating a build failure. The last file to be written to during the make run.
debug Some debugging output of the build system; not too informative because it is used very sparingly.
lasterror Part of the error output. Gets overwritten with every run attempt of the build script.
manifest List of all files which should be contained in the output directory. Gets regenerated for every build along with the Makefile. The list is used for removing obsolete files from the output tree.
premake Output of the make-based pre build. Useful for investigating issues that come up during this stage.
removed List of files that were removed from the output after the last run. That is, files that were part of a previous website revision but no longer appear in the manifest.
stagesync Output of rsync when copying from a stage directory to the http root. Basically contains a list of all updated, added, and removed files.


Upcoming tasks, roughly in this order:

  • move *.sources inclusions from xslt logic to build logic
  • split up translation files
    • both steps will shrink the dependency network and give build times a more favourable tendency
  • deploy on productive site
  • improve status output
  • auto detect build requirements on startup (to aid local use)
  • add support for markdown language in documents
  • add sensible support for other, more distinct, language codes (e.g. pt and pt-br)
  • deploy on DFD site
  • enable the script to remove obsolete directories (not only files)

Friday, 26 June 2015

splitDL – Downloading huge files from slow and unstable internet connections

Max's weblog » English | 15:59, Friday, 26 June 2015

Imagine you want to install GNU/Linux but your bandwidth won’t let you…

tl;dr: I wrote a rather small Bash script which splits huge files into several smaller ones and downloads them. To ensure integrity, every small file is checked against its hashsum and file size.

That’s the problem I was facing in the past days. In the school I’m working at (Moshi Institute of Technology, MIT) I set up a GNU/Linux server to provide services like file sharing, website design (on local servers, to avoid the slow internet) and central backups. The ongoing plan is to set up 5-10 (and later more) new computers with a GNU/Linux OS in contrast to the ancient and non-free Windows XP installations – project „Linux Classroom“ is officially born.

But to install an operating system on a computer you need an installation medium. In the school a lot of (dubious) WindowsXP installation CD-ROMs are flying around but no current GNU/Linux. In the first world you would just download an .iso file and ~10 minutes later you could start installing it on your computer.

But not here in Tanzania. With average download rates of 10kB/s it takes a hell of a long time to download even one image file (not to mention the costs for the internet usage, ~1-3$ per 1GB). And that’s not all: periodic power cuts cancel ongoing downloads abruptly. Of course you can restart a download, but the large file may already be damaged and you lose even more time.

My solution – splitDL

To circumvent this drawback I coded a rather small Bash program called splitDL. With this helper script, one is able to split a huge file into smaller pieces. If during the download the power cuts off and damages a file, one just has to re-download this single small file instead of the complete huge file. To detect whether a small file is unharmed, the script creates hashsums of the original huge file and of the several small files. The script also supports continuation of interrupted downloads, thanks to the great built-in capabilities of wget.
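The core server-side idea can be sketched with standard tools (an illustration of the approach, not the actual splitDL code; file names are made up):

```shell
#!/bin/sh
# Split a big file into fixed-size chunks and record a checksum
# and size for each chunk, so every part can be verified on its own.
set -e
dd if=/dev/zero of=big.iso bs=1024 count=64 2>/dev/null  # demo payload

split -b 16k -d big.iso big.iso.part-   # big.iso.part-00, -01, ...
md5sum big.iso big.iso.part-* > checksums.md5
wc -c big.iso.part-* > sizes.txt

# Client side, after fetching the parts: verify each chunk, then join.
md5sum -c --quiet checksums.md5
cat big.iso.part-* > big.rebuilt.iso
cmp big.iso big.rebuilt.iso             # byte-identical
```

A damaged chunk would show up in the md5sum check, and only that one part would need to be fetched again.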

You might now think „BitTorrent (or any other program) is also able to do the same, if not more!“. Yes, but this requires a) the installation of another program and b) a download source which supports this protocol. By contrast, splitDL can handle every HTTP, HTTPS or FTP download.

The downside in the current state is that splitDL requires shell access to the server where the file is stored, in order to split the file and create the necessary hashsums. So in my current situation I use my own virtual server in Germany, on which I download the wanted file at high speed, and then use splitDL to prepare the file for the slow download from my server to the Tanzanian school.

The project is of course still in an ongoing phase and only tested in my own environment. Please feel free to have a look at it and download it via my Git instance. I’m always looking forward to feedback. The application is licensed under GPLv3 or later.

Some examples


Split the file debian.iso into smaller parts with the default options (MD5 hashsum, 10MB size) -m server -f debian.iso

Split the file, but use the SHA1 hashsum and split it into pieces of 50MB. -m server -f debian.iso -c sha1sum -s 50M

After one of the commands, a new folder called dl-debian.iso/ will be created. There, the split files and a document containing the hashsums and file sizes are located. You just have to move the folder to a web-accessible location on your server.


Download the split files with the default options. -m client -f http://server.tld/dl-debian.iso/

Download the split files, but use the SHA1 hashsum (it has to be the same as was used during the creation process) and override the wget options (default: -nv --show-progress). -m client -f http://server.tld/dl-debian.iso/ -c sha1sum -w --limit-rate=100k

Current bugs/drawbacks

  • Currently only single files can be split. This will be fixed soon.
  • Currently the script only works with files in the current directory. This, too, is only a matter of a few lines of code.

Wednesday, 24 June 2015

Lots of attention for Oettinger’s transparency problem

Karsten on Free Software | 07:52, Wednesday, 24 June 2015

It seems I’m not the only one interested in who the European Commissioner for Digital Economy and Society, Guenther H. Oettinger, is meeting with.

This morning, Spiegel Online is running a thorough piece (in German, natch). Politico Europe has the story, too. And Transparency International is launching EU Integrity Watch, making it easy to see who in the Commission’s upper echelons has been seeing which interest representatives.

The upshot of all this? Oettinger is meeting almost exclusively with corporate lobbyists, or with people acting on behalf of corporates. According to Spiegel Online’s figures, 90% of the Commissioner’s meetings were with corporate representatives, business organisations, consultancies and law firms. Only 3% of his meetings were with NGOs. Of the top ten organisations he’s meeting with, seven are telecoms companies, most of whom are staunchly opposed to net neutrality.

Oettinger and Commission Vice President Ansip launched the Digital Single Market (DSM) package on May 6, including some last-minute tweaks that pleased broadcasters. During the weeks before the publication of the DSM package, Oettinger met with lots of broadcasters. His office also neglected to publish those meetings until we started pushing for the data to be released. Even in a Commission where three quarters of the meetings involve corporate lobbyists, Oettinger’s one-sidedness sticks out like a sore thumb.

With its decision to publish at least high-level meetings, the Commission has taken a commendable step to shed some light on the flows of influence in Brussels. In this regard, it is far ahead of most national governments other than those of the Scandinavian countries. Now that at least a bit of sunlight is flowing in, it’s easier to see the ugly spots. Oettinger clearly needs to be much more balanced in selecting his meeting partners. His superiors in the Commission need to make sure that he improves.

Thursday, 18 June 2015

Farewell, for now

Karsten on Free Software | 12:07, Thursday, 18 June 2015

This is a blog post that I’m writing with a wistful smile. About two years ago, I decided that I would eventually move on from my role as FSFE’s president. We’ve been preparing the leadership transition ever since. Now the time has come to take one of the larger steps in that process.

Today is my last day actively handling operations at FSFE.

Since our Executive Director Jonas Öberg came on board in March, I have progressively been handing FSFE’s day-to-day management over to him. From tomorrow, our Vice President Matthias Kirschner will take over my responsibility for our policy work.

I’m going to remain FSFE’s president until September. But I’m finally enjoying a luxury that was out of reach in previous years: I’m taking two months of parental leave, until mid-August. At FSFE’s General Assembly in September, we will elect my successor.

It’s been six intense and amazing years since I took over FSFE’s presidency. The organisation has grown a lot, and matured a lot. Our team has worked incredibly hard to promote Free Software, and to put users in control of technology. As I’m preparing to move on, I know that I’m leaving FSFE in great shape, and in very competent hands.

Being FSFE’s president is a great job. I’ve always considered that one of the main perks is the exceptional people I got to work with every day. What they all have in common is that they never give up. Each in their own way, they will bang their heads against the world until the world gives way. It’s time to publicly thank some of them.

Jamie Love is a prime example of that sort of person. Though he might not know it, he has taught me a lot about campaigning for a better, fairer society.

When I took the helm at FSFE, money was tight. We sometimes didn’t know how to pay next month’s salaries, and that was pretty scary. That the organisation is in excellent financial shape today is in no small part due to our Financial Officer Reinhard Müller, with his sound judgement and firm grip of the purse strings.

Carlo Piana and Till Jaeger are great lawyers to have on your side. (Much better than having them against you. Just ask Microsoft.) They have been extremely generous with their time, knowledge and skills.  Shane Coughlan doesn’t so much face down adversity as talk to it gently, pour it a drink of whisky, take it for a walk, and hit it firmly over the head in a dark alley.

I’ve had help from a lot of people in lobbying for Free Software in Brussels. Not all of them would benefit if I thanked them publicly. So I’ll ask Erik Josefsson to stand in for them. He has been a relentless and resourceful advocate for software freedom in the European Parliament, and has received far less gratitude than he’s due.

FSFE’s Legal Coordinator Matija Šuklje is a universal geek in the very best way, and a close friend. He’s about to become a fine lawyer, though if he were to limit himself to lawyering, that would be an awful waste of talent. Fortunately, there’s not too much risk of that happening, as he’s never quite able to keep his nose out of anything that he finds even vaguely interesting.

Having Jonas Öberg take on the Executive Director role has been like putting a new gearbox into the organisation’s machinery. Everything has started to move more smoothly and efficiently.

Matthias Kirschner has been working with me all these years. He has moved through various roles in FSFE, continuously taking on more responsibility. He’s had a great part in shaping our success. He is energetic, creative and razor sharp, and a very good friend to me. We couldn’t ask for anyone more skilled and dedicated to take over the role of FSFE’s President a few months from now, and I’m confident that the members of our General Assembly will agree.

These have been very good years for Free Software, for FSFE, and for me personally. As I move to a more supervisory role as a member of FSFE’s General Assembly, I look forward to seeing the seeds we’ve planted grow.

And with that, I’d like to sign off for the summer. I’m still finalising what my next job is going to be – that’s something I’ll work out during a long vacation with my family, who deserve far more of my time than they’ve been getting.

See you on the other side!

Tuesday, 16 June 2015

You can learn a lot from people’s terminology

Paul Boddie's Free Software-related blog » English | 21:23, Tuesday, 16 June 2015

The Mailpile project has been soliciting feedback on the licensing of their product, but I couldn’t let one of the public responses go by without some remarks. Once upon a time, as many people may remember, a disinformation campaign was run by Microsoft to attempt to scare people away from copyleft licences, employing insensitive terms like “viral” and “cancer”. And so, over a decade later, here we have an article employing the term “viral” liberally to refer to copyleft licences.

Now, as many people already know, copyleft licences are applied to works by their authors so that those wishing to contribute to the further development of those works will do so in a way that preserves the “share-alike” nature of those works. In other words, the recipient of such works promises to extend to others the privileges they experienced themselves upon receiving the work, notably the abilities to see and to change how it functions, and the ability to pass on the work, modified or not, under the same conditions. Such “fair sharing” is intended to ensure that everyone receiving such works may be equal participants in experiencing and improving the work. The original author is asking people to join them in building something that is useful for everyone.

Unfortunately, all this altruism is frowned upon by some individuals and corporations who would prefer to be able to take works, to use, modify and deploy them as they see fit, and to refuse to participate in the social contract that copyleft encourages. Instead, those individuals and corporations would rather keep their own modifications to such works secret, or even go as far as to deny others the ability to understand and change any part of those works whatsoever. In other words, some people want a position of power over their own users or customers: they want the money that their users and customers may offer – the very basis of the viability of their precious business – and in return for that money they will deny their users or customers the opportunity to know even what goes into the product they are getting, never mind giving them the chance to participate in improving it or exercising control over what it does.

From the referenced public response to the licensing survey, I learned another term: “feedstock”. I will admit that I had never seen this term used before in the context of software, or I don’t recall its use in such a way, but it isn’t difficult to transfer the established meaning of the word to the context of software from the perspective of someone portraying copyleft licences as “viral”. I suppose that here we see another divide being erected between people who think they should have most of the power (and who are somehow special) and the grunts who merely provide the fuel for their success: “feedstock” apparently refers to all the software that enables the special people’s revenue-generating products with their “secret ingredients” (or “special sauce” as the author puts it) to exist in the first place.

It should worry anyone who spends time or effort writing software that, under a permissive licence, their work will be treated as mere “feedstock” by people who appreciate it only insofar as they can use it without giving its author a second thought. To be fair, the article’s author does encourage contributing back to projects as “good karma and community”, but then again this statement is made in the context of copyleft-licensed projects, and the author spends part of a paragraph bemoaning the chore of finding permissively-licensed projects so as not to have to contribute anything back at all. If you don’t mind working for companies for free and being told that you don’t deserve to see what they did to your own code that they nevertheless couldn’t get by without, maybe a permissive licence is a palatable choice for you, but remember that the permissive licensing will most likely be used to take privileges away from other recipients: those unfortunates who are paying good money won’t get to see how your code works with all the “secret stuff” bolted on, either.

Once upon a time, Bill Gates remarked, “A lot of customers in a sense don’t want — the notion that they would go in and tinker with the source code, that’s the opposite of what’s supposed to go on. We’re supposed to give that to them and it’s our problem to make sure that it works perfectly and they rely on us.” This, of course, comes from a man who enjoys substantial power through accumulation of wealth by various means, many of them involving the denial of choice, control and power to others. It is high time we all stopped listening to people who disempower us at every opportunity so that they can enrich themselves at our expense.

Getting official on Oettinger’s lobbyist meetings [Update]

Karsten on Free Software | 04:22, Tuesday, 16 June 2015

We’ve been looking at how EU Commissioner Günther Oettinger is handling transparency on his meetings with lobbyists. Turns out, not very well. In response to our informal questions, the Commission updated the lists of meetings over the weekend.

There are two lists of meetings with interest representatives. The one for Oettinger’s cabinet (i.e. his team) looks reasonably complete. The list for the Commissioner himself, however, is a different story. Apparently, we are to believe that he has met just six lobbyists in the past four months. That would be an astounding lack of activity for the person in charge of some of the EU’s most contested policy issues: data protection, copyright reform and net neutrality.

[UPDATE: Oettinger's team has continued to add meetings to the list. Both the lists for Oettinger himself and for his team look pretty reasonable now, at least up to the beginning of June. The most recent published meeting was on June 3, almost three weeks ago. There is a significant and unexplained gap in Oettinger's list during April, with just one meeting listed for the period between April 1 and April 20. Oettinger's Head of Cabinet, Michael Hager, has written to me and explained that a long-term sickness leave in the cabinet has led to a delay in publishing the meetings.]

Since the Commission apparently doesn’t fully respond to informal prodding, the excellent Kirsten Fiedler over at EDRi has filed an official request for access to documents. If you, like me, are curious about what’s keeping the EU’s digital czar busy, you can follow the request at

In addition to the full list of meetings, it would be interesting to know what guidelines Oettinger and his team use when it comes to transparency on less formal meetings with lobbyists. Presumably, the Commissioner meets people not just at his office, but also at the events where he frequently appears as a speaker. This is a substantial hole in the Commission’s own transparency policy, and I’d love to know how Oettinger and his team are planning to fill it.

Friday, 12 June 2015

Oettinger’s transparency problem, part II

Karsten on Free Software | 07:31, Friday, 12 June 2015

On Wednesday, I pointed out that Commissioner Oettinger, who handles matters of digital economy and society for the European Commission, has not been keeping up on publishing his meetings with lobbyists.

Though the Commission’s generic inquiry team hasn’t gotten back to me yet, I thought I’d accelerate the process a little. So I’ve now sent a mail to the cabinet of First Vice President Timmermans, who is in charge of coordinating the Commission’s work on transparency:

Dear Ms Sutton,
dear Messrs Timmermans and Hager,

I am writing to you with a question regarding the publication of meetings by Commissioner Oettinger and his cabinet with interest representatives.

When the Commission announced on November 25, 2014, that it would henceforth publish all meetings that the Commissioners and their teams held with interest representatives, this was universally welcomed. Civil society organisations like FSFE in particular greeted this announcement as an important step which would serve to increase the trust of European citizens in the Commission.

At the Free Software Foundation Europe, we are highly appreciative of the results this has brought, and we value the efforts made by the Commission to increase transparency through this mechanism.

So it is with some regret that we feel the need to point out that, while most Commissioners and teams publish their meetings with great diligence, some are unfortunately not living up to the expectations set by the Commission’s announcement from November.

Given FSFE’s focus on matters of digital policy, we see a specific need to highlight that Commissioner Oettinger and his cabinet appear to be somewhat behind on publishing their meetings on the relevant web pages.

The last meeting which Commissioner Oettinger has published took place on February 20, 2015:

The most recent published meeting of one of the Commissioner’s team members took place on March 25, 2015:

Given that Commissioner Oettinger is heading up, together with Vice President Ansip, one of the Commission’s flagship initiatives (which, accordingly, is hotly contested), the Digital Single Market, we submit that the highest standards of transparency on meetings should be applied here.

We hope that the Commission, and in particular Commissioner Oettinger and his cabinet, will be able to update the published list of meetings at the earliest opportunity. We believe that a formal Request for Access to Documents is not needed to deal with such a routine matter, and are confident that the Commission will quickly move to rectify the situation.

In closing, please let me reiterate FSFE’s deep appreciation of the Commission’s commitment to transparency.


Karsten Gerloff
President, Free Software Foundation Europe

Let’s see if this brings any results. If not, there’s always the formal request for access to documents. But if we had to resort to this tool in order to remind the Commission of its own commitments, that would amount to a confession of failure on the part of the EC.

Thursday, 11 June 2015

FSFE Fellowship and at Veganmania in Vienna 2015

FSFE Fellowship Vienna » English | 19:10, Thursday, 11 June 2015

Gregor mans the information desk
Martin not yet behind the information desk
René fully engaged
The four meter long information desk

This year’s vegan summer festival in Vienna was once more bigger than ever before. It not only lasted four days; it also doubled in size. Last year 35 exhibitors were present. From Wednesday 3rd to Saturday 6th of June 2015, no fewer than 70 organisations and companies set up their stalls in front of the Museumsquartier (MQ), opposite the famous museums of art history and natural history.

But not only the festival itself got bigger; our already traditional information stand was larger too. We were given more space and could therefore offer about four meters of tightly packed information material for a total of 50 hours (excluding breaks). Unfortunately, besides me, only Gregor was available from our Fellows to man the stall. He came on Wednesday and Thursday. Luckily this didn’t cause serious problems, since we received unexpected help from other people later on.

Martin has been using Free Software for quite a while and has visited our stalls on different occasions over the years. So I knew that he is very knowledgeable about the issues we usually speak to people about. When I asked him for help, he instantly shifted his timetable and jumped in when I needed to rush to the Radio Orange studio for a live show about the festival itself. Not everyone feels comfortable handling the technical side of live radio shows, even if, as in this case, it is very easy.

Radio Orange is an interesting subject in its own right. Austria was quite late in liberalising radio licenses. One of the first free radios was Radio Orange (o94), and it is set up completely with Free Software. I am constantly amazed how well this is done. Hundreds of very different people use its setup on a regular basis, some more frequently than others. Some are very computer savvy; others avoid computers altogether. But the obviously very skilled technicians who built and administrate the radio’s setup manage to give all these very different people a good experience. I’ve been helping with two shows for quite some time now, and everything runs 24/7. People doing their own shows just enter the live studio and start talking at the right time. It’s as easy as that. Pre-recorded shows are handled similarly effortlessly: it’s possible to upload shows beforehand, and they get aired at the right time automatically. Heck, there is even an automatic replacement if someone doesn’t show up or forgot to upload anything. I don’t know any other example of a complicated system with such a wide range of user types running this smoothly. Of course I have encountered glitches from time to time, but they were small and dealt with quickly. This is an impressive example of how powerful and reliable Free Software can be.

Back to the festival: Martin did a great job manning the stall while I was away for the radio show. When I got back, he even stayed longer to support the stall. Many friends of mine visited me at the stall, but there wasn’t much chance to talk to me: I was involved in interesting, engaging conversations about Free Software with ordinary visitors virtually the whole time. Often my friends didn’t even get the chance to talk to me and went away after waiting a while for me to become available again. Even when more than one person was manning our information desk, some people didn’t get a chance to talk to us because there was more demand than we could meet.

On Friday, René from Frankfurt, Germany showed up. Originally he had made the journey just to visit the Veganmania festival. He had his luggage with him and got stuck at our desk. In the beginning he was just a normal visitor, but after a while he stepped in because there were so many people who wanted to ask questions, and he obviously could answer them. In the end he helped man our desk until Saturday night. Happy with his competent help, I invited him to stay at my place, and we had a great time discussing Free Software ideas until late in the night. We didn’t get much sleep, because we set up the stall at about 9:30am each day and stayed there until 10pm. Unfortunately, we got on so well that he missed his train on Sunday, and therefore had to endure an unpleasant train ride back home without the possibility of sleep.

I appreciate the unexpected help from Martin and René and hope they will stick around. Both assured me they loved the experience and want to do it again in future.

As usual, many people were lured to our desk by the Free Your Android poster. Others just dropped by to find out what this was all about, since they didn’t expect our subject at a vegan summer festival. But of course it was easy to explain how activism and Free Software relate to each other. In the end we ran out of several leaflets and stickers. In the hot weather we didn’t manage to sell the last of our Fellowship hoodies, but we sold some “there is no cloud …” bags and also received a donation.

The information desk marathon left us with a considerably smaller leaflet stack, skin as brown as after two weeks of holidays, and many great memories of discussions with very different people. The Veganmania summer festivals in Vienna are clearly worth the effort. We were even explicitly invited to join the vegan summer festival in Graz in September, since the organisers wanted someone there to provide information about Free Software too. I guess it is not necessary for us to travel to Graz, since I’m told there are dedicated Free Software advocates there as well.

Wednesday, 10 June 2015

Oettinger has a transparency problem

Karsten on Free Software | 12:37, Wednesday, 10 June 2015

In November of last year, the European Commission loudly trumpeted a new-found commitment to transparency. In a press release, it said that from now on, all meetings between Commissioners, their team members (the “cabinet”) and interest representatives would be made public. I was always curious how well this promise would hold up in practice.

Not very well, it seems now. The meeting pages for the EU Commissioner for Digital Economy and Society, Günther Oettinger, look rather deserted. If the pages are to be believed, Oettinger last met anyone on February 20, while his cabinet members at least interacted with lobbyists until March 25.

Given that Oettinger appears to be alive and well, I am curious about what’s going on here. As a first step, I have informally contacted the Commission and requested that the pages be updated. If I don’t get a reply within the advertised three business days, I will file a request for access to documents.

Here’s my mail to the Commission:

Dear Madam, Sir,

on November 25, 2014, the Commission issued a press release [] announcing that all meetings by Commissioners and their cabinets would be made public on the Commission’s website.

It appears that in the case of Commissioner Oettinger, the EC has fallen behind somewhat on this commitment.

On the relevant web page, the latest meeting listed for Commissioner Oettinger took place on Feb. 20, 2015. The latest meeting listed for members of his cabinet was on March 25.

You will agree that this state of affairs is not satisfactory with regards to transparency. I would like to request that you provide me – and ideally my fellow citizens – with comprehensive information on the meetings held by Commissioner Oettinger after February 20, 2015, and those held by his cabinet members after March 25, 2015.

While I would be happy to receive this information by email, I would much prefer if the relevant web pages were simply updated to reflect the most recent meetings.

Thank you for your assistance.

With kind regards,
Karsten Gerloff

Tuesday, 09 June 2015

Firefox with Tor/Orbot on Android

Jens Lechtenbörger » English | 19:42, Tuesday, 09 June 2015

In my previous post, I explained three steps for more privacy on the Net, namely (1) opt out from the cloud, (2) encrypt your communication, and (3) anonymize your surfing behavior. If you attempt (3) via Tor on Android devices, you need to be careful.

I was surprised how complicated anonymized browsing is on Android with Firefox and Tor. Be warned! Some believe that Android is simply a dead end for anonymity and privacy, as phones are powerful surveillance devices, easily exploitable by third parties. An excellent post by Mike Perry explains how to harden Android devices.

Anyway, I’m using an Android phone (without Google services, as explained elsewhere), and I want to use Tor for occasional surfing while resisting mass surveillance. Note that my post is unrelated to targeted attacks and espionage.

The Tor port to Android is Orbot, which can potentially be combined with different browsers. In any case, the browser needs to be configured to use Tor/Orbot as proxy. Some browsers need to be configured manually, while others are pre-configured. At the moment, nothing works out of the box, though, as you can see in this thread on the Tor Talk mailing list.

Firefox on Android mostly works with Orbot, but downloads favicons without respecting proxy preferences, which is a known bug. In combination with Tor, this bug is critical, as the download of favicons reveals the real IP address, defeating anonymization.

Some guides for Orbot recommend Orweb, which has too many open issues to be usable. Lightning Browser is also unusable for me. Currently, Orfox (a port of the Tor Browser to Android) is under development. Just like plain Firefox, though, Orfox deanonymizes Tor users by downloading favicons without respecting proxy preferences, revealing the real IP address.

The only way of which I’m aware to use Firefox or Orfox with Tor requires the following manual proxy settings, which only work over Wi-Fi.

  1. Connect to your Wi-Fi and configure the connection to use Tor as system proxy: Under the Wi-Fi settings, long-press on your connection, choose “Modify network” → “Show advanced options”. Select “Manual” proxy settings and enter localhost and port 8118 as HTTP proxy. (When you start Orbot, it provides proxy services into the Tor network at port 8118.)

  2. Configure Firefox or Orfox to use the system proxy and avoid DNS requests: Type about:config into the address bar and verify that network.proxy.type is set to 5, which should be the default and lets the browser use the system proxy (the system proxy is also used to fetch favicons). Furthermore, you must set network.proxy.socks_remote_dns to true, which is not the default. Otherwise, the browser leaks DNS requests that reveal your real IP address.

  3. Start Orbot, connect to the Tor network.

  4. Surf anonymized. At the moment you need to configure the browser’s privacy settings to clear private data on exit. Maybe you want to wait for an official Orfox release.
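For reference, the two preferences from step 2 look like this in Firefox’s user.js preference syntax (an illustrative fragment; on Android you would normally toggle these via about:config rather than edit a file):

```js
// Use the system-wide proxy settings (value 5; usually the default):
user_pref("network.proxy.type", 5);
// Resolve DNS through the proxy as well; without this the browser
// leaks DNS requests that reveal your real IP address:
user_pref("network.proxy.socks_remote_dns", true);
```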

Three steps towards more privacy on the Net

Jens Lechtenbörger » English | 19:06, Tuesday, 09 June 2015

Initially, I wanted to summarize my findings concerning Tor with Firefox on Android. Then, I decided to start with an explanation why I care about Tor at all. The summary, that I had in mind initially, then follows in a subsequent post.

I belong to a species that appears to be on the verge of extinction. My species believes in the value of privacy, also on the Net. We have not yet despaired or resigned in the face of mass surveillance and ubiquitous, surreptitious, nontransparent data brokering. Instead, we made a deliberate decision to resist.

People around us seem to be indifferent to mass surveillance and data brokerage. Recent empirical research indicates that they have resigned themselves to it. In consequence, they submit to the destruction of our privacy (theirs and, though they don’t realize it, also mine). I may be an optimist in believing that my species can spread by the proliferation of simple ideas. This is an infection attempt.

Step 1. Opt out of the cloud and its “piracy policies”.

In this post, I use the term “cloud” as a placeholder for convenient, centralized services provided by data brokers from remote data centers. Such services are available for calendar synchronization, file sharing, e-mail and messaging, and I recommend avoiding those services that gain access to “your” data, turn it into their data, and generously grant access rights also to you (next to their business partners, as well as intelligence agencies and other criminals with access to their infrastructure).

My main advice is simple, if you are interested in privacy: Opt out of the cloud. Do not entrust your private data (e-mails, messages, photos, calendar events, browser history) to untrustworthy parties with incomprehensible terms of service and “privacy” policies. The typical goal of a “privacy” policy is to make you renounce your right to privacy and to allow companies the collection and sale of data treasures based on your data. Thus, you should really think of a “piracy policy” whenever you agree to those terms. (By the way, in German, I prefer “Datenschatzbedingungen” to “Datenschutzbedingungen” towards the same end.)

Opting out of the cloud may be inconvenient, but it is necessary and possible. Building on a metaphor that I borrow from Eben Moglen, privacy is an ecological phenomenon. All of us can work jointly towards improving our privacy, or we can pollute our environment, pretending that we don’t know better or that each individual has little or no influence anyway.

While your influence may be small, you are free to choose. You may choose to send e-mails via some data broker. If you make that choice, then you force your friends to send replies intended for your eyes to your data broker, reducing their privacy. Alternatively, you may choose some local, more trustworthy provider. Most likely, good alternatives are available in your country; there certainly are some in Germany such as and Posteo (both were tested positively in February 2015 by Stiftung Warentest; in addition, I’m paying 1€ per month for an account at the former). Messaging is just the same. You are free to contribute to a world-wide, centralized communication monopoly, sustaining the opposite of private communication, or to choose tools and services that allow direct communication with your friends, without data brokers in between. (Or you could use e-mail instead.) Besides, you are free to use alternative search engines such as Startpage (which shows Google results in a privacy friendly manner) or meta search engines such as MetaGer or ixquick.

Step 2. Encrypt your communication.

I don’t think that there is a reason to send unencrypted communication through the Net. Clearly, encryption hinders mass surveillance and data brokering. Learn about e-mail self-defense. Learn about off-the-record (OTR) communication (sample tools at PRISM Break).
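As a concrete starting point, here is a minimal GnuPG session — a sketch assuming GnuPG 2.x is installed. It uses symmetric (passphrase-based) encryption for brevity; real e-mail self-defense, as the linked guide explains, uses public-key pairs instead. The file name and passphrase are placeholders:

```shell
# Encrypt a note with a symmetric passphrase (demo only; use key pairs for e-mail):
echo 'meet at noon' > note.txt
gpg --batch --yes --pinentry-mode loopback --passphrase 'demo-passphrase' \
    --symmetric --cipher-algo AES256 -o note.txt.gpg note.txt
# Decrypt it again; only someone with the passphrase can read it:
gpg --batch --quiet --pinentry-mode loopback --passphrase 'demo-passphrase' \
    --decrypt note.txt.gpg
```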

Step 3. Anonymize your surfing behavior.

I recommend Tor for anonymized Web surfing to resist mass surveillance by intelligence agencies as well as profiling by data brokers. Mass surveillance and profiling are based on bulk data collection, where it’s easy to see who communicates where and when with whom, potentially about what. It’s probably safe to say that with Tor it is not “easy” any more to see who communicates where and when with whom. Tor users do not offer this information voluntarily, they resist actively.

On desktop PCs, you can just use the Tor Browser, which includes the Tor software itself and a modified version of the Firefox browser, specifically designed to protect your privacy, in particular in view of basic and sophisticated identification techniques (such as cookies and various forms of fingerprinting).

On Android, Tor Browser does not exist, and alternatives need to be configured carefully, which is the topic for the next post.

Monday, 08 June 2015

Setting up a BeagleBone Black to flash coreboot

the_unconventional's blog » English | 13:30, Monday, 08 June 2015

As most readers will know, I like coreboot. Recently, I’ve successfully flashed a T60 and an X200 with a Raspberry Pi, but especially the latter was a cumbersome experience.

That pushed me towards buying a BeagleBone Black: Libreboot’s recommended hardware flasher. However, the BBB does not work out of the box. While Libreboot’s documentation is all right, it did take me a while to get everything to work as intended.


Initial boot

First, connect your BBB’s ethernet port to a router or switch. After the initial boot, it may take a while for all services to start, but eventually it should obtain a DHCP lease. If multicast DNS is working on your network, you can reach the BBB as beaglebone.local.

Log in as root over SSH. There is no root password. Set a root password.

ssh root@beaglebone.local
passwd root

In case you have a BBB from element14, you’ll most likely have to fix a bug in the init script that prevents you from updating Debian.
(Aren’t init scripts great, systemd-haters? :)

Run nano /etc/init.d/ and replace its contents with the following lines:

#!/bin/sh -e
### BEGIN INIT INFO
# Provides:
# Required-Start:    $local_fs
# Required-Stop:     $local_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start LED aging
# Description:       Starts LED aging (whatever that is)
### END INIT INFO

x=$(/bin/ps -ef | /bin/grep "[l]ed_acc")
if [ ! -n "$x" -a -x /usr/bin/led_acc ]; then
    /usr/bin/led_acc &
fi

Now, update all packages and reboot the BBB.

apt-get update
apt-get dist-upgrade
systemctl reboot


Setting up spidev

First, create a new file called BB-SPI0-01-00A0.dts.

nano BB-SPI0-01-00A0.dts

Add the following lines and save the file:


/dts-v1/;
/plugin/;

/ {
    compatible = "ti,beaglebone", "ti,beaglebone-black";

    /* identification */
    part-number = "spi0pinmux";

    fragment@0 {
        target = <&am33xx_pinmux>;
        __overlay__ {
            spi0_pins_s0: spi0_pins_s0 {
                pinctrl-single,pins = <
                  0x150 0x30  /* spi0_sclk, INPUT_PULLUP | MODE0 */
                  0x154 0x30  /* spi0_d0, INPUT_PULLUP | MODE0 */
                  0x158 0x10  /* spi0_d1, OUTPUT_PULLUP | MODE0 */
                  0x15c 0x10  /* spi0_cs0, OUTPUT_PULLUP | MODE0 */
                >;
            };
        };
    };

    fragment@1 {
        target = <&spi0>;
        __overlay__ {
            #address-cells = <1>;
            #size-cells = <0>;

            status = "okay";
            pinctrl-names = "default";
            pinctrl-0 = <&spi0_pins_s0>;

            spidev@0 {
                spi-max-frequency = <24000000>;
                reg = <0>;
                compatible = "linux,spidev";
            };
        };
    };
};
Then compile it with the device tree compiler.

dtc -O dtb -o BB-SPI0-01-00A0.dtbo -b 0 -@ BB-SPI0-01-00A0.dts

And copy it to /lib/firmware.

cp BB-SPI0-01-00A0.dtbo /lib/firmware/

Now enable the device tree overlay.

echo BB-SPI0-01 > /sys/devices/bone_capemgr.*/slots
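To confirm the overlay actually loaded, you can check the cape manager’s slots file and look for the new device node. These paths only exist on the BBB’s 3.8-series kernel, so treat this as a sanity check on the board itself:

```
cat /sys/devices/bone_capemgr.*/slots   # a line mentioning BB-SPI0-01 should appear
ls -l /dev/spidev1.0                    # the SPI device node exposed by spidev
```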

Finally, make the overlay permanent by running nano /etc/default/capemgr and adding/changing the CAPE= line:

CAPE=BB-SPI0-01

Getting Flashrom

Download the latest libreboot_util archive and extract it.

tar -xf libreboot_util.tar.xz

Go to the flashrom/armv7l directory and make flashrom executable.

cd libreboot_util/flashrom/armv7l/
chmod +x flashrom

Test Flashrom. (It should output an error, because it’s not connected to an EEPROM.)

./flashrom -p linux_spi:dev=/dev/spidev1.0,spispeed=512


Wiring up

If you’re like me and previously used a Raspberry Pi, you’ll have to make some modifications to your wires. The Raspberry Pi has male headers, while the BeagleBone Black has female headers.

Due to outside interference, it’s best to keep the wires as short as possible.

I had a flatcable for my RPi, so first I cut off some same-coloured wires and soldered them together at a reasonable length. Don’t forget to heat shrink everything.
(Skip this step if you already have F-F wires of reasonable length.)

If you have them, it's really handy to put numbered labels on both ends.

If you have F-F wires, one end will need to get a sex change*. I used some steel wire and soldered it into the female connectors. Then I added more heat shrink tubing.

Just add a drip of tin and let them cool off.

* No GNOME developers were harmed in the process.

They should end up looking something like this. Use a multimeter to make sure the wires conduct properly!

Now we have the BBB wires, it’s time to make PSU wires.

I took these wires from an old PC speaker.

You can use regular indoor electrical wires and add a female header to the brown wire and a male header to the blue wire. These wires do not have to be very short. In fact, it’s easier to keep them a bit longer.

Solder brown to red and blue to black. Then heat shrink as usual.

For the black wire, you’ll have to do another sex change if you used a female connector. The red wire goes on the clip, so leave that one as it is.


Pinouts and power supplies

Add the male wires to the BBB’s P9 header following this diagram:

If you touch it, it gets bigger!

Although the BBB has its own 3.3V rail, it’s considered best practice to use an external power supply. You can easily use a regular ATX or SFX PSU for this.

I'm using an SFX PSU for easier transportation.

All you have to do is hotwire it by shorting the green wire with one of the ground wires. This can be done with any conductive wire (like a paperclip or a hairpin).

Short pins 4 (green) and 5 (black) on the top row.

After that, you can connect the +3.3V pin to one of the orange/brown wires and the GND pin to one of the black wires. I’d recommend using pins 2 and 3.

Only connect the brown wire to one of the +3.3V pins. Nothing else! Yellow, red and/or purple pins will KILL EVERYTHING.

You may want to add some tin and some heat shrink tubing on the ends of the electrical wires to make them clamp into the ATX plug.


A slight fluctuation (<100mV) isn't going to be harmful.


Once you’ve triple checked everything, connect the PSU’s +3.3V pin to the SOIC clip and the PSU’s GND to pin #2 on the BBB.

It ends up looking something like this.

Boot up the BBB and then power up the PSU. If nothing catches fire, it’s time to do a victory dance!
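From here, a typical flashing session reads the chip twice to verify a stable connection before writing anything. A hedged sketch (libreboot.rom stands in for whatever image you intend to flash; the programmer string is the one tested above, and this obviously only works on the wired-up BBB):

```
# Read the current flash contents twice:
./flashrom -p linux_spi:dev=/dev/spidev1.0,spispeed=512 -r dump1.rom
./flashrom -p linux_spi:dev=/dev/spidev1.0,spispeed=512 -r dump2.rom
# Both reads must produce identical hashes before you trust the wiring:
sha1sum dump1.rom dump2.rom
# Only then write the new image:
./flashrom -p linux_spi:dev=/dev/spidev1.0,spispeed=512 -w libreboot.rom
```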

Quick Look: Dell XPS 13 Developer Edition (2015) with Ubuntu 14.04 LTS

Losca | 16:55, Monday, 08 June 2015

I recently obtained Dell's newest Ubuntu developer offering, the XPS 13 (2015, model 9343). I opted for the FullHD non-touch display, mostly because of better battery life, no actual need for a higher resolution, and the matte screen, which is great outdoors. Touch would have been "nice-to-have", but in my work I don't really need it.

The other specifications include an i7-5600U CPU, 8GB RAM, a 256GB SSD [edit: lshw], and of course Ubuntu 14.04 LTS pre-installed as an OEM-specific installation. It was not possible to order it directly from Dell's site, as Finland is reportedly not an online market for Dell... The wholesale company, however, managed to get two models on their lists, so it's now possible to order via retailers. [edit: there are some country-specific direct web order links, however: US, DE, FR, SE, NL]

In this blog post I take a quick look at how I started using it, and make a few observations about the pre-installed Ubuntu. I was personally interested in using the pre-installed Ubuntu the way a non-Debian/Ubuntu developer would, but Dell has also provided instructions for Ubuntu 15.04, Debian 7.0 and Debian 8.0 for advanced users, among others. Even if you don't use the pre-installed Ubuntu, the benefit of buying an Ubuntu laptop is obviously the smaller cost, and on the other hand contributing to free software (by paying for the hardware enablement engineering done or purchased by Dell).


The Black Box. (and white cat)

Opened box.

First time lid opened, no dust here yet!
First time boot up, transitioning from the boot logo to a first time Ubuntu video.
A small clip from the end of the welcoming video.
First time setup. Language, Dell EULA, connecting to WiFi, location, keyboard, user+password.
Creating recovery media. I opted not to do this as I had happened to read that it's highly recommended to install upgrades first, including to this tool.
Finalizing setup.
Ready to log in!
It's alive!
Not so recent 14.04 LTS image... lots of updates.

Problems in the First Batch

Unfortunately the first batch of XPS 13s with Ubuntu is going to ship with some problems. They're easy to fix if you know how, but it's sad that they're in the factory image to begin with. It's not known when a fixed batch will start shipping; July, maybe?

First of all, installing software upgrades stalls. You need to run the following command via Dash → Terminal once: sudo apt-get install -f (it suggests upgrading libc-dev-bin, libc6-dbg, libc6-dev and udev). After that you can continue running Software Updater as usual, maybe rebooting in between.

Secondly, the fixed touchpad driver is included but not enabled by default. You need to enable the only non-enabled "Additional Driver" as seen in the picture below or as instructed on YouTube.

Dialog enabling the touchpad driver.

Clarification: you can safely ignore the two paragraphs below, they're just for advanced users like me who want to play with upgraded driver stacks.

Optionally, since I'm interested in the latest graphics drivers, especially in the case of brand new hardware like Intel Broadwell, I upgraded my Ubuntu to use the 14.04.2 Hardware Enablement stack (matches 14.10 hardware support):

sudo apt install --install-recommends libgles2-mesa-lts-utopic libglapi-mesa-lts-utopic linux-generic-lts-utopic xserver-xorg-lts-utopic libgl1-mesa-dri-lts-utopic libegl1-mesa-drivers-lts-utopic libgl1-mesa-glx-lts-utopic:i386

Even though this is much better than plain Ubuntu 14.10 would be, since many of the Dell fixes remain in use, some functionality might regress compared to the pre-installed stack. The only thing I have noticed, though, is that the internal microphone no longer works out of the box, requiring a kernel patch as mentioned in Dell's notes. This is no surprise, since proper upstream support involves switching from HDA to I2S, and during the 14.10 kernel work that was nowhere near done. If you're excited about new drivers, I'd recommend waiting until August when the 15.04-based 14.04.3 stack is available (same package names, but 'vivid' instead of 'utopic'). [edit: I couldn't resist when I saw that linux-generic-lts-vivid (3.19 kernel) is already in the archives. 14.04.2 + that gives me a working microphone again!]
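For reference, the 14.04.3 variant mentioned in the edit above would then look like this. This is an untested sketch that assumes the package naming really is identical apart from the suffix:

```shell
# Hypothetical 14.04.3 (vivid) equivalent of the 14.04.2 command above;
# package names assumed identical apart from 'vivid' replacing 'utopic'.
sudo apt install --install-recommends libgles2-mesa-lts-vivid \
  libglapi-mesa-lts-vivid linux-generic-lts-vivid xserver-xorg-lts-vivid \
  libgl1-mesa-dri-lts-vivid libegl1-mesa-drivers-lts-vivid \
  libgl1-mesa-glx-lts-vivid:i386
```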


The Dell XPS 13 Developer Edition with Ubuntu 14.04 LTS is an extremely capable laptop + OS combination nearing perfection, but not quite there because of the software problems in the launch pre-install image. The laptop looks great, feels like a quality product should, and is very compact for the screen size.

I've moved all my work onto it and everything so far is working smoothly in my day-to-day tasks. I'm staying on Ubuntu 14.04 LTS and using my previous LXC configuration to run the latest Ubuntu and Debian development versions. I've also made some interesting changes already, like LUKS In-Place Conversion, converting the pre-installed Ubuntu into a whole-disk-encrypted one (not recommended for the faint-hearted; GRUB reconfiguration is a bit of a pain).

I look happily forward to working a few productive years with this one!

Saturday, 06 June 2015

Flashing Libreboot on an X200 with a Raspberry Pi

the_unconventional's blog » English | 15:30, Saturday, 06 June 2015

A couple of weeks ago, someone kindly donated a ThinkPad X200 for me to practice flashing coreboot on. As the X200 has an Intel GPU, it is able to run Libreboot; the 100% Free, pre-compiled downstream of the coreboot project.

Unfortunately, the initial Libreboot flash cannot be done internally, so you have to open the laptop up. It’s not very complicated though. You only have to remove the palm rest, because the flash chip is at the top of the motherboard.

I bought another SOIC clip online (a 16-pin one this time), and I wired up the six required wires to the Raspberry Pi. It’s probably best to get a numbered GPIO cable so you’ll only have to mess about with the wires on one side.

The X200's flash chip is very susceptible to interference, so you might want to make twisted pairs.

The chip’s pins are numbered counter-clockwise starting at the bottom right. So the row near the GPU consists of pins 1 to 8, and the row near the cardbus consists of pins 9 to 16.

Picture courtesy of 'snuffeluffegus'.

The pinout is like this:

2  - 3.3V    - Pi Pin #1
7  - CS#     - Pi Pin #24
8  - S0/SIO1 - Pi Pin #21
10 - GND     - Pi Pin #25
15 - S1/SIO0 - Pi Pin #19
16 - SCLK    - Pi Pin #23


Preparing the ROM

Libreboot's ROMs are almost ready to flash without intervention. The only thing you have to add is the MAC address of your Ethernet controller. Libreboot's documentation explains how this is done.

You will need to download the X200 ROMs (most likely the 8MB versions: 4MB is rare) and libreboot_util from the Libreboot download site and extract both those xz archives in a working directory. For instance, ~/Downloads/libreboot/.

To put it as simply as possible, first you choose a ROM based on your keyboard layout.

kevin@vanadium:~/Downloads/libreboot/x200_8mb$ ls

Then you copy that file to the directory containing ich9gen, renaming it to libreboot.rom, and change to that directory.

cp ~/Downloads/libreboot/x200_8mb/x200_8mb_usdvorak_vesafb.rom ~/Downloads/libreboot/libreboot_util/ich9deblob/x86_64/libreboot.rom
cd ~/Downloads/libreboot/libreboot_util/ich9deblob/x86_64/

Now look at the sticker on the bottom of your X200 and write down (or memorize) the MAC address. If you can’t find a sticker there, another one is underneath the RAM.

Create a descriptor with your MAC address by running this command:

./ich9gen --macaddress 00:1F:00:00:AA:AA
(Of course, use your own MAC address instead of this example.)
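Before running ich9gen, it can save a failed flash to double-check that the address you copied from the sticker is well-formed. A small sketch (the helper and the MAC shown are just the example from above):

```shell
# Quick format check for the MAC address before generating the descriptor
# (hypothetical helper; substitute the MAC from your own sticker).
mac="00:1F:00:00:AA:AA"
if printf '%s\n' "$mac" | grep -Eq '^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$'; then
  echo "MAC format OK"
else
  echo "MAC format looks wrong" >&2
fi
```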

Then add that descriptor to the ROM by running this command:

dd if=ich9fdgbe_8m.bin of=libreboot.rom bs=1 count=12k conv=notrunc

Note that you have to use the 4MB version in the rare case that you have a 4MB BIOS chip.
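What the dd invocation above does can be demonstrated with stand-in files (dummy data, not the real ich9gen output): with conv=notrunc, dd patches the first 12 KiB of the image in place and the 8MB ROM keeps its exact size.

```shell
# Stand-in demonstration with dummy files (not the real ich9gen output).
head -c 8388608 /dev/zero > rom.bin      # dummy 8MB "libreboot.rom"
head -c 12288 /dev/urandom > desc.bin    # dummy 12 KiB "ich9fdgbe_8m.bin"
dd if=desc.bin of=rom.bin bs=1 count=12k conv=notrunc 2>/dev/null
stat -c %s rom.bin                       # size is still 8388608 bytes
cmp -n 12288 desc.bin rom.bin && echo "descriptor region written"
```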

Now put the libreboot.rom image on your Raspberry Pi.

scp libreboot.rom root@raspberry-pi.local:/root/libreboot/
(This is just an example using scp. Your situation may be different.)


Setting up the Raspberry Pi

You will have to update the Raspberry Pi's kernel to the latest version (3.18) in order to get the X200's chip to work somewhat reliably over spidev. (But it never really seems to be stable.)

sudo rpi-update

Then, download the Flashrom source code and compile it for armhf.

sudo apt-get install build-essential pciutils usbutils libpci-dev libusb-dev libftdi-dev zlib1g-dev subversion
svn co svn:// flashrom
cd flashrom
make

If you’re not in the mood to compile it yourself on a slow Raspberry Pi, I have precompiled armv6l Flashrom binaries here. See above for obtaining the source code.

Reading the chip took a lot of attempts before I started getting hash sum matches. In fact, I was close to giving up entirely because of the many unexplainable failed attempts.

./flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=512 -c "MX25L6405D" -r romread1.rom
./flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=512 -c "MX25L6405D" -r romread2.rom
./flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=512 -c "MX25L6405D" -r romread3.rom

sha512sum romread*.rom

(All checksums must be identical.)
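The consistency check above can be wrapped into a small reusable helper, so you only flash once all reads agree (hypothetical function name; pass it the dump files):

```shell
# Returns success only when every dump passed in hashes identically.
reads_consistent() {
  [ "$(sha512sum "$@" | awk '{print $1}' | sort -u | wc -l)" -eq 1 ]
}

# e.g.: reads_consistent romread1.rom romread2.rom romread3.rom && echo "safe to flash"
```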

To be honest, I have no idea what I did exactly to get it working eventually. All of a sudden, the checksums started matching three times in a row, even though I was on another floor, using SSH, and didn’t touch or change anything.

So after three successful reads, I just crossed my fingers and flashed the ROM.

./flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=512 -c "MX25L6405D" -w ../libreboot.rom

And I bricked the laptop…

Uh oh. Erase/write failed. Checking if anything changed.
Your flash chip is in an unknown state.

I booted it up, and everything remained black.

After crying for three hours straight, I tried flashing once more with spispeed=128, and then it worked. I got a “VERIFIED” at the end and it booted to GRUB.

./flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=128 -c "MX25L6405D" -w ../libreboot.rom


Hello GRUB.

GRUB easily allowed me to load the Debian netinstaller. All that was left to do was install an Atheros AR9280 WiFi card from my collection, and I now have a working X200 with zero bytes of proprietary code on it. (Aside from the SSD controller, probably.)

Ironically, you need a USB drive with a UEFI menu for GRUB to boot from it.

Afterward, I tried some more test reads (spispeed varying between 128 and 1024), and I’d still be getting checksum mismatches at least one out of five times (sometimes more). So I still feel like the Raspberry Pi is quite volatile, and I’m curious whether the BBB is just as ‘hit and miss’.

I gave this X200 to my grandfather, who now uses it as his daily work machine. (He’s an accountant.) There are no hardware issues with Libreboot, other than the undock button on the docking station not working. But I already thought of a fix for that.

It's now docked and running Ubuntu 14.04 LTS with Linux-libre.

In the end, I concluded that flashing the X200 with a Raspberry Pi is quite a chore. It’s possible, but it seems to be dependent on the weather. The failure rate is easily 80%, although you will probably get it to work eventually.

I received my BeagleBone Black yesterday, so I’m looking forward to flashing the next X200 with that one. I’ve already made some wires.

Thankfully, after flashing Libreboot the first time, subsequent updates can be done internally with flashrom -p internal or simply Libreboot’s ./flash script.

Thursday, 04 June 2015

Ubuntu Phone review by a non-geek

Seravo | 06:05, Thursday, 04 June 2015



A few weeks ago I found a pretty black box waiting on my desk at the office. There it was: the BQ Aquaris E4.5, Ubuntu Edition. Now available for sale all over Europe, the world's first Ubuntu phone had arrived into the eager hands of Seravo. (Working in an open office with a bunch of other companies dealing more or less with IT, one can now easily get attention not just by talking on the phone, but about it, too.)

The Ubuntu phone has been in development for a while, and now it has found its first users and can really be reviewed in practice. Can Ubuntu meet the expectations and demands of a modern mobile user? My personal answer, after getting to know my phone and seeing what it can and cannot do, is: yes, but not yet.

But let's get back to the pretty black box. For a visual (and not-that-technical) person such as myself, the amount of thought put into the design of the Ubuntu phone is very pleasing. The layout of the box's packaging alone is very nice, both to the eye and from the point of view of usability. The same goes (at least partly) for the phone and its operating system itself: the developers claim that "Ubuntu Phone has been designed with obsessive attention to detail" and that "form follows function throughout". So it is not only the box that is pretty.



Swiping through the scopes

When getting familiar with the Ubuntu phone, one can simply follow clear instructions to get the most relevant settings in place. A nice surprise was that the system has been translated into Finnish – and into a whole bunch of other languages ranging from Catalan to Uyghur.

The Ubuntu phone tries to minimize the effort of browsing through several apps by introducing scopes. "Ubuntu's scopes are like individual home screens for different kinds of content, giving you access to everything from movies and music to local services and social media, without having to go through individual apps." This is a fine idea, and it works up to a certain point. I would, though, have appreciated an easier way to adjust and modify my scopes, so that they would indeed serve my everyday needs. It is for instance not possible to change the location for the Today section, so my phone still thinks that I'm interested in the weather and events near Helsinki (which is not the case, as my hometown Tampere is light years – or at least 160 kilometers – away from the capital).

Overall, swiping is the thing with the Ubuntu phone. One can swipe from the left, swipe from the right, swipe from the top and the bottom and all through the night, never finding a button to push in order to get to the home screen. There are no such things as home buttons or home screens. This requires practice until one gets familiar with it.



Designed for the enthusiasts

A friend of mine once said that in order to really succeed, ecological products must be able to compete with non-ecological ones in usability – or at times even beat them in that area. A green product that does not work can never achieve popularity. The same thought can be applied to open source products as well: as the standard is high, the philosophy itself is not enough if the end product fails to do what it should.

With this thought in mind, I was happy to notice that the Ubuntu phone is not only new and exciting, but also quite usable in everyday work. There are, though, bugs, missing features and some fairly relevant apps absent from the selection. For services like Facebook, Twitter, Google+ or Google Maps, the Ubuntu phone uses web apps. If one is addicted to Instagram or WhatsApp, one should still wait before purchasing an Ubuntu phone. Telegram, a nice alternative for instant messaging, is available though, and so is the possibility to view one's Instagram feed. It also remains a mystery to me what benefits signing in to Ubuntu One can bring to the user – except for updates, which are indeed longed for.

To conclude, I would say that at this point the Ubuntu phone is designed for enthusiasts and developers, and it should keep evolving to become popular with the masses. The underlying idea of open source should of course be supported, and it will be interesting to see the Ubuntu phone develop in the near future. Hopefully the upcoming updates will fix the most relevant bugs and the app selection will grow to fill the needs of an average mobile phone user.


Read more about the Ubuntu Phone.

Tuesday, 02 June 2015

Count downs: T -10 hours, -12 days, -30 days, -95 days

Mario Fux | 21:10, Tuesday, 02 June 2015

I have wanted to write this blog post for quite some time, and with one of the fundraisers I'd like to mention almost over, I finally took the time to write it:

So the first fundraiser I'd like to write about is the Make Krita faster than Photoshop Kickstarter campaign. It's almost over and already a success, but that doesn't mean you can't still become a supporter of this awesome painting application. And in case you haven't seen it, there is a series of interviews with Krita users (and thus users of KDE software) that you should read, at least in part.

The second crowdfunding campaign I'd like to mention is about the board game Heldentaufe. It's a bit of a family thing, as this campaign (and thus the board game) is mostly the work of a brother-in-law of mine. He worked on this project for several years – it started as his master's thesis. And I must say it looks really nice (I don't know if the French artist used Krita as well) and is "simple to learn, but difficult to master". So if you like board games, go and support it.

And the third fundraiser I'd like to talk about is from our friends at Kolab. They plan to refactor and improve one of the most successful pieces of webmail software. And as everybody here should be aware of how important email is, I hope that every reader of this blog post will go to their Indiegogo page and give at least $10.

So some of you might now ask: and what about the -95 days? In 95 days the sixth edition of the Randa Meetings will start. As I'm sure it will become a very successful edition again, and as a lot of people want to come to Randa and work there as hard as they can, we want to help them by sponsoring their travel costs, so we plan another fundraiser for this and other KDE sprints in general. If you would like to help us, don't hesitate to write me an email (fux AT kde org) or ping me on IRC.

UPDATE: As the first comment mentions, the Heldentaufe Kickstarter was cancelled this morning; you can read about the reasons in the latest update. But I'm optimistic that there will be a second fundraising campaign in the future, and if you're interested, don't hesitate to write me an email and I'll ping you when the new campaign starts.


Monday, 01 June 2015

Free Software in Education Keynote at DORS/CLUC

Being Fellow #952 of FSFE » English | 15:22, Monday, 01 June 2015

I haven't informed you about recent education news, as I was busy preparing talks on the same subject. The latest was at DORS/CLUC in Zagreb, a traditional FS event that celebrated its 22nd edition! Wow – the only computing device I owned back when the first DORS/CLUC came to light was a calculator.

I put the slides of the talk here and I hope that the video recording will be online soon as well.

Guido at DORS/CLUC

me at DORS/CLUC (by Sam Tuke)

I was happy to meet former FSFE employee Sam Tuke again, who is now saving the world at Collabora and the Document Foundation. His keynote about the progress of LibreOffice was pretty impressive.

I also had the pleasure to meet Gijs Hillenius again. I wouldn’t be aware of most of the news I usually write about without him. Unfortunately, I couldn’t attend his keynote as I had to leave on the same day.

I talked to people from an edu project in Croatia. I attended their talk even though I can’t understand Croatian, but it was enough to establish a first contact and we will stay in touch.

There is not much left to say but thank you to Svebor from HrOpen and the rest of the crew, who organized an awesome event! The only downside of the trip to Zagreb was that it was so short! But I think I got the most out of it, as I was even able to get into the sea during my two-hour stopover in Split, and instead of waiting 10 minutes for a cab at the venue, I took a little walk through Zagreb to the main station and had a glance at this beautiful city. :)

I was told how nice Croatians are before I went and can now confirm that this is true. I’m looking forward to the next DORS/CLUC!


Facebook offers to send you encrypted emails. This won’t help you.

Karsten on Free Software | 11:54, Monday, 01 June 2015

Facebook announced today that the company will let users upload their OpenPGP public keys to the site. This way, the company can encrypt the emails that it sends to its users.

When one of the world’s most-visited websites adds encryption capabilities, that’s normally a cause for applause. But on second thought, there’s very little here that makes Facebook’s users better off.

This change does nothing to protect you from Facebook’s surveillance. The site’s working principle is to maximise the amount of data it sucks in about its users. And it’s not just the site: Through its ubiquitous “Like” buttons and similar tools, Facebook follows you wherever you go on the web, and builds up a detailed profile of your behaviour.

The company then does with its users what the banks did in the years leading up to the 2008 financial crisis: Slice them into ever-finer demographics, parcel them up, and sell them to advertisers. Whether they send their emails to you encrypted, in plain text, by coach or by carrier pigeon doesn’t make any difference.

Adding encryption to the channel between you and Facebook also does very little to protect you from government surveillance. While state actors, and other people tapping your line, might not be able to read the contents of the messages, they have full access to the subject line and the metadata (who sent the message, who received it, when, and so forth). If the US government is in any way interested in what you’re doing on the site, they only need to ask. The same goes for any other government with which, in order to be allowed to operate, Facebook has cut a deal to rat out its users, such as China.

This step doesn’t even really have the benefit of getting more people to use end-to-end encryption. I’d be very surprised if anyone decided to start using GnuPG or similar tools because of this; Facebook provides no real motivation to do so.

The only benefit for users from this step is that things like password reset messages are now better protected from interception. This will somewhat reduce the risk of identity theft via Facebook, though of course it won't prevent it from happening. Still, this may somewhat reduce disruptions to Facebook's business. If we let the company get away with it, they might even succeed with their message of "we're using crypto, so we're the good guys".

This isn’t a step to make you better off. It’s a step to make Facebook better off.




Sunday, 31 May 2015

LibreOffice Human Interface Guidelines: The second step

bb's blog | 20:31, Sunday, 31 May 2015

The LibreOffice Human Interface Guidelines (HIG) have been given a new lease on life. In this posting we introduce the impact of two primary personas on guidelines about menu bars and toolbars. Almost a month has passed and it's time for an update. The LibreOffice UX team finished two more guidelines that are introduced [...]

Friday, 29 May 2015

How I got a Thinkpad T60p coreboot GNU Linux-libre Trisquel laptop

André on Free Software » English | 05:23, Friday, 29 May 2015


Recently something went wrong with my laptop, so I was not able to be online for several days. There was no quick solution. This changed my needs, and I decided to get a second laptop so I could always keep one as a ready backup.

As a Fellow of the Free Software Foundation Europe, I read about the FSF's Respects Your Freedom certification program and decided a secondhand laptop with coreboot and GNU Linux-libre would be a suitable solution.

Going secondhand seemed the way to go for me: it is cost-effective and better for the environment. If you have a popular model, spare parts are readily available.

So when Kevin Keijzer had a refurbished Thinkpad on offer, I was happy to be informed. He engineered the best parts of two Thinkpads into one, and thanks to the Libreboot project he was able to flash the laptop using flashrom. He installed the operating system of my choice on top of that. For me that is Trisquel, as it runs GNU Linux-libre and is endorsed by the FSF. The laptop would not pass the FSF's Respects Your Freedom certification because of the 64 kB VGA BIOS.


Kevin and I agreed on a price we could both live with, and I was happy to receive a high level of service. The laptop doesn't have any clearly visible scratches; it is flashed with SeaBIOS and installed with Trisquel 7 GNOME and the programs I specified. I encourage any Free Software user to contact Kevin to experience his quality engineering and service level.


During the time I was without a working computer, I freed a colleague's computer by installing Ubuntu. On his laptop I removed bloatware and replaced Office with LibreOffice.

Now that everything runs smoothly again I can sync documents and that is convenient in case of an emergency – especially if you translate for Free Software Foundation Europe.

Wednesday, 27 May 2015

Quick start using Blender for video editing

fsfe | 20:14, Wednesday, 27 May 2015

Although it is mostly known for animation, Blender includes a non-linear video editing system that is available in all the current stable versions of Debian, Ubuntu and Fedora.

Here are some screenshots showing how to start editing a video of a talk from a conference.

In this case, there are two input files:

  • A video file from a DSLR camera, including an audio stream from a microphone on the camera
  • A separate audio file with sound captured by a lapel microphone attached to the speaker's smartphone. This is much better quality sound, and we would like it to replace the sound included in the video file.

Open Blender and choose the video editing mode

Launch Blender and choose the video sequence editor from the pull down menu at the top of the window:

Now you should see all the video sequence editor controls:

Setup the properties for your project

Click the context menu under the strip editor panel and change the panel to a Properties panel:

The video file we are playing with is 720p, so it seems reasonable to use 720p for the output too. Change that here:

The input file is 25fps so we need to use exactly the same frame rate for the output, otherwise you will either observe the video going at the wrong speed or there will be a conversion that is CPU intensive and degrades the quality:

Now specify an output filename and location:

Specify the file format:

and the video codec:

and specify the bitrate (smaller bitrate means smaller file but lower quality):

Specify the AAC audio codec:

Now your basic rendering properties are set. When you want to generate the output file, come back to this panel and use the Animation button at the top.

Editing the video

Use the context menu to change the properties panel back to the strip view panel:

Add the video file:

and then right click the video strip (the lower strip) to highlight it and then add a transform strip:

Audio waveform

Right click the audio strip to highlight it and then go to the properties on the right hand side and click to show the waveform:

Rendering length

By default, Blender assumes you want to render 250 frames of output. Looking in the properties to the right of the audio or video strip you can see the actual number of frames. Put that value in the box at the bottom of the window where it says 250:
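As a back-of-the-envelope check (hypothetical numbers, not from the talk in question): the end-frame value is just duration times frame rate, so a 40-minute recording at the 25fps set earlier needs:

```shell
# End-frame value = duration in seconds x frames per second.
minutes=40
fps=25
echo $(( minutes * 60 * fps ))   # 60000 frames
```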

Enable AV-sync

Also at the bottom of the window is a control to enable AV-sync. If your audio and video are not in sync when you preview, you need to set this AV-sync option and also make sure you set the frame rate correctly in the properties:

Add the other sound strip

Now add the other sound file that was recorded using the lapel microphone:

Enable the waveform display for that sound strip too, this will allow you to align the sound strips precisely:

You will need to listen to the strips to estimate the time difference. Use this estimate to set the "start frame" in the properties for your audio strip; it will be a negative value if the audio strip starts before the video. You can then zoom the strip panel to show about 3 to 5 seconds of sound and try to align the peaks. An easy way to do this is to look for applause at the end of the audio strips: applause generates a large peak that is easily visible.
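The "start frame" arithmetic is straightforward (hypothetical numbers): if the lapel recording starts 2.5 seconds before the video, then at 25fps the start frame is negative.

```shell
# Start frame = time offset x frame rate (negative when the audio
# starts before the video; shell arithmetic truncates toward zero).
offset_ms=-2500
fps=25
echo $(( offset_ms * fps / 1000 ))   # -62
```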

Once you have synced the audio, you can play the track and you should not be able to hear any echo. You can then silence the audio track from the camera by right clicking it, look in the properties to the right and change volume to 0.

Make any transforms you require

For example, to zoom in on the speaker, right click the transform strip (3rd from the bottom) and then in the panel on the right, click to enable "Uniform Scale" and then set the scale factor as required:

Next steps

There are plenty of more comprehensive tutorials, including some videos on Youtube, explaining how to do more advanced things like fading in and out or zooming and panning dynamically at different points in the video.

If the lighting is not good (faces too dark, for example), you can right click the video strip, go to the properties panel on the right hand side and click Modifiers, Add Strip Modifier and then select "Color Balance". Use the Lift, Gamma and Gain sliders to adjust the shadows, midtones and highlights respectively.

Sunday, 31 May 2015

KSysGuard: The Green Paper

bb's blog | 20:31, Sunday, 31 May 2015

This green paper summarizes user comments regarding needs and wishes for a new KSysGuard and presents a couple of mockups to initiate the discussion at the KDE forums. [Read the full article]

Tuesday, 30 June 2015

How I learned to love the NASA

Don't Panic » English Planet | 08:31, Tuesday, 30 June 2015

Ever wondered about the brilliance of the sky you’re looking at night? Well, honestly, I already liked the NASA before, especially for producing and offering royalty-free content. But this week, I encountered something that made me like the NASA even … Continue reading

Tuesday, 19 May 2015

Pushing fast forward: Roundcube Next.

freedom bits | 07:02, Tuesday, 19 May 2015

If you are a user of Roundcube, you will want to contribute to this campaign. If you are a provider of services, you will definitely want to get engaged and join the advisory group. Here is why.

Free Software has won. Or has it? Linux is certainly dominant on the internet. Every activated Android device is another Linux kernel running. At the same time we see a shift towards “dumber” devices which are in many ways more like thin clients of the past. Only they are not connected to your own infrastructure.

Alerted by the success of Google Apps, Microsoft has launched Office 365 to drive its own transformation from a software vendor into a cloud provider. Amazon and others have also joined the race to provide your collaboration platform. The pull of these providers is already enormous. Thanks to network effects, economies of scale, and the ability to leverage deliberate technical incompatibilities to their advantage, the drawing power of these providers is only going to increase.

Open Source has managed to catch up to the large providers in most functions, bypassing them in some and being slightly behind in others. Kolab has been essential in providing this alternative, especially where cloud-based services are concerned. Its web application is on par with Office 365 and Google Apps in usability, attractiveness and most functions, and it is the only fully Open Source alternative that offers scalability to millions of users and allows sharing of all data types in ways that are superior to what the proprietary competition has to offer.

Collaborative editing, chat, voice, video – all the forms of synchronous collaboration – are next and will be added incrementally, just as Kolab Systems will keep driving the commercial ecosystem around the solution, allowing application service providers (ASPs), institutions and users to run their own services with full professional support. And all parts of Kolab will remain Free and Open, as well as committed to the upstream, according to best Free Software principles. If you want to know what that means, please take a look at Thomas Brüderli's account of how Kolab Systems contributes to Roundcube.

TL;DR: Around 2009, Roundcube founder Thomas Brüderli was contacted by Kolab at a time when his day job left him so little time to work on Roundcube that he had toyed with the thought of just stepping back. Kolab Systems hired the primary developers of Roundcube to finish the project, contributing in the area of 95% of all code in all releases since 0.6, driving it to its 1.0 release and beyond. At the same time, Kolab Systems carefully avoided imposing itself on the Roundcube project.

From a Kolab perspective, Roundcube is the web mail component of its web application.

The way we pursued its development made sure that it could be used by any other service provider or ISV. And it was. Roundcube has seen enormous adoption, with millions of downloads, hundreds of thousands of sites, and a user count somewhere beyond the tens of millions. According to cPanel, 62% of their users choose Roundcube as their webmail application. It's been used in a wide range of other applications, including several service providers that offer mail services that are more robust against commercial and governmental spying. Everyone at Kolab considers this a great success, and finds it rewarding to see our technology contribute essential value to society in so many different ways.

But while adoption sky-rocketed, contribution did not grow in the same way. It's still Kolab Systems driving the vast majority of all code development in Roundcube, along with a small number of occasional contributors. And as a direct result of the Snowden revelations, the development of web collaboration solutions fragmented further. There are a number of proprietary approaches, which should be self-evidently disqualified from being taken seriously, based on what we have learned about how solutions get compromised. But there are also Open Source solutions.

The Free Software community has largely responded in one of two ways. Many people felt re-enforced in their opinion that people just “should not use the cloud.” Many others declared self-hosting the universal answer to everything, and started to focus on developing solutions for the crypto-hermit.

The problem with that is that it takes an all-or-nothing approach to privacy and security. It also requires users to become more technical than most of them ever wanted to be, and to give up features, convenience and ease of use as the price of privacy and security. In my view that ignores the most fundamental lesson we have learned about security over the past decades: people will work around security when they consider it necessary to get the job done. So the adoption rate of such technologies will necessarily remain limited to a very small group of users whose concerns are unusually strong.

These groups are often more exposed, more endangered, more in need of protection, and contribute to society in an unusually large way. So developing technology they can use is clearly a good thing.

It just won’t solve the problem at scale.

To do that we would need a generic web application geared towards all of tomorrow’s form factors and devices. It should be collaboration-centric and allow deployment in environments from a single user to hundreds of millions of users. It should enable meshed collaboration between sites, be fun to use, elegant and beautiful, and provide security in a way that does not get in the user’s face.

Fully Free Software, that solution should be the generic collaboration application that could become in parts or as a whole the basis for solutions such as mailpile, which focus on local machine installations using extensive cryptography, intermediate solutions such as Mail-in-a-Box, all the way to generic cloud services by providers such as cPanel or Tucows. It should integrate all forms of on-line collaboration, make use of all the advances in usability for encryption, and be able to grow as technology advances further.

That, in short, is the goal Kolab Systems has set out to achieve with its plans for Roundcube Next.

While we can and of course will pursue that goal independently in incremental steps, we believe that would mean missing two rather major opportunities. One is the opportunity to tackle this together, as a community. We have a lot of experience, a great UI/UX designer excited about the project, and many good ideas.

But we are not omniscient and we also want to use this opportunity to achieve what Roundcube 1.0 has not quite managed to accomplish: To build an active, multi-vendor community around a base technology that will be fully Open Source/Free Software and will address the collaborative web application need so well that it puts Google Apps and Office 365 to shame and provides that solution to everyone. And secondly, while incremental improvements are immensely powerful, sometimes leapfrogging innovation is what you really want.

All of that is what Roundcube Next really represents: The invitation to leapfrog all existing applications, as a community.

So if you are a user who has appreciated Roundcube in the past, or one who would like to be able to choose fully featured services that leave nothing to be desired but do not compromise your privacy and security, please contribute to pushing the fast-forward button on Roundcube Next.

And if you are an Application Service Provider, but your name is not Google, Microsoft, Amazon or Apple, Roundcube Next represents the small, strategic investment that might just put you in a position to remain competitive in the future. Become part of the advisory group and join the ongoing discussion about where to take that application, and how to make it reality, together.


Monday, 18 May 2015

Free and open WebRTC for the Fedora Community - fsfe | 17:48, Monday, 18 May 2015

In January 2014, we launched the service for the Debian community. An equivalent service has been in testing for the Fedora community at

Some key points about the Fedora service:

  • The web front-end is just HTML, CSS and JavaScript. PHP is only used for account creation, the actual WebRTC experience requires no server-side web framework, just a SIP proxy.
  • The web code is all available in a Github repository so people can extend it.
  • Anybody who can authenticate against the FedOAuth OpenID is able to get a test account immediately.
  • The server is built entirely with packages from CentOS 7 + EPEL 7, except for the SIP proxy itself. The SIP proxy is reSIProcate, which is available as a Fedora package and builds easily on RHEL / CentOS.
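The remark that reSIProcate “builds easily on RHEL / CentOS” can be verified by rebuilding the Fedora source package locally. A rough sketch using the standard yum tooling follows; the package name `resiprocate` and the exact steps are assumptions, not an official build recipe:

```shell
# Rebuild the reSIProcate SIP proxy from its source package
# on a CentOS 7 / EPEL 7 host (package name assumed).
sudo yum install -y yum-utils rpm-build    # tooling for the steps below
yumdownloader --source resiprocate         # fetch the .src.rpm
sudo yum-builddep -y resiprocate           # install build dependencies
rpmbuild --rebuild resiprocate-*.src.rpm   # binary RPMs land in ~/rpmbuild/RPMS
```

This keeps the proxy in sync with the distribution packaging instead of a hand-rolled `make install`, which makes later updates easier to manage.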

Testing it with WebRTC

Create an RTC password and then log in. Other users can call you. It is federated, so people can also call from or from

Testing it with other SIP softphones

You can use the RTC password to connect to the SIP proxy from many softphones, including Jitsi or Lumicall on Android.

Copy it

The process to replicate the server for another domain is entirely described in the Real-Time Communications Quick Start Guide.

Discuss it

The FreeRTC mailing list is a great place to discuss any issues involving this site or free RTC in general.

WebRTC opportunities expanding

Just this week, the first batch of Firefox OS televisions are hitting the market. Every one of these is a potential WebRTC client that can interact with free communications platforms.

Wednesday, 13 May 2015

Why and how to publish a plugin at

Seravo | 12:27, Wednesday, 13 May 2015

The first ever WordCamp was held in Finland on May 8th and 9th in Tampere. Many from our staff participated in the event and Seravo was also one of the sponsors.

On Friday Otto Kekäläinen gave a talk titled “Contributing to – Why you (and your company) should publish plugins at”. On Saturday he held a workshop titled “How to publish a plugin at”, and Onni Hakala held a workshop on developing with WordPress using Git, Composer, Vagrant and other great tools.



Below are the slides from these presentations and workshops:

[Embedded slides: “WordPress Contribute – Why” (2015-05-08) and “WordPress Contribute – How” (2015-05-09)]


WordCamp Workshop on modern dev tools by Onni Hakala (in Finnish)


See also our recap on WordCamp Finland 2015 in Finnish:




(Photos by Jaana Björklund)


Last chance to register for the Randa Meetings 2015

Mario Fux | 07:07, Wednesday, 13 May 2015

Konqi - the KDE mascot in the Randa Meetings edition

If you are interested in participating in this year’s Randa Meetings and want to have a chance to be financially supported to travel to Randa then the last 24 hours of the registration period just began.

So it’s now or never – or maybe next year.


Tuesday, 12 May 2015

Migrating the Mailman Wiki from Confluence to MoinMoin

Paul Boddie's Free Software-related blog » English | 19:39, Tuesday, 12 May 2015

Having recently seen an article about the closure of a project featuring that project’s usage of proprietary tools from Atlassian – specifically JIRA and Confluence – I thought I would share my own experiences from the migration of another project’s wiki site that had been using Confluence as a collaborative publishing tool.

Quite some time ago now, a call for volunteers was posted to the FSF blog, asking for people familiar with Python to help out with a migration of the Mailman Wiki from Confluence to MoinMoin. Subsequently, Barry Warsaw followed up on the developers’ mailing list for Mailman with a similar message. Unlike the project at the start of this article, GNU Mailman was (and remains) a vibrant Free Software project, but a degree of dissatisfaction with Confluence, combined with the realisation that such a project should be using, benefiting from, and contributing to Free Software tools, meant that such a migration was seen as highly desirable, if not essential.

Up and Away

Initially, things started off rather energetically, and Bradley Dean initiated the process of fact-finding around Confluence and the Mailman project’s usage of it. But within a few months, things apparently became noticeably quieter. My own involvement probably came about through seeing the ConfluenceConverter page on the MoinMoin Wiki, looking at the development efforts, and seeing if I couldn’t nudge the project along by pitching in with notes about representing Confluence markup format features in the somewhat more conventional MoinMoin wiki syntax. Indeed, it appears that my first contribution to this work occurred as early as late May 2011, but I was more or less content to let the project participants get on with their efforts to understand how Confluence represents its data, how Confluence exposes resources on a wiki, and so on.

But after a while, it occurred to me that the volunteers probably had other things to do and that progress had largely stalled. Although there wasn’t very much code available to perform concrete migration-related tasks, Bradley had gained some solid experience with the XML format employed by exported Confluence data. Combined with my own day-job experience of dealing with very large XML files, this suggested an approach that had worked rather well with such files: extracting the essential information, including the identifiers and references that communicate the actual structure of the information, as opposed to the hierarchical structure of the XML data itself. With the data available in a more concise and flexible form, it can then be processed in a more convenient fashion, and within a few weeks I had something ready to play with.
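The extraction step described above can be sketched in a few lines of Python. This is only an illustration of the streaming technique, not the actual migration code: the element and attribute names (`object`, `id`, `class`, `ref`, `target`) are hypothetical stand-ins for the real Confluence export schema.

```python
import xml.etree.ElementTree as ET

def extract_records(source):
    """Stream through a large XML export, keeping only identifiers and
    references rather than the full element hierarchy."""
    records = []
    for event, elem in ET.iterparse(source, events=("end",)):
        if elem.tag == "object":
            records.append({
                "id": elem.get("id"),
                "class": elem.get("class"),
                "refs": [ref.get("target") for ref in elem.findall("ref")],
            })
            elem.clear()  # free the memory held by the processed subtree
    return records
```

Because `iterparse` visits elements as they are completed and each subtree is cleared once recorded, memory use stays roughly constant even for exports far larger than RAM, and the resulting flat records are much easier to process than the raw XML tree.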

With a day job and other commitments, it isn’t usually possible to prioritise volunteer projects like this, and I soon discovered that some other factors were involved: technological progress, and the tendency for proprietary software and services to be upgraded. What had initially involved the conversion of textual content from one markup format to another now seemed to involve the conversion from two rather different markup formats. All the effort documenting the original Confluence format now seemed to be almost peripheral if not superfluous: any current content on the Mailman Wiki would now be in a completely different format. And volunteer energy seemed to have run out.

A Revival

Time passed. And then the Mailman developers noticed that the Confluence upgrade had made the wiki situation even less bearable (as indeed other Confluence users had noticed and complained about), and that the benefits of such a solution were being outweighed by the inconveniences of the platform. And it was at this point that I realised that it was worthwhile continuing the migration effort: it is bad enough that people feel constrained by a proprietary platform over which they have little control, but it is even worse when it appears that they will have to abandon their content and start over with little or no benefit from all the hard work they have invested in creating and maintaining that content in the first place.

And with that, I started the long process of trying to support not only both markup formats, but also all the features likely to have been used by the Mailman project and those using its wiki. Some might claim that Confluence is “powerful” by supporting a multitude of seemingly exotic features (page relationships, comments, “spaces”, blogs, as well as various kinds of extensions), but many of these features are rarely used or never actually used at all. Meanwhile, as many migration projects can attest, if one inadvertently omits some minor feature that someone regards as essential, one risks never hearing the end of it, especially if the affected users have been soaking up the propaganda from their favourite proprietary vendor (which was thankfully never a factor in this particular situation).

Despite the “long tail” of feature support, we were able to end 2012 with some kind of overview of the scope of the remaining work. And once again I was able to persuade the concerned parties that we should focus on MoinMoin 1.x not 2.x, which has proved to be the correct decision given the still-ongoing status of the latter even now in 2015. Of course, I didn’t at that point anticipate how much longer the project would take…


Over the next few months, I found time to do more work and to keep the Mailman development community informed again and again, which is a seemingly minor aspect of such efforts but is essential to reassure people that things really are happening: the Mailman community had, in fact, forgotten about the separate mailing list for this project long before activity on it had subsided. One benefit of this was to get feedback on how things were looking as each iteration of the converted content was made available, and with something concrete to look at, people tend to remember things that matter to them that they wouldn’t otherwise think of in any abstract discussion about the content.

In such processes, other things tend to emerge that initially aren’t priorities but which have to be dealt with eventually. One of the stated objectives was to have a full history, meaning that all the edits made to the original content would need to be preserved, and for an authentic record, these edits would need to preserve both timestamp and author information. This introduced complications around the import of converted content – it being no longer sufficient to “replay” edits and have them assume the timestamp of the moment they were added to the new wiki – as well as the migration and management of user profiles. Particularly this latter area posed a problem: the exported data from Confluence only contained page (and related) content, not user profile information.

Now, one might not have expected user details to be exportable anyway due to potential security issues with people having sufficient privileges to request a data dump directly from Confluence and then to be able to obtain potentially sensitive information about other users, but this presented another challenge where the migration of an entire site is concerned. On this matter, a very pragmatic approach was taken: any user profile pages (of which there were thankfully very few) were retrieved directly over the Web from the existing site; the existence of user profiles was deduced from the author metadata present in the actual exported wiki content. Since we would be asking existing users to re-enable their accounts on the new wiki once it became active, and since we would be avoiding the migration of spammer accounts, this approach seemed to be a reasonable compromise between convenience and completeness.

Getting There

By November 2013, the end was in sight, with coverage of various “actions” supported by Confluence also supported in the migrated wiki. Such actions are a good example of how things that are on the edges of a migration can demand significant amounts of time. For instance, Confluence supports a PDF export action, and although one might suggest that people just print a page to file from their browser, choosing PDF as the output format, there are reasonable arguments to be made that a direct export might also be desirable. Thus, after a brief survey of existing options for MoinMoin, I decided it would be useful to provide one myself. The conversion of Confluence content had also necessitated the use of more expressive table syntax. Had I not been sufficiently interested in implementing improved table facilities in MoinMoin prior to this work, I would have needed to invest quite a bit of effort in this seemingly peripheral activity.

Again, time passed. Much of the progress occurred off-list at this point. In fact, a degree of confusion, miscommunication and elements of other factors conspired to delay the availability of the infrastructure on which the new wiki would be deployed. Already in October 2013 there had been agreement about hosting within the infrastructure, but the matter seemed to stall despite Barry Warsaw trying to push it along in February and April 2014. Eventually, after complaining from me on the PSF members’ mailing list at the end of May, some motion occurred on the matter and in July the task of provisioning the necessary resources began.

After returning from a long vacation in August, the task of performing the final migration and actually deploying the content could finally begin. Here, I was able to rely on expert help from Mark Sapiro who not only checked the results of the migration thoroughly, but also configured various aspects of the mail system functionality (one of the benefits of participating in a mail-oriented project, I guess), and even enhanced my code to provide features that I had overlooked. By September, we were already playing with the migrated content and discussing how the site would be administered and protected from spam and vandalism. By October, Barry was already confident enough to pre-announce the migrated site!

At Long Last

Alas, things stalled again for a while, perhaps due to other commitments of some of the volunteers needed to make the final transition occur, but in January the new Mailman Wiki was finally announced. But things didn’t stop there. One long-standing volunteer, Jim Tittsler, decided that the visual theme of the new wiki would be improved if it were made to match the other Mailman Web resources, and so he went and figured out how to make a MoinMoin theme to do the job! The new wiki just wouldn’t look as good, despite all the migrated content and the familiarity of MoinMoin, if it weren’t for the special theme that Jim put together.

The all-new Mailman Wiki with Jim Tittsler's theme

There have been a few things to deal with after deploying the new wiki. Spam and vandalism have not been a problem because we have implemented a very strict editing policy where people have to request editing access. However, this does not prevent people from registering accounts, even if they never get to use them to do anything. To deal with this, we enabled textcha support for new account registrations, and we also enabled e-mail verification of new accounts. As a result, the considerable volume of new user profiles that were being created (potentially hundreds every hour) has been more or less eliminated.
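For reference, textcha support in MoinMoin 1.x is enabled in the wiki’s Python configuration. The following is a minimal sketch of the relevant `wikiconfig.py` fragment; the questions and the group name are invented examples, and the exact option set should be checked against the MoinMoin version in use:

```python
# Fragment of a MoinMoin 1.x wikiconfig.py (inside the Config class).
# Question/answer challenges shown on forms such as account
# registration; the answers are matched as regular expressions.
textchas = {
    'en': {
        u"Which mailing list manager does this wiki document?": ur"(?i)mailman",
        u"What is the sum of 3 and 4?": ur"7",
    },
}
# Members of this (hypothetical) group are exempt from textchas.
textchas_disabled_group = u"TrustedGroup"
```

Even a handful of site-specific questions like these defeats generic registration bots, which is consistent with the drop in bogus accounts described above.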

It has to be said that throughout the process, once it got started in earnest, the Mailman development community has been fantastic, with constructive feedback and encouragement throughout. I have had disappointing things to say about the experience of being a volunteer with regard to certain projects and initiatives, but the Mailman project is not that kind of project. Within the limits of their powers, the Mailman custodians have done their best to enable this work and to see it through to the end.

Some Lessons

I am sure that offers of “for free” usage of certain proprietary tools and services are made in a genuinely generous way by companies like Atlassian who presumably feel that they are helping to make Free Software developers more productive. And I can only say that those interactions I experienced with Contegix, who were responsible for hosting the Confluence instance through which the old Mailman Wiki was deployed, were both constructive and polite. Nevertheless, proprietary solutions are ultimately disempowering: they take away the control over the working environment that users and developers need to have; they direct improvement efforts towards themselves and away from Free Software solutions; they also serve as a means of dissuading people from adopting competing Free Software products by giving an indication that only they can meet the rigorous demands of the activity concerned.

I saw a position in the Norwegian public sector not so long ago for someone who would manage and enhance a Confluence installation. While it is not for me to dictate the tools people choose to do their work, it seems to me that such effort would be better spent enhancing Free Software products and infrastructure instead of remedying the deficiencies of a tool over which the customer ultimately has no control, to which the customer is bound, and where the expertise being cultivated is relevant only to a single product for as long as that product is kept viable by its vendor. Such strategic mistakes occur all too frequently in the Norwegian public sector, with its infatuation with proprietary products and services, but those of us not constrained by such habits can make better choices when choosing tools for our own endeavours.

I encourage everyone to support Free Software tools when choosing solutions for your projects. Such tools may not at first offer precisely the features you are looking for, and you might be tempted to accept an offer of a “for free” product or to use a no-cost “cloud” service; such things can appear to offer an easier path when you are otherwise confronted with a choice of hosting solutions and deployment issues. But there are whole communities out there who can offer advice and will support their Free Software project, and there are Free Software organisations who will help you deploy your choice of tools, perhaps even having them ready to use as part of their existing infrastructure.

In the end, by embracing Free Software, you get the control you need over your content in order to manage it sustainably. Surely that is better than having some random company in charge, with the ever-present risk of them one day deciding to discontinue their service and/or, with barely enough notice, discard your data.
