Planet Fellowship (en)

Friday, 31 July 2015

Interview with Fellow Neil McGovern

FSFE Fellowship Interviews | 16:05, Friday, 31 July 2015

Neil McGovern

Neil McGovern is a Fellow of the FSFE from the United Kingdom and was recently elected as Debian Project Leader, starting his term of office in April. He has previously participated in local government and has served on the board of the Open Rights Group: a digital rights organisation operating in the UK.

Paul Boddie: Congratulations on your recent election as Debian Project Leader (DPL)! Looking at your election platform, you have been engaged in a number of different activities for quite some time. How do you manage to have time for everything you want to do? Is your employer supportive of your new role or does your free time bear most of the burden?

Neil McGovern: I’d say it’s a mix of both. My employer, Collabora, is hugely supportive of the role, of Debian and of Free Software in general. However, being DPL isn’t just a 9 to 5 job – the timezones that all our contributors work in mean that there’s always work to be done.

Paul Boddie: You appear to be fortunate enough to work for an employer that promotes Free Software solutions. For many people interested in Free Software who have to squeeze that interest in around their job, that sounds like the perfect opportunity to combine your own interests with your professional objectives. But what started you off in the direction of Free Software and your current position? And, once on the path, did you deliberately seek out Free Software opportunities, or was it just a happy accident that you ended up where you are today?

Neil McGovern: My first exposure to free software was from a friend at secondary school who started selling CDs of Linux distributions. He initially introduced me to the concept. Before that, I’d mostly used Mac OS, in the olden days before OS X came along. When I went to university to study computer science, I joined the University of Sheffield IT Committee. At the time, there weren’t any facilities offered for students to host web pages, so the committee ran its own system, originally on Mandrake. In my second year, I moved in with a housemate who was a Debian Developer, and I started packaging a client for LiveJournal called Drivel.

Since then, I guess it’s less that I’ve sought out opportunities and more that the opportunities out there have been very much geared towards people who understand Free Software well and can help with them. My current job, however, is about much more than just using and developing Free Software – it’s about enabling companies to use Free Software successfully, both for the immediate gain and to make sure that they understand the benefits of contributing back. A pretty ideal job for a Free Software enthusiast!

Paul Boddie: Your DPL platform states that you intend to “support and enable” the volunteers that get the work of Debian done. One of the challenges of volunteer-run organisations is that of keeping people happy and productive, knowing that they can walk away at any time. What lessons from your history of involvement with Debian and in other areas can you share with us about keeping volunteers happy, productive and, crucially, on board?

Neil McGovern: I think the key issue is communication. You need to make sure that you actively listen to people and understand their viewpoint. Given the globally distributed nature of Debian, it’s easy for people to have disagreements – remembering that another human is at the other end of an email address isn’t the easiest thing in the world. Face-to-face meetings and conferences are essential for countering this – every year when I go to DebConf, I come back reinvigorated to continue working on Debian. :)

Paul Boddie: Especially in recent years, there has been a lot of discussion about Free Software solutions and platforms losing out to proprietary rivals, with special attention given to things like smartphones, “app” marketplaces, and so on. That some of these proprietary offerings build on Free Software makes the situation perhaps even more unpalatable. How do you see Free Software in general, and Debian in particular, having more of a visible role to play in delivering these solutions and services all the way to the end-user and perhaps getting more of the credit?

Neil McGovern: The key issue is trust – when Debian distributes a package, you know that it’s met various quality and stability standards. There’s a risk in moving to an entire container based model that people will simply download random applications from the internet. If a security problem is found in a shared library in Debian, we can fix it once. If that library is embedded in hundreds of different ‘apps’, then they’ll all need fixing independently. This would certainly be a challenge to overcome. Mind you, in our latest release we had over 45,000 binary packages, so I don’t think that there’s a lack of choice of software in Debian!

Debian logo

Paul Boddie: I see you were involved in local government for a while. Would you say that interacting with the Free Software and Debian communities led you to explore a broader political role, or did your political beliefs lead you instead to Debian and Free Software?

Neil McGovern: Well, secretly, the real reason I got involved in politics was that I had had quite a few beers in a local pub with some friends for a 30th birthday, and one of them asked if I wanted to get involved. The next day I woke up with a hangover, and a knocking on the door with said friend holding a bundle of leaflets for me to deliver. :) I don’t think it’s really that one led to another, but more that it stems from my desire to try and help people; be it through representing constituents, or helping create software that everyone can use for free.

Paul Boddie: One of the articles on your blog deals with an awkward situation in local government where you felt you had to support the renewal of proprietary software licences, specifically Microsoft Office. Given your interests in Free Software and open standards, this must have been a bittersweet moment, knowing that the local bureaucracy had fallen into the trap of macros and customisations that make migration to open standards and interoperable products very challenging. Was this a case of knowing which battles are the ones that are worth fighting? And how do you see organisations escaping from these vicious cycles of vendor lock-in? Do you perceive an appetite in public institutions for embracing interoperability and choice or do people just accept the “treadmill” of upgrades as inevitable, perhaps as not even perceiving it as a problem at all?

Neil McGovern: It wasn’t the most pleasant decision I’ve had to make, but it was a case of weighing up what was available and what wasn’t. Since then, ODF-supporting programs have improved greatly, and there’s even commercial support for these. (Disclosure: my company is one of these now.) Additionally, the UK Government announced that it would use ODF as the standard for distributing documents, which is a big win, so I think there is a change that’s happening – Free Software is something that is now being recognised as a real force compared to 5 years ago.

Paul Boddie: A lot of attention has been focused on the next generation of software developers, particularly in the UK education sector, with initiatives like the Raspberry Pi and BBC “Micro Bit” as well as a mainstream awareness of “apps” and “app” development. Do you think there might be a risk of people becoming tempted by arenas of activity where the lessons of Free Software are not being taught or learned, where the vendor’s products are the focus, and where people no longer experience or understand a sustainable and independent community like Debian? Is computing at risk of being dragged back to an updated version of the classic 1980s consumer-producer relationship? Or worse: a rehashed version of something like the “walled garden” networked computing visions of Apple and Microsoft from the early 1990s where the vendor even sets the terms of participation and interaction?

Neil McGovern: There’s certainly a renewed focus on computing education in the UK, but that’s mostly because it’s been so poor for the past 15 or so years! We’ve been teaching students how to use a spreadsheet or a word processor in the guise of ICT, but no effort has gone into actual computing. Thankfully, this is actually changing. I do have a sneaking suspicion that the focus on “apps” is a civil servant somewhere thinking “Kids like apps, right? And they can sell them and everything, so that’s good. Apps! Teach them to make apps!”

Paul Boddie: Finally, noticing your connections with Cambridge, and having an appreciation myself for the role of the Cambridge area and its university in founding technology companies directly or indirectly, I cannot help but ask about the local attitudes to things like Free Software, open standards, and notions of openness in general. Once upon a time, there seemed to be a degree of remorse that the rather proprietary microcomputing-era companies had failed to dominate in the marketplace, and this led to companies like ARM that have done quite well licensing their technologies, albeit in a restrictive way. Do you sense that the Cambridge technology scene has been (or become) more accepting of notions of openness and freedoms around software and technology? Or are there still prominent local opinions about the need to make money by routing around such freedoms? How do you view your involvement in Debian and the Open Rights Group as a way of bringing about changes in attitudes and in wider society towards such issues?

Neil McGovern: Nothing really opened my eyes to the importance of Debian until I turned up to my running group and had approximately six people offer to buy me a pint because they’d heard I’d been elected DPL. I’m not sure that would have happened in many other cities. I do think it is a reflection of the mainstreaming of Free Software within large companies. We’re now seeing that not only is Free Software being accepted, but experience with it is seen as an advantage. This is perhaps best highlighted by Microsoft throwing a birthday party for the release of Debian 8, a sight I never thought I’d see.

Paul Boddie is a Free Software developer currently residing in Norway, cultivating interests in open hardware, photography and retrocomputing. He joined the FSFE in 2008 and occasionally publishes his own opinions on his blog.

Thursday, 30 July 2015

MOOC about Free Software

Being Fellow #952 of FSFE » English | 23:22, Thursday, 30 July 2015

It’s been a few months already since Vitaly Repin pointed us to a project of his – a MOOC about Free Software – and I still haven’t mentioned it here.

As they realized that most people are not aware of the complexities of the digital world we live in, the idea arose of creating a MOOC (Massive Open Online Course) dedicated to these important questions. Richard Stallman liked it and suggested re-using parts of his video recordings to create the course. This was done, but some of the parts had to be redone; they made new recordings in Helsinki for this, as well as the intro video for the course.

I haven’t seen much of it yet, but I can already tell that they put a lot of effort into getting it done.

So, on May 4, 2015 (May the Fourth be with you!) the course was released at Eliademy. The course contains videos, quizzes and forum discussions. Its contents are released under CC BY-NC-SA.

There’s already been some interesting discussion on FSFE’s mailing list about platforms like Eliademy. However, regardless of the platform where it is published, the videos are released under CC BY-ND license and the material is also available on Vitaly’s personal website.

I’m personally more concerned about the ND clause in the videos and the NC clause for the content than any proprietary platform that may use the content.

Vitaly plans to publish it in the Common Cartridge format soon, now that the first course iteration has been completed. This format is supported by various LMSs (e.g., Moodle) and will allow local teachers to use the course materials in their own educational activities.

He would be more than happy if anybody decided to use the videos they made on other platforms and spread the word about this course. And there is much more you can do: provide feedback on the content, contribute translations, improve it, add quizzes, create subtitles, etc.


Free Real-time Communications (RTC) at DebConf15, Heidelberg - fsfe | 09:23, Thursday, 30 July 2015

The DebConf team have just published the first list of events scheduled for DebConf15 in Heidelberg, Germany, from 15 - 22 August 2015.

There are two specific events related to free real-time communications and a wide range of other events related to more general topics of encryption and privacy.

15 August, 17:00, Free Communications with Free Software (as part of the DebConf open weekend)

The first weekend of DebConf15 is an open weekend aimed at a wider audience than the traditional DebConf agenda. The open weekend includes some keynote speakers, a job fair and various other events on the first Saturday and Sunday.

The RTC talk will look at what solutions exist for free and autonomous voice and video communications using free software and open standards such as SIP, XMPP and WebRTC as well as some of the alternative peer-to-peer communications technologies that are emerging. The talk will also look at the pervasive nature of communications software and why success in free RTC is so vital to the health of the free software ecosystem at large.

17 August, 17:00, Challenges and Opportunities for free real-time communications

This will be a more interactive session; people are invited to come and talk about their experiences and the problems they have faced deploying RTC solutions for professional or personal use. We will try to look at some RTC/VoIP troubleshooting techniques as well as more high-level strategies for improving the situation.

Try the Debian and Fedora RTC portals

Have you registered for the Debian RTC portal yet? It can successfully make federated SIP calls with users of other domains, including Fedora community members trying the Fedora portal.

You can use the portal for regular SIP (with clients like Empathy, Jitsi or Lumicall) or WebRTC.

Can't get to DebConf15?

If you can't get to Heidelberg, you can watch the events on the live streaming service and ask questions over IRC.

To find out more about deploying RTC, please see the RTC Quick Start Guide.

Did you know?

Don't confuse Heidelberg, Germany with Heidelberg in Melbourne, Australia. Heidelberg down under was the site of the athletes' village for the 1956 Olympic Games.

Saturday, 25 July 2015

Announcing WikiFM

Riccardo (ruphy) Iaconelli - blog | 13:45, Saturday, 25 July 2015

Announcing WikiFM!

Earlier today I gave a talk at Akademy 2015 about WikiFM. Videos of the talk should become available shortly. Based on the feedback I received during and after the talk, I have written a short summary of the points that raised the most interest. It is aimed at the general KDE developer community, which doesn’t seem completely aware of the project and its scope.

You can find my slides here (without some SVG images).

What is WikiFM?

The WikiFM Logo

WikiFM is a KDE project which aims to bring free and open knowledge to the world in the form of textbooks and course notes. It aims to train students, researchers and lifelong learners with manuals and content ranging from basic calculus to “Designing with QML”. We want to revolutionize the way higher education is created and delivered.

What does it offer more than $randomproject?

The union of these three key elements: students, collaboration in the open, and technology. This combination has proven invaluable for creating massive amounts of high-quality content.
All other projects usually feature just two of these elements, or concentrate on other material (e.g. video).

Additionally, we have virtual machines that can be instantiated on the fly, on which you can start developing immediately: check them out (for logged-in users). Opening the link instantiates a machine in the blink of an eye, with all the software you need pre-installed. We support compositing and OpenGL. Your home directory persists across reboots and you always get a friendly hacking environment. Try it out yourself to properly appreciate it. ;-)

Is it already used somewhere? Do you have some success stories?

The project started in Italy for personal use. In spite of this, in just a few months we gained national visibility and thousands of extremely high-quality pages. Students from other universities started to use the website, and despite the content not being planned for dissemination, we get around 200 unique users per day.

In addition to this, the High Energy Physics Software Foundation (a scientific group created among key people at institutions such as CERN, Fermilab, Princeton University, the Stanford Linear Accelerator, …) has decided to use WikiFM for its official training.

Moreover, we have been invited to CERN, Fermilab and universities in Santiago (Chile) to deliver seminars about the project.

How can this help my KDE $existing_application if I am not a student?

This fits in with the idea of the Evolving KDE project that started at this year’s Akademy.

Hosting developer documentation together with a pre-built development environment that lets library users and students test your technology and start hacking within a few seconds is an invaluable feature. It is possible to demonstrate features or provide complicated tutorial showcases while giving people the option of trying out the code immediately, without having to perform complicated procedures or wait for big downloads to finish.

For existing developers it also provides a clean development environment, where testing of new applications can happen without hassle.

Want to know more?

This is meant to be a brief list just to give you a taste of what we are doing.

I am at Akademy 2015 until the 29th. A strong encouragement: please come and speak to me in person! :-)

I will be happy to answer any questions and, if you like, give you a shortened version of my talk.
Or, if you prefer, we are having a BoF on Monday at 11:30, in room 2.0a.

Thursday, 23 July 2015

Unpaid work training Google's spam filters - fsfe | 08:49, Thursday, 23 July 2015

This week, there has been increased discussion about the pain of spam filtering by large companies, especially Google.

It started with Google's announcement that they are offering a service for email senders to know if their messages are wrongly classified as spam. Two particular things caught my attention: the statement that less than 0.05% of genuine email goes to the spam folder by mistake and the statement that this new tool to understand misclassification is only available to "help qualified high-volume senders".

From there, discussion has proceeded with Linus Torvalds blogging about his own experience of Google misclassifying patches from Linux contributors as spam and that has been widely reported in places like Slashdot and The Register.

Personally, I've observed much the same thing from the other perspective. While Torvalds complains that he isn't receiving email, I've observed that my own emails are not always received when the recipient is a Gmail address.

It seems that Google expects their users to work a little bit every day, going through every message in the spam folder and explicitly clicking the "Not Spam" button:

so that Google can improve their proprietary algorithms for classifying mail. If you just read or reply to a message in the folder without clicking the button, or if you don't do this for every message, including mailing list posts and other trivial notifications that are not actually spam, more important messages from the same senders will also continue to be misclassified.

If you are not willing to volunteer your time to do this, or if you are simply one of those people who has better things to do, Google's Gmail service is going to have a corrosive effect on your relationships.

A few months ago, we visited Australia and I sent emails to many people who I wanted to catch up with, including invitations to a family event. Some people received the emails in their inboxes, yet other people didn't see them because the systems at Google (and other companies, notably Hotmail) put them in a spam folder. The rate at which this appeared to happen was definitely higher than the 0.05% quoted in the Google article above. Maybe the Google spam filters noticed that I hadn't sent email to some members of the extended family for a long time and this triggered the spam algorithm? Yet it was precisely while visiting Australia that email needed to work reliably with that type of contact, as we don't fly out there every year.

A little bit earlier in the year, I was corresponding with a few students who were applying for Google Summer of Code. Some of them observed the same thing: they sent me an email and didn't receive my response until they looked in their spam folder a few days later. Last year, a GSoC mentor I know lost track of a student for over a week because Google silently discarded chat messages, so it appears Google has not just shot themselves in the foot, they managed to shoot their foot twice.

What is remarkable is that in both cases, the email problems and the XMPP problems, Google doesn't send any error back to the sender so that they know their message didn't get through. Instead, it is silently discarded or left in a spam folder. This is the most corrosive form of communication problem as more time can pass before anybody realizes that something went wrong. After it happens a few times, people lose a lot of confidence in the technology itself and try other means of communication which may be more expensive, more synchronous and time intensive or less private.

When I discussed these issues with friends, some people replied by telling me I should send them things through Facebook or WhatsApp, but each of those services has a higher privacy cost and there are also many other people who don't use either of those services. This tends to fragment communications even more as people who use Facebook end up communicating with other people who use Facebook and excluding all the people who don't have time for Facebook. On top of that, it creates more tedious effort going to three or four different places to check for messages.

Despite all of this, the suggestion that Google's only response is to build a service to "help qualified high-volume senders" get their messages through leaves me feeling that things will get worse before they start to get better. There is no mention in the Google announcement about what they will offer to help the average person eliminate these problems, other than to stop using Gmail or spend unpaid time meticulously training the Google spam filter and hoping everybody else does the same thing.

Some more observations on the issue

Many spam filtering programs used in corporate networks, such as SpamAssassin, add headers to each email to suggest why it was classified as spam. Google's systems don't appear to give any such feedback to their users or message senders though, just a very basic set of recommendations for running a mail server.
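For readers who haven't seen this kind of feedback before, here is a minimal sketch (Python, standard library only) of reading SpamAssassin's `X-Spam-Status` header from a stored message. The sample header below follows SpamAssassin's usual format; the function name is just illustrative:

```python
import email
from email import policy

def spam_report(raw_message: bytes):
    """Return SpamAssassin's verdict, score and triggered tests, if present."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    status = msg.get("X-Spam-Status")
    if status is None:
        return None  # no filter feedback at all -- the situation Gmail leaves you in
    # Typical header: "Yes, score=7.2 required=5.0 tests=BAYES_99,HTML_ONLY"
    verdict = status.split(",", 1)[0].strip()
    fields = dict(part.split("=", 1) for part in status.split() if "=" in part)
    return {"spam": verdict.lower() == "yes",
            "score": float(fields.get("score", "nan")),
            "tests": fields.get("tests", "").split(",")}

raw = (b"From:\r\n"
       b"X-Spam-Status: Yes, score=7.2 required=5.0 tests=BAYES_99,HTML_ONLY\r\n"
       b"Subject: hello\r\n\r\nbody\r\n")
print(spam_report(raw))
```

The point is that the sender or recipient can see *why* a message was flagged and adjust; Gmail exposes nothing comparable.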

Many chat protocols work with an explicit opt-in. Before you can exchange messages with somebody, you must add each other to your buddy lists. Once you do this, virtually all messages get through without filtering. Could this concept be adapted to email, maybe giving users a summary of messages from people they don't have in their contact list and asking them to explicitly accept or reject each contact?

If a message spends more than a week in the spam folder and Google detects that the user isn't ever looking in the spam folder, should Google send a bounce message back to the sender to indicate that Google refused to deliver it to the inbox?

I've personally heard that misclassification occurs with mailing list posts as well as private messages.

Recording live events like a pro (part 1: audio) - fsfe | 07:14, Thursday, 23 July 2015

Whether it is a technical talk at a conference, a political rally or a budget-conscious wedding, many people now have most of the technology they need to record it and post-process the recording themselves.

For most events, audio is an essential part of the recording. There are exceptions: if you take many short clips from a wedding and mix them together, you could leave out the audio and just dub the couple's favourite song over it all. For a video of a conference presentation, though, the speaker's voice is essential.

These days, it is relatively easy to get extremely high quality audio using a lapel microphone attached to a smartphone. Let's have a closer look at the details.

Using a lavalier / lapel microphone

Full wireless microphone kits with microphone, transmitter and receiver are usually $US500 or more.

The lavalier / lapel microphone by itself, however, is relatively cheap, under $US100.

The lapel microphone is usually an omnidirectional microphone that will pick up the voices of everybody within a couple of meters of the person wearing it. It is useful for a speaker at an event, some types of interviews where the participants are at a table together and it may be suitable for a wedding, although you may want to remember to remove it from clothing during the photos.

There are two key features you need when using such a microphone with a smartphone:

  • TRRS connector (this is the type of socket most phones and many laptops have today)
  • Microphone impedance should be at least 1kΩ (that is one kilo Ohm) or the phone may not recognize when it is connected

Many leading microphone vendors have released lapel mics with these two features aimed specifically at smartphone users. I have personally been testing the Rode smartLav+.

Choice of phone

There are almost 10,000 varieties of smartphone just running Android, as well as iPhones, Blackberries and others. It is not practical for most people to test them all and compare audio recording quality.

It is probably best to test the phone you have and ask some friends if you can make test recordings with their phones too for comparison. You may not hear any difference but if one of the phones has a poor recording quality you will hopefully notice that and exclude it from further consideration.

A particularly important issue is being able to disable AGC in the phone. Android has a standard API for disabling AGC but not all phones or Android variations respect this instruction.

I have personally had positive experiences recording audio with a Samsung Galaxy Note III.

Choice of recording app

Most Android distributions have at least one pre-installed sound recording app. Look more closely and you will find that not all apps are the same. For example, some of the apps have aggressive compression settings that compromise recording quality. Others don't work when you turn off the screen of your phone and put it in your pocket. I've even tried a few that crashed intermittently.

The app I found most successful so far has been Diktofon, which is available on both F-Droid and Google Play. Diktofon has been designed not just for recording, but it also has some specific features for transcribing audio (currently only supporting Estonian) and organizing and indexing the text. I haven't used those features myself but they don't appear to cause any inconvenience for people who simply want to use it as a stable recording app.

As the app is completely free software, you can modify the source code if necessary. I recently contributed patches enabling 48kHz recording and disabling AGC. The version with these fixes has just been released and appears on F-Droid, but it has not yet been uploaded to Google Play. The fixes are in version 0.9.83; you need to go into the settings to make sure AGC is disabled and set the 48kHz sample rate.

Whatever app you choose, the following settings are recommended:

  • 16 bit or greater sample size
  • 48kHz sample rate
  • Disable AGC
  • WAV file format

Whatever app you choose, test it thoroughly with your phone and microphone. Make sure it works even when you turn off the screen and put it in your pocket while wearing the lapel mic for an hour. Observe the battery usage.


Now let's say you are recording a wedding and the groom has that smartphone in his pocket and the mic on his collar somewhere. What is the probability that some telemarketer calls just as the couple are exchanging vows? What is the impact on the recording?

Maybe some apps will automatically put the phone in silent mode when recording. More likely, you need to remember this yourself. These are things that are well worth testing though.

Also keep in mind the need to have sufficient storage space and to check whether the app you use is writing to your SD card or internal memory. The battery is another consideration.

At a large event where smartphones are being used instead of wireless microphones, possibly for many talks in parallel, install a monitoring app like Ganglia on the phones to detect and alert if any phone has a weak wifi signal, a low battery or a lack of memory.
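Even without a full monitoring stack, the same vitals can be polled over `adb shell dumpsys battery`. A small sketch of parsing that output in Python; the sample string mimics the real `dumpsys battery` format, and the 30% alert threshold is an arbitrary choice for this example:

```python
def battery_level(dumpsys_output: str) -> int:
    """Extract the charge percentage from `adb shell dumpsys battery` output."""
    for line in dumpsys_output.splitlines():
        line = line.strip()
        if line.startswith("level:"):
            return int(line.split(":", 1)[1])
    raise ValueError("no battery level found")

# Sample output in the format `dumpsys battery` prints:
sample = """Current Battery Service state:
  AC powered: false
  level: 23
  scale: 100"""

level = battery_level(sample)
if level < 30:  # threshold chosen arbitrarily for this sketch
    print(f"ALERT: phone battery at {level}%")
```

A cron job on a laptop at the venue could run this against every phone and page the organisers before a recording dies mid-talk.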

Live broadcasts and streaming

Some time ago I tested RTP multicasting from Lumicall on Android. This type of app would enable a complete wireless microphone setup with live streaming to the internet at a fraction of the cost of a traditional wireless microphone kit. This type of live broadcast could also be done with WebRTC on the Firefox app.


If you research the topic thoroughly and spend some time practicing and testing your equipment, you can make great audio recordings with a smartphone and an inexpensive lapel microphone.

In subsequent blogs, I'll look at tips for recording video and doing post-production with free software.

Monday, 20 July 2015

RTC status on Debian, Ubuntu and Fedora - fsfe | 14:04, Monday, 20 July 2015

Zoltan (Zoltanh721) recently blogged about WebRTC for the Fedora community and the Fedora desktop. The Fedora RTC portal has been running for a while now, and this has given many people a chance to get a taste of regular SIP and WebRTC-based SIP. As suggested in Zoltan's blog, it has convenient integration with Fedora SSO, and as the source code is available, people are welcome to see how it was built and use it for other projects.

Issues with Chrome/Chromium on Linux

If you tried any of these services using Chrome/Chromium on Linux, you may have found that the call appears to be connected but there is no media. This is a bug, and the Chromium developers are on to it. You can work around it by trying an older version of Chromium (it still works with v37 from Debian wheezy) or Firefox/Iceweasel.

WebRTC is not everything

WebRTC offers many great possibilities for people to quickly build and deploy RTC services to a large user base, especially when using components like JSCommunicator or the DruCall WebRTC plugin for Drupal.

However, it is not a silver bullet. For example, there remain concerns about how to receive incoming calls. How do you know which browser tab is ringing when you have many tabs open at once? This may require greater browser/desktop integration and that has security implications for JavaScript. Whether users on battery-powered devices can really leave JavaScript running for extended periods of time waiting for incoming calls is another issue, especially when you consider that many web sites contain some JavaScript that is less than efficient.

Native applications and mobile apps like Lumicall continue to offer the most optimized solution for each platform, although WebRTC currently offers the most convenient way for people to place a "Call me" link on their web site or portal.

Deploy it yourself

The RTC Quick Start Guide offers step-by-step instructions and a thorough discussion of the architecture for people to start deploying RTC and WebRTC on their own servers using standard packages on many of the most popular Linux distributions, including Debian, Ubuntu, RHEL, CentOS and Fedora.

My interview for the keynote at Akademy published

I LOVE IT HERE » English | 07:18, Monday, 20 July 2015

I have been invited to give a keynote at KDE’s Akademy on Saturday 25 July. In preparation for the conference, Devaja Shah interviewed me, and his questions made me look up some things in my old mail archives from the early 2000s.

The interview covers questions about my first GNU/Linux distribution, why I studied politics and management, how I got involved in FSFE, how Free Software is linked to the progress of society, my involvement in wilderness first aid seminars, as well as my favourite music. (Thanks to Victorhck who translated the interview into Spanish and also added corresponding videos.)

I am looking forward to interesting discussions with KDE contributors and the local organisers from GPUL during the weekend.

Wednesday, 15 July 2015

We don’t use Free Software, we want something that just works!

Being Fellow #952 of FSFE » English | 09:50, Wednesday, 15 July 2015

Joinup reports: Using Free Software in school greatly reduces the time needed to troubleshoot PCs

After migrating to Free Software in the Augustinian College of León, Spain: “For teachers and staff, the amount of technical issues decreased by 63 per cent and in the school’s computer labs by 90 per cent.” (emphasis added)

Good to have something to refer to when I hear: “We don’t have the time to fiddle with this, we need a solution that ‘just works’.”

Further down in the article, they describe a working solution for their whiteboards, and document incompatibilities are mentioned. This is more proof that raising awareness about open standards beyond Document Freedom Day is really important. BTW, do you already have something planned for next year’s DFD? It should be March 30, 2016.


Galicia introducing over 50 000 students to free software

Being Fellow #952 of FSFE » English | 06:59, Wednesday, 15 July 2015

Galicia is introducing over 50 000 students to free software tools, making them part of its 2014-2015 curriculum.

In May, Amtega, Galicia’s agency for technological modernisation, signed a contract with the three universities in the region, the Galician Association of Free Software (AGASOL), and six of the region’s free software user groups.

The last paragraph of the article also mentions some changes after the recent elections. Can anybody with more insight explain to me what this may mean for the future of Free Software in Galicia? Thanks!




Saturday, 04 July 2015

What stops small Open Source businesses selling to UK Government?

Sam Tuke » Free Software | 15:22, Saturday, 04 July 2015

On June 26th Open Source leaders met in London for a gathering called by the Community for Open Interoperability Standards — the UK arm of Open Forum Europe, the European Open Source software advocacy group. At a beautiful venue beside the Museum of London, we were hosted by the Worshipful Company of Information Technologists at Barbican station. Ostensibly intended to determine the organisation’s […]

Thursday, 02 July 2015

Continuous integration testing for WordPress plugins on Github using Travis CI

Seravo | 12:04, Thursday, 02 July 2015



We have open sourced and published some plugins, doing the development in Github. Our goal is to keep them simple but effective. Quite a few people are using them actively, and some have contributed back by creating additional features or fixing bugs and docs. It’s super nice to have contributions from someone else, but it’s hard to see whether those changes break your existing features. We all make mistakes from time to time, and it’s easier to recover if you have good test coverage. Automated integration tests can help you out in these situations.

Choosing Travis CI

As we use Github for hosting our code, we wanted a tool which integrates really well with it. Travis works seamlessly with Github and is free to use for open source projects. Travis gives you the ability to run your tests in controlled environments which you can modify to your preferences.


You need a Github account in order to set up Travis for your projects.

How to use

1. Sign up for a free Travis account

Just click the link on the page and enter your Github credentials.

2. Activate testing in Travis. Go to your accounts page from the top right corner.


Then go to your Organisation page (or choose a project of your own) and activate the projects you want to be tested in Travis.


3. Add a .travis.yml file to the root of your project repository. You can use the samples from the next section.


After you have pushed to Github, just wait a couple of seconds and your tests should start automatically.

Configuring your tests

I think the hardest part of Travis testing is just getting started. That’s why I created a testing template for WordPress projects. You can find it in our Github repository. Next I’m going to show you a few different ways to use Travis. We are going to split tests into unit tests with PHPUnit and integration tests with RSpec, Poltergeist and PhantomJS.

#1 Example .travis.yml: use RSpec integration tests to make sure your plugin won’t break anything else

This is the easiest way to use Travis with your WordPress plugin. It installs the latest WordPress and activates your plugin, then checks that your front page is working and that you can log into the admin panel. Just drop this .travis.yml into your project and start testing :)!

sudo: false
language: php

notifications:
  email:
    on_success: never
    on_failure: change

php:
  - nightly # PHP 7.0
  - 5.6
  - 5.5
  - 5.4

env:
  - WP_PROJECT_TYPE=plugin WP_VERSION=latest WP_MULTISITE=0 WP_TEST_URL=http://localhost:12000 WP_TEST_USER=test WP_TEST_USER_PASS=test

matrix:
  allow_failures:
    - php: nightly

before_script:
  - git clone wp-tests
  - bash wp-tests/bin/ test root '' localhost $WP_VERSION

script:
  - cd wp-tests/spec && bundle exec rspec test.rb

#2 Example .travis.yml, which uses PHPUnit and RSpec integration tests

  1. Copy phpunit.xml and the tests folder from: into your project

  2. Edit the tests/bootstrap.php line containing PLUGIN_NAME according to your plugin:

  3. Add the .travis.yml file

sudo: false
language: php

notifications:
  email:
    on_success: never
    on_failure: change

php:
  - nightly # PHP 7.0
  - 5.6
  - 5.5
  - 5.4

env:
  - WP_PROJECT_TYPE=plugin WP_VERSION=latest WP_MULTISITE=0 WP_TEST_URL=http://localhost:12000 WP_TEST_USER=test WP_TEST_USER_PASS=test

matrix:
  allow_failures:
    - php: nightly

before_script:
  # Install composer packages before trying to activate themes or plugins
  # - composer install
  - git clone wp-tests
  - bash wp-tests/bin/ test root '' localhost $WP_VERSION

script:
  - phpunit
  - cd wp-tests/spec && bundle exec rspec test.rb

For this to be useful, you need to add tests specific to your plugin.

To get you started see how I did it for our plugins HTTPS Domain Alias & WP-Dashboard-Log-Monitor.

A few useful links:

If you want to contribute to better WordPress testing, open an issue or send a pull request to our WordPress testing template.

Seravo can help you use PHPUnit, RSpec and Travis in your projects. Please feel free to ask us about our WordPress testing via email at or in the comment section below.


Applying the most important lesson for non-developers in Free Software through Roundcube Next

freedom bits | 08:01, Thursday, 02 July 2015

Software is a social endeavour. The most important advantage of Free Software is its community, because the best Open Source is built by a community of contributors, contribution being the single most important currency and the differentiation between users and community. You want to be part of that community, at least by proxy, because like any community, its members spend time together, exchange ideas, and create cohesion that translates into innovation, features and best practices.

We create nothing less than a common vision of the future.

By the rules of our community, anyone can take our software and use it, extend it, distribute it. A lot of value can be created this way and not everyone has the capabilities to contribute. Others choose not to contribute in order to maximise their personal profits. Short of actively harming others, egoism, even in its most extreme forms, is to be accepted. That is not to say it is necessarily a good idea for you to put the safeguarding of your own personal interests into the hands of an extreme egoist. Or that you should trust in their going the extra mile for you in all the places that you cannot verify.

That is why the most important lesson for non-developers is this: Choose providers based on community participation. Not only are they more likely to know early about problems, putting them in a much better position to provide you with the security you require. They will also ensure you will have a future you like.

Developers know all this already, of course, and typically apply it at least subconsciously.

Growing that kind of community has been one of the key motives to launch Roundcube Next, which is now coming close to closing its phase of bringing together its key contributors. Naturally everyone had good reasons to get involved, as recently covered on Venturebeat.

Last night the campaign gained its single greatest contributor yet, helping to build that better future together, for everyone. Over the past weeks, many other companies, some big, some small, have done the same.

Together, we will be that community that will build the future.

Monday, 29 June 2015

The buildscript

Told to blog - Entries tagged fsfe | 17:20, Monday, 29 June 2015

At the start of this month I deployed the new build script on the running test instance of
I'd like to give an overview of its features and limitations. By the end, you should be able to understand the build logs on our web server and to test web site changes on your own computer.

General Concept

The new build script (let's call it the 2015 revision) emulates the basic behaviour of the old build script (the circa-2002 revision). The rough idea is that the web site starts as a collection of xhtml files, which get turned into html files.

The Main build

An xhtml file on the input side contains a page text, usually a news article or informational text. When it is turned into its corresponding html output, it is enriched with menu headers, the site footer, tag-based cross-links, etc. In essence, however, it is still the same article, and one xhtml input file normally corresponds to one html output file. The rules for the transition are described in the XSLT language. The build script finds the transition rules for each xhtml file in an xsl file; each xsl file normally provides rules for a number of pages.

Some xhtml files contain special references which cause the output to include data from other xhtml and xml files. For example, the news page contains headings from all news articles, and the front page has some quotes rolling through, which are loaded from a different file.

The build script coordinates the tools which perform the build process. It selects xsl rules for each file, handles different language versions, and the fallback for non-existing translations, collects external files for inclusion into a page, and calls the XSLT processor, RSS generator, etc.

Files like PNG images and PDF documents simply get copied to the output tree.

The pre build

Aside from committing images, changing XML/XHTML code, and altering inclusion rules, authors have the option to have dynamic content generated at build time. This is mostly used for our PDF leaflets but occasionally comes in handy for other things as well. In different places the source directory contains files called Makefile. These are instruction files for the GNU make program, a system used for running compilers and converters to generate output files from source code. A key feature of make is that it regenerates output files only if their source code has changed since the last generator run.
GNU make is called by the build script prior to its own build run. This goes for both the 2002 and 2015 revisions of the build script. Make itself runs some XSLT-based conversions and PDF generators to set up news items for later processing and to build PDF leaflets. The output goes to the website's source tree for later processing by the build script. When building locally, you must be careful not to commit generated files to the SVN repository.
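Make's up-to-date check is the heart of this incremental behaviour. As a hypothetical illustration (the file names are invented), the same effect can be sketched in plain shell with the `-nt` (newer-than) test, which is also true when the output file does not exist yet:

```shell
# Rebuild out.html only when in.xml has changed since the last run.
# This mimics what GNU make does for each target/prerequisite pair.
workdir=$(mktemp -d)
cd "$workdir"
echo '<page/>' > in.xml

# [ in.xml -nt out.html ] is true if in.xml is newer than out.html,
# or if out.html does not exist yet - so the first run always builds.
if [ in.xml -nt out.html ]; then
  cp in.xml out.html        # stand-in for the real XSLT conversion
  echo "rebuilt"
else
  echo "up to date"
fi
```

Run it a second time and it prints "up to date" and does nothing, just as a repeated pre build finishes in under a minute.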

Build times

My development machine "Vulcan" uses relatively lightweight hardware by contemporary standards: an Intel Celeron 2955U with two Haswell CPU cores at 1.4 GHz and an SSD for mass storage.
I measured the time for some build runs on this machine; however, our web server "Ekeberg", despite running older hardware, seems to perform slightly faster. Our future web server "Claus", which is not yet productively deployed, seems to work a little slower. The script performs most tasks multi-threaded and can profit greatly from multiple CPU cores.

Pre build

The above-mentioned pre build takes a long time when it is first run. However, once its output files are set up, they will hardly ever be altered.

Initial pre build on Vulcan: ~38 minutes
Subsequent pre build on Vulcan: < 1 minute

2002 Build script

When the build script is called, it first runs the pre build. All timing tests listed here were performed after an initial pre build. This way, as in normal operation, the time required for the pre build has an almost negligible impact on the total build time.

Page rebuild on Vulcan: ~17 minutes
Page rebuild on Ekeberg: ~13 minutes

2015 Build script

The 2015 revision of the build script is written in shell script, while the 2002 implementation was in Perl. The Perl script used to call the XSLT processor as a library and passed a pre-parsed XML tree into the converter. This way it was able to keep a pre-parsed version of all source files in cache, which was advantageous as it saved reparsing files which are included repeatedly. For example, news collections are included in different places on the site, and missing language versions of an article are usually filled with the same English text while retaining menu links and page footers in their respective translation.
The shell script never parses an XML tree itself; instead it uses quicker shortcuts for the few XML modifications it has to perform. This means, however, that it has to pass raw XML input to the XSLT program, which then has to perform the parsing over and over again. On the plus side, this makes operations more atomic from the script's point of view and aids in implementing a dependency-based build, which can spare it completely from rebuilding most of the files.

To perform a build, the shell script first calculates a dependency vector in the form of a set of make rules. It then uses make to perform the build. This dependency-based build is the basic mode of operation for the 2015 build script.
This can still be tweaked: when the build script updates the source tree from our version control system, it can use the list of changes to update the dependency rules generated in a previous build run. In this differential build, even the dependency calculation is limited to a minimum, with the resulting build time mostly depending on the actual content changes.
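As a hypothetical sketch of that two-step approach (the file names are invented, and the real script's rules are more involved), the rule generation can be pictured like this:

```shell
# Step 1: derive one make rule per page - .xhtml becomes .html.
# The real build calls the XSLT processor in the recipe; the
# 'xsltproc default.xsl' line below is only a placeholder for that call.
workdir=$(mktemp -d)
cd "$workdir"
touch about.en.xhtml news.en.xhtml

for f in *.en.xhtml; do
  printf '%s: %s\n\txsltproc default.xsl %s > %s\n' \
    "${f%.xhtml}.html" "$f" "$f" "${f%.xhtml}.html"
done > Make_xhtml

# Step 2: hand the generated rules to make (not run here);
# make then rebuilds only the pages whose sources changed.
cat Make_xhtml
```

The generated file contains one target/prerequisite pair per page, which is exactly the shape of the Make_xhtml log file described further down.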

Timings taken on the development machine Vulcan:
Dependency build, initial run: 60+ minutes
Dependency build, subsequent run: ~9 to ~40 minutes
Differential build: ~2 to ~40 minutes

Local builds

In the simplest case you check out the web page from subversion, choose a target directory for the output, and build from the source directory directly into the target. Note that in the process the build script will create some additional files in the source directory. Ideally all of those files should be ignored by SVN, so they cannot be accidentally committed.

There are two options that will result in additional directories being set up beside the output.

  1. You can set up a status directory, where log files for the build will be placed. In case you are running a full build this is recommended, because it allows you to inspect the build process. The status directory is also required to run differential builds. If you do not provide a status directory, some temporary files will be created in your /tmp directory, and differential builds will then behave identically to the regular dependency builds.
  2. You can set up a stage directory. You will not normally need this feature unless you are building a live web site. When you specify a stage directory, updates to the website will first be generated there, and only after the full build will the stage directory be synchronised into the target folder. This way you avoid having a website online that is half outdated and half updated. Note, though, that even the chosen synchronisation method (rsync) is not fully atomic.

Full build

The full build is best tested with a local web server. You can easily set one up using lighttpd. Set up a config file, e.g. ~/lighttpd.conf:

server.modules = ( "mod_access" )
$HTTP["remoteip"] !~ "" {
  url.access-deny = ("")    # prevent hosting to the network
}

# change port and document-root accordingly
server.port                     = 5080
server.document-root            = "/home/fsfe/"
server.errorlog                 = "/dev/stdout"
server.dir-listing              = "enable"
dir-listing.encoding            = "utf-8"
index-file.names                = ("index.html", "index.en.html")

include_shell "/usr/share/lighttpd/"

Start the server (I like to run it in foreground mode, so I can watch the error output):

/usr/sbin/lighttpd -Df ~/lighttpd.conf

...and point your browser to http://localhost:5080

Of course you can configure the server to run on any other port. Unless you want to use a port number below 1024 (e.g. 80, the standard port for HTTP), you do not need to start the server as the root user.

Finally build the site:

~/fsfe-trunk/build/ -statusdir ~/status/ build_into /home/fsfe/

Testing single pages

Unless you are interested in browsing the entire FSFE website locally, there is a much quicker way to test changes you make to one particular page, or even to .xsl files. You can build each page individually, exactly as it would be generated during the complete site update:

~/fsfe-trunk/build/ process_file ~/fsfe-trunk/some-document.en.xhtml > ~/some-document.html

The resulting file can of course be opened directly. However, since it contains references to images and style sheets, it may be useful to serve it from a local web server providing the referenced files (mostly the look/ and graphics/ directories).

The status directory

There is no elaborate status page yet. Instead we log different parts of the build output to different files. This log output is visible on

File Description
Make_copy Part of the generated make rules; it contains all rules for files that are just copied as they are. The file may be reused in the differential build.
Make_globs Part of the generated make rules. The file contains rules for preprocessing XML file inclusions. It may be reused during the differential build.
Make_sourcecopy Part of the generated make rules. Responsible for copying xhtml files to the source/ directory of the website. May be reused in differential build runs.
Make_xhtml Part of the generated make rules. Contains the main rules for XHTML to HTML transitions. May be reused in differential builds.
Make_xslt Part of generated make rules. Contains rules for tracking interdependencies between XSL files. May be reused in differential builds.
Makefile All make rules. This file is a concatenation of all the rule files above. While the differential build regenerates the other Make_ files selectively, this one always gets assembled from the input files, which may or may not have been reused in the process. The make program which builds the site uses this file. Note the time stamp: this is the last file to be written before make takes over the build.
SVNchanges List of changes pulled in with the latest SVN update. Unfortunately it gets overwritten with every unsuccessful update attempt (normally every minute).
SVNerrors SVN error output. Should not contain anything ;-)
buildlog Output of the make program performing the build. Possibly the most valuable source of information when investigating a build failure. The last file to be written to during the make run.
debug Some debugging output of the build system, not too informative because it is used very sparsely.
lasterror Part of the error output. Gets overwritten with every run attempt of the build script.
manifest List of all files which should be contained in the output directory. Gets regenerated for every build along with the Makefile. The list is used for removing obsolete files from the output tree.
premake Output of the make-based pre build. Useful for investigating issues that come up during this stage.
removed List of files that were removed from the output after the last run. That is, files that have been part of a previous website revision but do no longer appear in the manifest.
stagesync Output of rsync when copying from a stage directory to the http root. Basically contains a list of all updated, added, and removed files.


Next steps, roughly in this order:

  • move *.sources inclusions from xslt logic to build logic
  • split up translation files
    • both steps will shrink the dependency network and give build times a more favourable tendency
  • deploy on productive site
  • improve status output
  • auto detect build requirements on startup (to aid local use)
  • add support for markdown language in documents
  • add sensible support for other, more distinct, language codes (e.g. pt and pt-br)
  • deploy on DFD site
  • enable the script to remove obsolete directories (not only files)

Friday, 26 June 2015

splitDL – Downloading huge files from slow and unstable internet connections

Max's weblog » English | 15:59, Friday, 26 June 2015

Imagine you want to install GNU/Linux but your bandwidth won’t let you…

tl;dr: I wrote a rather small Bash script which splits huge files into several smaller ones and downloads them. To ensure integrity, every small file is checked against its hash sum and file size.

That’s the problem I was facing in the past days. In the school I’m working at (Moshi Institute of Technology, MIT) I set up a GNU/Linux server to provide services like file sharing, website design (on local servers to avoid the slow internet) and central backups. The ongoing plan is the setup of 5-10 (and later more) new computers with a GNU/Linux OS in contrast to the ancient and non-free WindowsXP installations – project „Linux Classroom“ is officially born.

But to install an operating system on a computer you need an installation medium. In the school a lot of (dubious) WindowsXP installation CD-ROMs are flying around but no current GNU/Linux. In the first world you would just download an .iso file and ~10 minutes later you could start installing it on your computer.

But not here in Tanzania. With download rates averaging 10 kB/s, it takes a hell of a long time to download even one image file (not to mention the cost of the internet usage, ~1-3$ per GB). And that’s not all: periodic power cuts abruptly cancel ongoing downloads. Of course you can restart a download, but the large file may already be damaged and you lose even more time.

My solution – splitDL

To circumvent this drawback I coded a rather small Bash program called splitDL. With this helper script, one is able to split a huge file into smaller pieces. If the power cuts off during the download and damages a file, one just has to re-download that single small file instead of the complete huge file. To detect whether a small file is unharmed, the script creates hash sums of both the original huge file and the several small files. The script also supports continuation of downloads, thanks to the built-in support of the great default application wget.
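The server-side idea can be sketched in a few lines of shell (a hypothetical simplification – the actual splitDL script does more, and the file names here are invented):

```shell
# Stand-in for the huge download (100 kB of zeros instead of a real ISO).
workdir=$(mktemp -d)
cd "$workdir"
head -c 100000 /dev/zero > big.bin

# Split into fixed-size pieces: big.bin.part.aa, .ab, .ac ...
split -b 40000 big.bin big.bin.part.

# Record hash sums for the original file and every piece.
md5sum big.bin big.bin.part.* > big.bin.md5

# A client can then fetch each piece (e.g. with 'wget -c' so interrupted
# pieces can resume), verify it, and reassemble with 'cat':
md5sum -c --quiet big.bin.md5 && echo "all pieces intact"
```

If a power cut corrupts one piece, only that piece fails the `md5sum -c` check and needs to be fetched again.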

You might now think: „BitTorrent (or some other program) can do the same, if not more!“ Yes, but that requires a) the installation of another program and b) a download source which supports the protocol. In contrast, splitDL can handle any HTTP, HTTPS or FTP download.

The downside in the current state is that splitDL requires shell access to the server where the file is stored, in order to split the file and create the necessary hash sums. So in my current situation I use my own virtual server in Germany, on which I download the wanted file at high speed, and then use splitDL to prepare the file for the slow download from my server to the Tanzanian school.

The project is of course still in an ongoing phase and only tested in my own environment. Please feel free to have a look at it and download it via my Git instance. I’m always looking forward to feedback. The application is licensed under GPLv3 or later.

Some examples


Split the file debian.iso into smaller parts with the default options (MD5 hash sum, 10 MB piece size):

 -m server -f debian.iso

Split the file, but use the SHA1 hash sum and split into pieces of 50 MB:

 -m server -f debian.iso -c sha1sum -s 50M

After one of these commands, a new folder called dl-debian.iso/ will be created. It contains the split files and a document with the hash sums and file sizes. You just have to move the folder to a web-accessible location on your server.


Download the split files with the default options:

 -m client -f http://server.tld/dl-debian.iso/

Download the split files, but use the SHA1 hash sum (it has to be the same as was used in the creation process) and override the wget options (default: -nv --show-progress):

 -m client -f http://server.tld/dl-debian.iso/ -c sha1sum -w --limit-rate=100k

Current bugs/drawbacks

  • Currently only single files can be split. This will be fixed soon.
  • Currently the script only works with files in the current directory. This, too, is only a matter of a few lines of code.

Wednesday, 24 June 2015

Lots of attention for Oettinger’s transparency problem

Karsten on Free Software | 07:52, Wednesday, 24 June 2015

It seems I’m not the only one interested in who the European Commissioner for Digital Economy and Society, Guenther H. Oettinger, is meeting with.

This morning, Spiegel Online is running a thorough piece (in German, natch). Politico Europe has the story, too. And Transparency International is launching EU Integrity Watch, making it easy to see who in the Commission’s upper echelons has been seeing which interest representatives.

The upshot of all this? Oettinger is meeting almost exclusively with corporate lobbyists, or with people acting on behalf of corporates. According to Spiegel Online’s figures, 90% of the Commissioner’s meetings were with corporate representatives, business organisations, consultancies and law firms. Only 3% of his meetings were with NGOs. Of the top ten organisations he’s meeting with, seven are telecoms companies, most of whom are staunchly opposed to net neutrality.

Oettinger and Commission Vice President Ansip launched the Digital Single Market (DSM) package on May 6, including some last-minute tweaks that pleased broadcasters. During the weeks before the publication of the DSM package, Oettinger met with lots of broadcasters. His office also neglected to publish those meetings until we started pushing for the data to be released. Even in a Commission where three quarters of the meetings involve corporate lobbyists, Oettinger’s one-sidedness sticks out like a sore thumb.

With its decision to publish at least high-level meetings, the Commission has taken a commendable step to shed some light on the flows of influence in Brussels. In this regard, it is far ahead of most national governments other than those of the Scandinavian countries. Now that at least a bit of sunlight is flowing in, it’s easier to see the ugly spots. Oettinger clearly needs to be much more balanced in selecting his meeting partners. His superiors in the Commission need to make sure that he improves.

Thursday, 18 June 2015

Farewell, for now

Karsten on Free Software | 12:07, Thursday, 18 June 2015

This is a blog post that I’m writing with a wistful smile. About two years ago, I decided that I would eventually move on from my role as FSFE’s president. We’ve been preparing the leadership transition ever since. Now the time has come to take one of the larger steps in that process.

Today is my last day actively handling operations at FSFE.

Since our Executive Director Jonas Öberg came on board in March, I have progressively been handing FSFE’s day-to-day management over to him. From tomorrow, our Vice President Matthias Kirschner will take over my responsibility for our policy work.

I’m going to remain FSFE’s president until September. But I’m finally enjoying a luxury that was out of reach in previous years: I’m taking two months of parental leave, until mid-August. At FSFE’s General Assembly in September, we will elect my successor.

It’s been six intense and amazing years since I took over FSFE’s presidency. The organisation has grown a lot, and matured a lot. Our team has worked incredibly hard to promote Free Software, and to put users in control of technology. As I’m preparing to move on, I know that I’m leaving FSFE in great shape, and in very competent hands.

Being FSFE’s president is a great job. I’ve always considered that one of the main perks is the exceptional people I got to work with every day. What they all have in common is that they never give up. Each in their own way, they will bang their heads against the world until the world gives way. It’s time to publicly thank some of them.

Jamie Love is a prime example of that sort of person. Though he might not know it, he has taught me a lot about campaigning for a better, fairer society.

When I took the helm at FSFE, money was tight. We sometimes didn’t know how we would pay next month’s salaries, and that was pretty scary. That the organisation is in excellent financial shape today is in no small part due to our Financial Officer Reinhard Müller, with his sound judgement and firm grip on the purse strings.

Carlo Piana and Till Jaeger are great lawyers to have on your side. (Much better than having them against you. Just ask Microsoft.) They have been extremely generous with their time, knowledge and skills.  Shane Coughlan doesn’t so much face down adversity as talk to it gently, pour it a drink of whisky, take it for a walk, and hit it firmly over the head in a dark alley.

I’ve had help from a lot of people in lobbying for Free Software in Brussels. Not all of them would benefit if I thanked them publicly. So I’ll ask Erik Josefsson to stand in for them. He has been a relentless and resourceful advocate for software freedom in the European Parliament, and has received far less gratitude than he’s due.

FSFE’s Legal Coordinator Matija Šuklje is a universal geek in the very best way, and a close friend. He’s about to become a fine lawyer, though if he were to limit himself to lawyering, that would be an awful waste of talent. Fortunately, there’s not too much risk of that happening, as he’s never quite able to keep his nose out of anything that he finds even vaguely interesting.

Having Jonas Öberg take on the Executive Director role has been like putting a new gearbox into the organisation’s machinery. Everything has started to move more smoothly and efficiently.

Matthias Kirschner has been working with me all these years. He has moved through various roles in FSFE, continuously taking on more responsibility. He’s had a great part in shaping our success. He is energetic, creative and razor sharp, and a very good friend to me. We couldn’t ask for anyone more skilled and dedicated to take over the role of FSFE’s President a few months from now, and I’m confident that the members of our General Assembly will agree.

These have been very good years for Free Software, for FSFE, and for me personally. As I move to a more supervisory role as a member of FSFE’s General Assembly, I look forward to seeing the seeds we’ve planted grow.

And with that, I’d like to sign off for the summer. I’m still finalising what my next job is going to be – that’s something I’ll work out during a long vacation with my family, who deserve far more of my time than they’ve been getting.

See you on the other side!

Tuesday, 16 June 2015

You can learn a lot from people’s terminology

Paul Boddie's Free Software-related blog » English | 21:23, Tuesday, 16 June 2015

The Mailpile project has been soliciting feedback on the licensing of their product, but I couldn’t let one of the public responses go by without some remarks. Once upon a time, as many people may remember, a disinformation campaign was run by Microsoft to attempt to scare people away from copyleft licences, employing insensitive terms like “viral” and “cancer”. And so, over a decade later, here we have an article employing the term “viral” liberally to refer to copyleft licences.

Now, as many people already know, copyleft licences are applied to works by their authors so that those wishing to contribute to the further development of those works will do so in a way that preserves the “share-alike” nature of those works. In other words, the recipient of such works promises to extend to others the privileges they experienced themselves upon receiving the work, notably the abilities to see and to change how it functions, and the ability to pass on the work, modified or not, under the same conditions. Such “fair sharing” is intended to ensure that everyone receiving such works may be equal participants in experiencing and improving the work. The original author is asking people to join them in building something that is useful for everyone.

Unfortunately, all this altruism is frowned upon by some individuals and corporations who would prefer to be able to take works, to use, modify and deploy them as they see fit, and to refuse to participate in the social contract that copyleft encourages. Instead, those individuals and corporations would rather keep their own modifications to such works secret, or even go as far as to deny others the ability to understand and change any part of those works whatsoever. In other words, some people want a position of power over their own users or customers: they want the money that their users and customers may offer – the very basis of the viability of their precious business – and in return for that money they will deny their users or customers the opportunity to know even what goes into the product they are getting, never mind giving them the chance to participate in improving it or exercising control over what it does.

From the referenced public response to the licensing survey, I learned another term: “feedstock”. I will admit that I had never seen this term used before in the context of software, or I don’t recall its use in such a way, but it isn’t difficult to transfer the established meaning of the word to the context of software from the perspective of someone portraying copyleft licences as “viral”. I suppose that here we see another divide being erected between people who think they should have most of the power (and who are somehow special) and the grunts who merely provide the fuel for their success: “feedstock” apparently refers to all the software that enables the special people’s revenue-generating products with their “secret ingredients” (or “special sauce” as the author puts it) to exist in the first place.

It should be worrying for anyone spending any time or effort on writing software that by permissively licensing your work it will be treated as mere “feedstock” by people who only appreciate your work as far as they could use it without giving you a second thought. To be fair, the article’s author does encourage contributing back to projects as “good karma and community”, but then again this statement is made in the context of copyleft-licensed projects, and the author spends part of a paragraph bemoaning the chore of finding permissively-licensed projects so as to not have to contribute anything back at all. If you don’t mind working for companies for free and being told that you don’t deserve to see what they did to your own code that they nevertheless couldn’t get by without, maybe a permissive licence is a palatable choice for you, but remember that the permissive licensing will most likely be used to take privileges away from other recipients: those unfortunates who are paying good money won’t get to see how your code works with all the “secret stuff” bolted on, either.

Once upon a time, Bill Gates remarked, “A lot of customers in a sense don’t want — the notion that they would go in and tinker with the source code, that’s the opposite of what’s supposed to go on. We’re supposed to give that to them and it’s our problem to make sure that it works perfectly and they rely on us.” This, of course, comes from a man who enjoys substantial power through accumulation of wealth by various means, many of them involving the denial of choice, control and power to others. It is high time we all stopped listening to people who disempower us at every opportunity so that they can enrich themselves at our expense.

Getting official on Oettinger’s lobbyist meetings [Update]

Karsten on Free Software | 04:22, Tuesday, 16 June 2015

We’ve been looking at how EU Commissioner Günther Oettinger is handling transparency on his meetings with lobbyists. Turns out, not very well. In response to our informal questions, the Commission updated the lists of meetings over the weekend.

There are two lists of meetings with interest representatives. The one for Oettinger’s cabinet (i.e. his team) looks reasonably complete. The list for the Commissioner himself, however, is a different story. Apparently, we are to believe that he has met just six lobbyists in the past four months. That would be an astounding lack of activity for the person in charge of some of the EU’s most contested policy issues: data protection, copyright reform and net neutrality.

[UPDATE: Oettinger's team has continued to add meetings to the list. Both the lists for Oettinger himself and for his team look pretty reasonable now, at least up to the beginning of June. The most recent published meeting was on June 3, almost three weeks ago. There is a significant and unexplained gap in Oettinger's list during April, with just one meeting listed for the period between April 1 and April 20. Oettinger's Head of Cabinet, Michael Hager, has written to me and explained that a long-term sickness leave in the cabinet has led to a delay in publishing the meetings.]

Since the Commission apparently doesn’t fully respond to informal prodding, the excellent Kirsten Fiedler over at EDRi has filed an official request for access to documents. If you, like me, are curious about what’s keeping the EU’s digital czar busy, you can follow the request at

In addition to the full list of meetings, it would be interesting to know what guidelines Oettinger and his team use when it comes to transparency on less formal meetings with lobbyists. Presumably, the Commissioner meets people not just at his office, but also at the events where he frequently appears as a speaker. This is a substantial hole in the Commission’s own transparency policy, and I’d love to know how  Oettinger and his team are planning to fill it.

Friday, 12 June 2015

Oettinger’s transparency problem, part II

Karsten on Free Software | 07:31, Friday, 12 June 2015

On Wednesday, I pointed out that Commissioner Oettinger, who handles matters of digital economy and society for the European Commission, has not been keeping up on publishing his meetings with lobbyists.

Though the Commission’s generic inquiry team hasn’t gotten back to me yet, I thought I’d accelerate the process a little. So I’ve now sent a mail to the cabinet of First Vice President Timmermans, who is in charge of coordinating the Commission’s work on transparency:

Dear Ms Sutton,
dear Messrs Timmermans and Hager,

I am writing to you with a question regarding the publication of meetings by Commissioner Oettinger and his cabinet with interest representatives.

When the Commission announced on November 25, 2014, that it would henceforth publish all meetings that the Commissioners and their teams held with interest representatives, this was universally welcomed. Especially civil society organisations like FSFE greeted this announcement as an important step which would serve to increase the trust of European citizens in the Commission.

At the Free Software Foundation Europe, we are highly appreciative of the results this has brought, and we value the efforts made by the Commission to increase transparency through this mechanism.

So it is with some regret that we feel the need to point out that, while most Commissioners and teams publish their meetings with great diligence, some are unfortunately not living up to the expectations set by the Commission’s announcement from November.

Given FSFE’s focus on matters of digital policy, we see a specific need to highlight that Commissioner Oettinger and his cabinet appear to be somewhat behind on publishing their meetings on the relevant web pages.

The last meeting which Commissioner Oettinger has published took place on February 20, 2015:

The most recent published meeting of one of the Commissioner’s team members took place on March 25, 2015:

Given that Commissioner Oettinger is heading up, together with Vice President Ansip, one of the Commission’s flagship initiatives (which, accordingly, is hotly contested), the Digital Single Market, we submit that the highest standards of transparency on meetings should be applied here.

We hope that the Commission, and in particular Commissioner Oettinger and his cabinet, will see themselves able to update the published list of meetings at the earliest opportunity. We believe that a formal Request for Access to Documents is not needed to deal with such a routine matter, and are confident that the Commission will quickly move to rectify the situation.

In closing, please let me reiterate FSFE’s deep appreciation of the Commission’s commitment to transparency.


Karsten Gerloff
President, Free Software Foundation Europe

Let’s see if this brings any results. If not, there’s always the formal request for access to documents. But if we had to resort to this tool in order to remind the Commission of its own commitments, that would amount to a confession of failure on the part of the EC.

Thursday, 11 June 2015

FSFE Fellowship at Veganmania in Vienna 2015

FSFE Fellowship Vienna » English | 19:10, Thursday, 11 June 2015

Gregor mans the information desk
Martin not yet behind the information desk
René fully engaged
The four meter long information desk

This year’s vegan summer festival in Vienna was once more bigger than ever before. It not only lasted four days, but also doubled in size. Last year, 35 exhibitors were present. From Wednesday 3rd to Saturday 6th of June 2015, no less than 70 organisations and companies had set up their stalls in front of the Museumsquartier (MQ), opposite the famous museums of art history and natural history.

But it was not only the festival itself that had got bigger: our already traditional information stand was larger too. We were given more space and could therefore offer about four meters of tightly packed information material for a total of 50 hours (excluding breaks). Unfortunately, besides me, Gregor was the only one of our Fellows available to man the stall. He came on Wednesday and Thursday. Luckily this didn’t cause serious problems, since we received unexpected help from other people later on.

Martin has been using Free Software for quite a while and has visited our stalls on various occasions over the years, so I knew that he is very knowledgeable about the issues we usually discuss with people. When I asked him for help, he instantly shifted his timetable and jumped in when I needed to rush to the Radio Orange studio for a live show about the festival itself. Not everyone feels comfortable handling the technical side of live radio shows, even when, as in this case, it is very easy.

Radio Orange is an interesting subject in its own right. Austria was quite late in liberalising radio licenses. One of the first free radios was Radio Orange (o94), and it is set up completely with Free Software. I am constantly amazed how well this is done. Hundreds of very different people use its setup on a regular basis, some more frequently than others. Some are very computer savvy; others avoid computers altogether. But the obviously very skilled technicians who built and administer the radio’s setup manage to give all these very different people a good experience. I’ve been helping with two shows for quite some time now, and everything runs 24/7. People doing their own shows just enter the live studio and start talking at the right time. It’s as easy as that. Pre-recorded shows are handled just as effortlessly: it’s possible to upload shows beforehand, and they get aired at the right time automatically. Heck, there is even an automatic replacement if someone doesn’t show up or forgot to upload anything. I don’t know any other example of a complicated system with such a wide range of user types running this smoothly. Of course I have encountered glitches from time to time, but they were small and dealt with quickly. This is an impressive example of how powerful and reliable Free Software can be.

Back to the festival: Martin did a great job manning the stall while I was away for the radio show. When I was back, he even stayed longer to support the stall. Many friends of mine visited me at the stall, but there wasn’t much chance to talk to me: I was involved in interesting, engaging conversations about Free Software with ordinary visitors virtually the whole time. Often my friends didn’t even get the chance to talk to me and went away after waiting a while for me to become available again. Even when more than one person was manning our information desk, some people didn’t get the chance to talk to us, because there was more demand than we could meet.

On Friday, René from Frankfurt, Germany, showed up. Originally he had just made the journey to visit the Veganmania festival. He had his luggage with him and got stuck at our desk. At first he was just a normal visitor, but after a while he stepped in because there were so many people wanting to ask questions, and he obviously could answer them. In the end he helped man our desk until Saturday night. Happy with his competent help, I invited him to stay at my place, and we had a great time discussing Free Software ideas until late in the night. So we didn’t get much sleep, because we set up the stall at about 9:30am each day and stayed there until 10pm. Unfortunately, we got on so well that he missed his train on Sunday. He therefore had to endure an unpleasant train ride back home without the possibility to sleep.

I appreciate the unexpected help from Martin and René and hope they will stick around. Both assured me they loved the experience and want to do it again in future.

As usual, many people were lured to our desk by the Free Your Android poster. Others just dropped by to find out what this was all about, since they didn’t expect our subject at a vegan summer festival. But of course it was easy to explain how activism and Free Software relate to each other. In the end we ran out of several leaflets and stickers. In the hot weather we didn’t manage to sell the last of our Fellowship hoodies, but we sold some “there is no cloud …” bags and also received a donation.

The information desk marathon left us with a considerably smaller leaflet stack, skin as brown as after two weeks of holidays, and many great memories of discussions with very different people. The Veganmania summer festivals in Vienna are clearly worth the effort. We were even explicitly invited to join the vegan summer festival in Graz in September, since the organisers figured they wanted someone there to inform people about Free Software as well. I guess it is not necessary for us to travel to Graz, since I’m told there are dedicated Free Software advocates there too.

Wednesday, 10 June 2015

Oettinger has a transparency problem

Karsten on Free Software | 12:37, Wednesday, 10 June 2015

In November of last year, the European Commission loudly trumpeted a new-found commitment to transparency. In a press release, it said that from now on, all meetings between Commissioners, their team members (the “cabinet”) and interest representatives would be made public. I was always curious how well this promise would hold up in practice.

Not very well, it seems now. The meeting pages for the EU Commissioner for Digital Economy and Society, Günther Oettinger, look rather deserted. If the pages are to be believed, Oettinger last met anyone on February 20, while his cabinet members at least interacted with lobbyists until March 25.

Given that Oettinger appears to be alive and well, I am curious about what’s going on here. As a first step, I have informally contacted the Commission and requested that the pages be updated. If I don’t get a reply within the advertised three business days, I will file a request for access to documents.

Here’s my mail to the Commission:

Dear Madam, Sir,

on November 25, 2014, the Commission issued a press release [] announcing that all meetings by Commissioners and their cabinets would be made public on the Commission’s website.

It appears that in the case of Commissioner Oettinger, the EC has fallen behind somewhat on this commitment.

On the relevant web page, the latest meeting listed for Commissioner Oettinger took place on Feb. 20, 2015. The latest meeting listed for members of his cabinet was on March 25.

You will agree that this state of affairs is not satisfactory with regards to transparency. I would like to request that you provide me – and ideally my fellow citizens – with comprehensive information on the meetings held by Commissioner Oettinger after February 20, 2015, and those held by his cabinet members after March 25, 2015.

While I would be happy to receive this information by email, I would much prefer if the relevant web pages were simply updated to reflect the most recent meetings.

Thank you for your assistance.

With kind regards,
Karsten Gerloff

Tuesday, 09 June 2015

Firefox with Tor/Orbot on Android

Jens Lechtenbörger » English | 19:42, Tuesday, 09 June 2015

In my previous post, I explained three steps for more privacy on the Net, namely (1) opt out from the cloud, (2) encrypt your communication, and (3) anonymize your surfing behavior. If you attempt (3) via Tor on Android devices, you need to be careful.

I was surprised how complicated anonymized browsing is on Android with Firefox and Tor. Be warned! Some believe that Android is simply a dead end for anonymity and privacy, as phones are powerful surveillance devices, easily exploitable by third parties. An excellent post by Mike Perry explains how to harden Android devices.

Anyway, I’m using an Android phone (without Google services, as explained elsewhere), and I want to use Tor for occasional surfing while resisting mass surveillance. Note that my post is unrelated to targeted attacks and espionage.

The Tor port to Android is Orbot, which can potentially be combined with different browsers. In any case, the browser needs to be configured to use Tor/Orbot as proxy. Some browsers need to be configured manually, while others are pre-configured. At the moment, nothing works out of the box, though, as you can see in this thread on the Tor Talk mailing list.

Firefox on Android mostly works with Orbot, but downloads favicons without respecting proxy preferences, which is a known bug. In combination with Tor, this bug is critical, as the download of favicons reveals the real IP address, defeating anonymization.

Some guides for Orbot recommend Orweb, which has too many open issues to be usable. Lightning Browser is also unusable for me. Currently, Orfox is under development (a port of the Tor Browser to Android). Just as plain Firefox, though, Orfox deanonymizes Tor users by downloading favicons without respecting proxy preferences, revealing the real IP address.

The only way of which I’m aware to use Firefox or Orfox with Tor requires the following manual proxy settings, which only work over Wi-Fi.

  1. Connect to your Wi-Fi and configure the connection to use Tor as system proxy: Under the Wi-Fi settings, long-press on your connection, choose “Modify network” → “Show advanced options”. Select “Manual” proxy settings and enter localhost and port 8118 as HTTP proxy. (When you start Orbot, it provides proxy services into the Tor network at port 8118.)

  2. Configure Firefox or Orfox to use the system proxy and avoid DNS requests: Type about:config into the address bar and verify that network.proxy.type is set to 5, which should be the default and lets the browser use the system proxy (the system proxy is also used to fetch favicons). Furthermore, you must set network.proxy.socks_remote_dns to true, which is not the default. Otherwise, the browser leaks DNS requests that reveal your real IP address.

  3. Start Orbot, connect to the Tor network.

  4. Surf anonymized. At the moment you need to configure the browser’s privacy settings to clear private data on exit. Maybe you want to wait for an official Orfox release.
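For a desktop-style Firefox profile, the two critical preferences from step 2 can also be persisted in a user.js file instead of being toggled by hand in about:config. This is only a sketch: the profile path below is an assumption (locate your real profile directory first), and on Android itself the prefs still have to be set via about:config as described above.

```shell
# Sketch: persist the two critical prefs in a profile's user.js.
# The PROFILE path is an assumption -- substitute your actual profile directory.
PROFILE="${PROFILE:-$HOME/.mozilla/firefox/default}"
mkdir -p "$PROFILE"
cat >> "$PROFILE/user.js" <<'EOF'
user_pref("network.proxy.type", 5);                 // use the system proxy
user_pref("network.proxy.socks_remote_dns", true);  // resolve DNS through the proxy
EOF
grep 'network.proxy' "$PROFILE/user.js"
```

The second pref is the one most easily forgotten, and it is exactly the one whose absence leaks DNS requests outside Tor.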

Three steps towards more privacy on the Net

Jens Lechtenbörger » English | 19:06, Tuesday, 09 June 2015

Initially, I wanted to summarize my findings concerning Tor with Firefox on Android. Then I decided to start with an explanation of why I care about Tor at all. The summary that I initially had in mind follows in a subsequent post.

I belong to a species that appears to be on the verge of extinction. My species believes in the value of privacy, also on the Net. We have not yet despaired or resigned in the face of mass surveillance and ubiquitous, surreptitious, nontransparent data brokering. Instead, we made a deliberate decision to resist.

People around us seem indifferent to mass surveillance and data brokerage. Recent empirical research indicates that they have resigned. In consequence, they submit to the destruction of our privacy (theirs and, though they don’t realize it, also mine). I may be an optimist in believing that my species can spread by the proliferation of simple ideas. This is an infection attempt.

Step 1. Opt out of the cloud and its piracy policies.

In this post, I use the term “cloud” as a placeholder for convenient, centralized services provided by data brokers from remote data centers. Such services are available for calendar synchronization, file sharing, e-mail and messaging, and I recommend avoiding those services that gain access to “your” data, turn it into their data, and generously grant access rights back to you (next to their business partners as well as intelligence agencies and other criminals with access to their infrastructure).

My main advice is simple, if you are interested in privacy: Opt out of the cloud. Do not entrust your private data (e-mails, messages, photos, calendar events, browser history) to untrustworthy parties with incomprehensible terms of service and “privacy” policies. The typical goal of a “privacy” policy is to make you renounce your right to privacy and to allow companies the collection and sale of data treasures based on your data. Thus, you should really think of a “piracy policy” whenever you agree to those terms. (By the way, in German, I prefer “Datenschatzbedingungen” to “Datenschutzbedingungen” towards the same end.)

Opting out of the cloud may be inconvenient, but it is necessary and possible. Building on a metaphor that I borrow from Eben Moglen, privacy is an ecological phenomenon. All of us can work jointly towards the improvement of our privacy, or we can pollute our environment, pretending that we don’t know better or that each individual has little or no influence anyway.

While your influence may be small, you are free to choose. You may choose to send e-mails via some data broker. If you make that choice, then you force your friends to send replies intended for your eyes to your data broker, reducing their privacy. Alternatively, you may choose some local, more trustworthy provider. Most likely, good alternatives are available in your country; there certainly are some in Germany such as and Posteo (both were tested positively in February 2015 by Stiftung Warentest; in addition, I’m paying 1€ per month for an account at the former). Messaging is just the same. You are free to contribute to a world-wide, centralized communication monopoly, sustaining the opposite of private communication, or to choose tools and services that allow direct communication with your friends, without data brokers in between. (Or you could use e-mail instead.) Besides, you are free to use alternative search engines such as Startpage (which shows Google results in a privacy friendly manner) or meta search engines such as MetaGer or ixquick.

Step 2. Encrypt your communication.

I don’t think that there is a reason to send unencrypted communication through the Net. Clearly, encryption hinders mass surveillance and data brokering. Learn about e-mail self-defense. Learn about off-the-record (OTR) communication (sample tools at PRISM Break).

Step 3. Anonymize your surfing behavior.

I recommend Tor for anonymized Web surfing to resist mass surveillance by intelligence agencies as well as profiling by data brokers. Mass surveillance and profiling are based on bulk data collection, where it’s easy to see who communicates where and when with whom, potentially about what. It’s probably safe to say that with Tor it is not “easy” any more to see who communicates where and when with whom. Tor users do not offer this information voluntarily, they resist actively.

On desktop PCs, you can just use the Tor Browser, which includes the Tor software itself and a modified version of the Firefox browser, specifically designed to protect your privacy, in particular in view of basic and sophisticated identification techniques (such as cookies and various forms of fingerprinting).

On Android, Tor Browser does not exist, and alternatives need to be configured carefully, which is the topic for the next post.

Monday, 08 June 2015

Setting up a BeagleBone Black to flash coreboot

the_unconventional's blog » English | 13:30, Monday, 08 June 2015

As most readers will know, I like coreboot. Recently, I’ve successfully flashed a T60 and an X200 with a Raspberry Pi, but especially the latter was a cumbersome experience.

That pushed me towards buying a BeagleBone Black: Libreboot’s recommended hardware flasher. However, the BBB does not work out of the box. While Libreboot’s documentation is all right, it did take me a while to get everything to work as intended.


Initial boot

First, connect your BBB’s ethernet port to a router or switch. After the initial boot, it may take a while for all services to start, but eventually it should obtain a DHCP lease. If multicast DNS is working on your network, you can reach the BBB as beaglebone.local.

Log in as root over SSH. There is no root password. Set a root password.

ssh root@beaglebone.local
passwd root

In case you have a BBB from element14, you’ll most likely have to fix a bug in the init script that prevents you from updating Debian.
(Aren’t init scripts great, systemd-haters? :)

Run nano /etc/init.d/ and replace its contents with the following lines:

#!/bin/sh -e
### BEGIN INIT INFO
# Provides:
# Required-Start:    $local_fs
# Required-Stop:     $local_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start LED aging
# Description:       Starts LED aging (whatever that is)
### END INIT INFO

x=$(/bin/ps -ef | /bin/grep "[l]ed_acc")
if [ ! -n "$x" -a -x /usr/bin/led_acc ]; then
    /usr/bin/led_acc &
fi

Now, update all packages and reboot the BBB.

apt-get update
apt-get dist-upgrade
systemctl reboot


Setting up spidev

First, create a new file called BB-SPI0-01-00A0.dts.

nano BB-SPI0-01-00A0.dts

Add the following lines and save the file:


/dts-v1/;
/plugin/;

/ {
    compatible = "ti,beaglebone", "ti,beaglebone-black";

    /* identification */
    part-number = "spi0pinmux";

    fragment@0 {
        target = <&am33xx_pinmux>;
        __overlay__ {
            spi0_pins_s0: spi0_pins_s0 {
                pinctrl-single,pins = <
                  0x150 0x30  /* spi0_sclk, INPUT_PULLUP | MODE0 */
                  0x154 0x30  /* spi0_d0, INPUT_PULLUP | MODE0 */
                  0x158 0x10  /* spi0_d1, OUTPUT_PULLUP | MODE0 */
                  0x15c 0x10  /* spi0_cs0, OUTPUT_PULLUP | MODE0 */
                >;
            };
        };
    };

    fragment@1 {
        target = <&spi0>;
        __overlay__ {
             #address-cells = <1>;
             #size-cells = <0>;

             status = "okay";
             pinctrl-names = "default";
             pinctrl-0 = <&spi0_pins_s0>;

             spidev@0 {
                 spi-max-frequency = <24000000>;
                 reg = <0>;
                 compatible = "linux,spidev";
             };
        };
    };
};
Then compile it with the device tree compiler.

dtc -O dtb -o BB-SPI0-01-00A0.dtbo -b 0 -@ BB-SPI0-01-00A0.dts

And copy it to /lib/firmware.

cp BB-SPI0-01-00A0.dtbo /lib/firmware/

Now enable the device tree overlay.

echo BB-SPI0-01 > /sys/devices/bone_capemgr.*/slots
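If the overlay loaded correctly, a spidev device node should appear. A quick sanity check (the capemgr path and the spidev numbering can differ between kernel versions, so adjust as needed):

```shell
# Confirm the overlay is listed in the cape manager's slots
# and that the SPI device node exists.
cat /sys/devices/bone_capemgr.*/slots
ls -l /dev/spidev1.0
```

If /dev/spidev1.0 is missing, flashrom will have nothing to talk to, so it’s worth verifying this before wiring anything up.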

Finally, set it up permanently by running nano /etc/default/capemgr and adding/changing the CAPE= line:

CAPE=BB-SPI0-01


Getting Flashrom

Download the latest libreboot_util archive and extract it.

tar -xf libreboot_util.tar.xz

Go to the flashrom/armv7l directory and make flashrom executable.

cd libreboot_util/flashrom/armv7l/
chmod +x flashrom

Test Flashrom. (It should output an error, because it’s not connected to an EEPROM.)

./flashrom -p linux_spi:dev=/dev/spidev1.0,spispeed=512


Wiring up

If you’re like me and previously used a Raspberry Pi, you’ll have to make some modifications to your wires. The Raspberry Pi has male headers, while the BeagleBone Black has female headers.

To limit outside interference, it’s best to keep the wires as short as possible.

I had a flatcable for my RPi, so first I cut off some same-coloured wires and soldered them together at a reasonable length. Don’t forget to heat shrink everything.
(Skip this step if you already have F-F wires of reasonable length.)

If you have them, it's really handy to put numbered labels on both ends.

If you have F-F wires, one end will need to get a sex change*. I used some steel wire and soldered it into the female connectors. Then I added more heat shrink tubing.

Just add a drip of tin and let them cool off.

* No GNOME developers were harmed in the process.

They should end up looking something like this. Use a multimeter to make sure the wires conduct properly!

Now we have the BBB wires, it’s time to make PSU wires.

I took these wires from an old PC speaker.

You can use regular indoor electrical wires and add a female header to the brown wire and a male header to the blue wire. These wires do not have to be very short. In fact, it’s easier to keep them a bit longer.

Solder brown to red and blue to black. Then heat shrink as usual.

For the black wire, you’ll have to do another sex change if you used a female connector. The red wire goes on the clip, so leave that one as it is.


Pinouts and power supplies

Add the male wires to the BBB’s P9 header following this diagram:

If you touch it, it gets bigger!

Although the BBB has its own 3.3V rail, it’s considered best practice to use an external power supply. You can easily use a regular ATX or SFX PSU for this.

I'm using an SFX PSU for easier transportation.

All you have to do is hotwire it by shorting the green wire with one of the ground wires. This can be done with any conductive wire (like a paperclip or a hairpin).

Short pins 4 (green) and 5 (black) on the top row.

After that, you can connect the +3.3V pin to one of the orange/brown wires and the GND pin to one of the black wires. I’d recommend using pins 2 and 3.

Only connect the brown wire to one of the +3.3V pins. Nothing else! Yellow, red and/or purple pins will KILL EVERYTHING.

You may want to add some tin and some heat shrink tubing on the ends of the electrical wires to make them clamp into the ATX plug.


A slight fluctuation (<100mV) isn't going to be harmful.


Once you’ve triple checked everything, connect the PSU’s +3.3V pin to the SOIC clip and the PSU’s GND to pin #2 on the BBB.

It ends up looking something like this.

Boot up the BBB and then power up the PSU. If nothing catches fire, it’s time to do a victory dance!

Quick Look: Dell XPS 13 Developer Edition (2015) with Ubuntu 14.04 LTS

Losca | 16:55, Monday, 08 June 2015

I recently obtained Dell's newest Ubuntu developer offering, the XPS 13 (2015, model 9343). I opted for the FullHD non-touch display, mostly because of better battery life, no actual need for a higher resolution, and the matte screen, which is great outdoors. Touch would have been "nice-to-have", but I don't really need it for my work.

The other specifications include an i7-5600U CPU, 8GB RAM, a 256GB SSD [edit: lshw], and of course Ubuntu 14.04 LTS pre-installed as an OEM-specific installation. It was not possible to order it directly from the Dell site, as Finland is reportedly not an online market for Dell... The wholesale company however managed to get two models on their lists, so it's now possible to order via retailers. [edit: here are some country specific direct web order links however US, DE, FR, SE, NL]

In this blog post I take a quick look at how I started using it, and make a few observations about the pre-installed Ubuntu. I was personally interested in using the pre-installed Ubuntu the way a non-Debian/Ubuntu developer would, but Dell has also provided instructions for Ubuntu 15.04, Debian 7.0 and Debian 8.0 for advanced users, among others. Even if you don't use the pre-installed Ubuntu, the benefit of buying an Ubuntu laptop is obviously the smaller cost and, on the other hand, contributing to free software (by paying for the hardware enablement engineering done by or purchased by Dell).


The Black Box. (and white cat)

Opened box.

First time lid opened, no dust here yet!
First time boot up, transitioning from the boot logo to a first time Ubuntu video.
A small clip from the end of the welcoming video.
First time setup. Language, Dell EULA, connecting to WiFi, location, keyboard, user+password.
Creating recovery media. I opted not to do this as I had happened to read that it's highly recommended to install upgrades first, including to this tool.
Finalizing setup.
Ready to log in!
It's alive!
Not so recent 14.04 LTS image... lots of updates.

Problems in the First Batch

Unfortunately the first batch of XPS 13s with Ubuntu is going to ship with some problems. They're easy to fix if you know how, but it's sad that they're in the factory image to begin with. It's not known when a fixed batch will start shipping - July maybe?

First of all, installing software upgrades stalls partway. You need to run the following command via Dash → Terminal once: sudo apt-get install -f (it suggests upgrading libc-dev-bin, libc6-dbg, libc6-dev and udev). After that you can continue running Software Updater as usual, perhaps rebooting in between.

Secondly, the fixed touchpad driver is included but not enabled by default. You need to enable the only non-enabled ”Additional Driver”, as seen in the picture below or as instructed on YouTube.

Dialog enabling the touchpad driver.

Clarification: you can safely ignore the two paragraphs below, they're just for advanced users like me who want to play with upgraded driver stacks.

Optionally, since I'm interested in the latest graphics drivers especially in case of a brand new hardware like Intel Broadwell, I upgraded my Ubuntu to use the 14.04.2 Hardware Enablement stack (matches 14.10 hardware support): sudo apt install --install-recommends libgles2-mesa-lts-utopic libglapi-mesa-lts-utopic linux-generic-lts-utopic xserver-xorg-lts-utopic libgl1-mesa-dri-lts-utopic libegl1-mesa-drivers-lts-utopic libgl1-mesa-glx-lts-utopic:i386

Even though it's much better than a normal Ubuntu 14.10 would be, since many of the Dell fixes continue to be in use, some functionality might become worse compared to the pre-installed stack. The only thing I have noticed, though, is the internal microphone no longer working out of the box, requiring a kernel patch as mentioned in Dell's notes. This is not a surprise, since the real eventual upstream support involves switching from HDA to I2S, and that was nowhere near done during the 14.10 kernel work. If you're excited about new drivers, I'd recommend waiting until August when the 15.04-based 14.04.3 stack is available (same package names, but 'vivid' instead of 'utopic'). [edit: I couldn't resist when I saw linux-generic-lts-vivid (3.19 kernel) is already in the archives. 14.04.2 + that gives me a working microphone again!]


Dell XPS 13 Developer Edition with Ubuntu 14.04 LTS is an extremely capable laptop + OS combination nearing perfection, but not quite there because of the software problems in the launch pre-install image. The laptop looks great, feels like a quality product should and is very compact for the screen size.

I've moved over all my work onto it and everything so far is working smoothly in my day-to-day tasks. I'm staying at Ubuntu 14.04 LTS and using my previous LXC configuration to run the latest Ubuntu and Debian development versions. I've also done some interesting changes already like LUKS In-Place Conversion, converting the pre-installed Ubuntu into whole disk encrypted one (not recommended for the faint hearted, GRUB reconfiguration is a bit of a pain).

I look happily forward to working a few productive years with this one!

Saturday, 06 June 2015

Flashing Libreboot on an X200 with a Raspberry Pi

the_unconventional's blog » English | 15:30, Saturday, 06 June 2015

A couple of weeks ago, someone kindly donated a ThinkPad X200 for me to practice flashing coreboot on. As the X200 has an Intel GPU, it is able to run Libreboot; the 100% Free, pre-compiled downstream of the coreboot project.

Unfortunately, the initial Libreboot flash cannot be done internally, so you have to open the laptop up. It’s not very complicated though. You only have to remove the palm rest, because the flash chip is at the top of the motherboard.

I bought another SOIC clip online (a 16-pin one this time), and I wired up the six required wires to the Raspberry Pi. It’s probably best to get a numbered GPIO cable so you’ll only have to mess about with the wires on one side.

The X200's flash chip is very susceptible to interference, so you might want to make twisted pairs.

The chip’s pins are numbered counter-clockwise starting at the bottom right. So the row near the GPU consists of pins 1 to 8, and the row near the cardbus consists of pins 9 to 16.

Picture courtesy of 'snuffeluffegus'.

The pinout is like this:

2  - 3.3V    - Pi Pin #1
7  - CS#     - Pi Pin #24
8  - S0/SIO1 - Pi Pin #21
10 - GND     - Pi Pin #25
15 - S1/SIO0 - Pi Pin #19
16 - SCLK    - Pi Pin #23


Preparing the ROM

Libreboot’s ROMs are almost ready to flash as-is. The only thing you have to add is the MAC address of your ethernet controller. Libreboot’s documentation explains how this is done.

You will need to download the X200 ROMs (most likely the 8MB versions: 4MB is rare) and libreboot_util from the Libreboot download site and extract both those xz archives in a working directory. For instance, ~/Downloads/libreboot/.
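Unpacking can be wrapped in a tiny helper so both archives land in the same working directory (the helper name is mine, and the archive names in the usage comments are examples; use the files you actually downloaded):

```shell
# Hypothetical helper: extract a tarball into a target directory,
# creating the directory if needed. tar auto-detects the xz compression.
unpack_into() {  # usage: unpack_into <archive.tar.xz> <directory>
    mkdir -p "$2" && tar -xf "$1" -C "$2"
}
# e.g. (archive names are examples):
# unpack_into libreboot_util.tar.xz ~/Downloads/libreboot
# unpack_into libreboot_x200_8mb.tar.xz ~/Downloads/libreboot
```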

To put it as simply as possible, first you choose a ROM based on your keyboard layout.

kevin@vanadium:~/Downloads/libreboot/x200_8mb$ ls

Then you copy that file to the ich9gen directory, rename it to libreboot.rom, and change to that directory.

cp ~/Downloads/libreboot/x200_8mb/x200_8mb_usdvorak_vesafb.rom ~/Downloads/libreboot/libreboot_util/ich9deblob/x86_64/libreboot.rom
cd ~/Downloads/libreboot/libreboot_util/ich9deblob/x86_64/

Now look at the sticker on the bottom of your X200 and write down (or memorize) the MAC address. If you can’t find a sticker there, another one is underneath the RAM.

Create a descriptor with your MAC address by running this command:

./ich9gen --macaddress 00:1F:00:00:AA:AA
(Of course, use your own MAC address instead of this example.)
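A quick format check on the address can catch typos before you generate the descriptor. A minimal sketch (the helper name is mine):

```shell
# Hypothetical helper: succeeds only for a well-formed aa:bb:cc:dd:ee:ff address
is_mac() {
    printf '%s\n' "$1" | grep -Eq '^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$'
}
# usage: is_mac "00:1F:00:00:AA:AA" && echo "looks valid"
```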

Then add that descriptor to the ROM by running this command:

dd if=ich9fdgbe_8m.bin of=libreboot.rom bs=1 count=12k conv=notrunc

Note that you have to use the 4MB version in the rare case that you have a 4MB BIOS chip.
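After the dd step, you can sanity-check that the first 12 KiB of libreboot.rom now match the generated descriptor (dd wrote bs=1 count=12k, i.e. 12288 bytes). A sketch using GNU cmp's byte limit (the helper name is mine):

```shell
# Hypothetical helper: compare only the first 12 KiB, the region dd wrote.
# -s is silent, -n limits the comparison (GNU cmp extension).
descriptor_ok() {  # usage: descriptor_ok <descriptor.bin> <rom>
    cmp -s -n 12288 "$1" "$2"
}
# e.g.: descriptor_ok ich9fdgbe_8m.bin libreboot.rom && echo "descriptor injected OK"
```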

Now put the libreboot.rom image on your Raspberry Pi.

scp libreboot.rom root@raspberry-pi.local:/root/libreboot/
(This is just an example using SFTP. Your situation may be different.)


Setting up the Raspberry Pi

You will have to update the Raspberry Pi's kernel to the latest version (3.18) in order to get the X200's chip to work somewhat reliably over spidev. (But it never really seems to be stable.)

sudo rpi-update

Then, download the Flashrom source code and compile it for armhf.

sudo apt-get install build-essential pciutils usbutils libpci-dev libusb-dev libftdi-dev zlib1g-dev subversion
svn co svn:// flashrom
cd flashrom
make

If you’re not in the mood to compile it yourself on a slow Raspberry Pi, I have precompiled armv6l Flashrom binaries here. See above for obtaining the source code.

Reading the chip took a lot of attempts before I started getting hash sum matches. In fact, I was close to giving up entirely because of the many unexplainable failed attempts.

./flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=512 -c "MX25L6405D" -r romread1.rom
./flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=512 -c "MX25L6405D" -r romread2.rom
./flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=512 -c "MX25L6405D" -r romread3.rom

sha512sum romread*.rom

(All checksums must be identical.)
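The three reads and checksum comparison above can be wrapped in a small helper that only reports success when every dump hashes identically (the function name is mine):

```shell
# Hypothetical helper: succeeds only when all given dumps have the same sha512.
# Counts the number of distinct hashes and requires exactly one.
reads_match() {
    [ "$(sha512sum "$@" | awk '{print $1}' | sort -u | wc -l)" -eq 1 ]
}
# usage: reads_match romread1.rom romread2.rom romread3.rom && echo "All reads identical"
```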

To be honest, I have no idea what exactly I did to get it working eventually. All of a sudden, the checksums started matching three times in a row, even though I was on another floor, using SSH, and hadn’t touched or changed anything.

So after three successful reads, I just crossed my fingers and flashed the ROM.

./flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=512 -c "MX25L6405D" -w ../libreboot.rom

And I bricked the laptop…

Uh oh. Erase/write failed. Checking if anything changed.
Your flash chip is in an unknown state.

I booted it up, and everything remained black.

After crying for three hours straight, I tried flashing once more with spispeed=128, and then it worked. I got a “VERIFIED” at the end and it booted to GRUB.

./flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=128 -c "MX25L6405D" -w ../libreboot.rom


Hello GRUB.

GRUB easily allowed me to load the Debian netinstaller. All that was left to do was to install an Atheros AR9280 WiFi card from my collection, and I now have a working X200 with Libreboot.

Ironically, you need to have a USB drive with a UEFI menu for GRUB to boot from it.

Afterward, I tried some more test reads (spispeed varying between 128 and 1024), and I’d still be getting checksum mismatches at least one out of five times (sometimes more). So I still feel like the Raspberry Pi is quite volatile, and I’m curious whether the BBB is just as ‘hit and miss’.

I gave this X200 to my grandfather, who now uses it as his daily work machine. (He’s an accountant.) There are no hardware issues with Libreboot, other than the undock button on the docking station not working. But I already thought of a fix for that.

It's now docked and running Ubuntu 14.04 LTS with Linux-libre.

In the end, I concluded that flashing the X200 with a Raspberry Pi is quite a chore. It’s possible, but it seems to be dependent on the weather. The failure rate is easily 80%, although you will probably get it to work eventually.

Thankfully, after flashing Libreboot the first time, subsequent updates can be done internally with flashrom -p internal or simply Libreboot’s ./flash script.

Thursday, 04 June 2015

Ubuntu Phone review by a non-geek

Seravo | 06:05, Thursday, 04 June 2015



A few weeks ago I found a pretty black box waiting on my desk at the office. There it was: the BQ Aquaris E4.5, Ubuntu edition. Now available for sale all over Europe, the world’s first Ubuntu phone had arrived into the eager hands of Seravo. (Working in an open office with a bunch of other companies dealing more or less with IT, one can now easily get attention not just by talking on the phone, but about it, too.)

The Ubuntu phone has been in development for a while, and now it has found its first users and can really be reviewed in practice. Can Ubuntu meet the expectations and demands of a modern mobile user? My personal answer, after getting to know my phone and seeing what it can and cannot do, is: yes, but not yet.

But let’s get back to the pretty black box. For a visual (and not-that-technical) person such as myself, the amount of thought put into the design of the Ubuntu phone is very pleasing. The mere layout of the packaging of the box is very nice, both to the eye and from the point of view of usability. The same goes (at least partly) for the phone and its operating system itself: the developers themselves claim that “Ubuntu Phone has been designed with obsessive attention to detail” and that “form follows function throughout”. So it is not only the box that is pretty.



Swiping through the scopes

When getting familiar with the Ubuntu phone, one can simply follow clear instructions to get the most relevant settings in place. A nice surprise was that the system has been translated into Finnish, and into a whole bunch of other languages ranging from Catalan to Uyghur.

The Ubuntu phone tries to minimalize the effort of browsing through several apps, and introduces the scopes. “Ubuntu’s scopes are like individual home screens for different kinds of content, giving you access to everything from movies and music to local services and social media, without having to go through individual apps.” This is a fine idea, and works to a certain point. I myself would have though appreciated an easier way to adjust and modify my scopes, so that they would indeed serve my everyday needs. It is for instance not possible to change the location for the Today section, so my phone still thinks that I’m interested in the weather and events near Helsinki (which is not the case, as my hometown Tampere is lightyears or at least 160 kilometers away from the capital).

Overall, swiping is the thing with the Ubuntu phone. One can swipe from the left, swipe from the right, swipe from the top and the bottom and all through the night, never finding a button to push to get to the home screen. There are no such things as home buttons or home screens. This requires practice until one gets familiar with it.



Designed for the enthusiasts

A friend of mine once said that in order to really succeed, ecological products must be able to compete with non-ecological ones in usability, or at times even beat them in that area. A green product that does not work can never achieve popularity. The same thought can be applied to open source products as well: as the standard is high, the philosophy itself is not enough if the end product fails to do what it should.

With this thought in mind, I was happy to notice that the Ubuntu phone is not only new and exciting, but also quite usable in everyday work. There are, though, bugs, missing features, and some pretty relevant apps absent from the selection. For services like Facebook, Twitter, Google+ or Google Maps, the Ubuntu phone uses web apps. If one is addicted to Instagram or WhatsApp, one should wait before purchasing an Ubuntu phone. Telegram, a nice alternative for instant messaging, is available though, and so is the possibility to view one’s Instagram feed. It also remains a mystery to me what benefits signing in to Ubuntu One brings to the user, except for updates, which are indeed longed for.

To conclude, I would say that at this point the Ubuntu phone is designed for enthusiasts and developers, and should keep evolving to become popular with the masses. The underlying idea of open source should of course be supported, and I expect to see the Ubuntu phone develop in the near future. Hopefully the upcoming updates will fix the most relevant bugs and the app selection will fill the needs of the average mobile phone user.


Read more about the Ubuntu Phone.

Tuesday, 02 June 2015

Count downs: T -10 hours, -12 days, -30 days, -95 days

Mario Fux | 21:10, Tuesday, 02 June 2015

I’ve wanted to write this blog post for quite some time, and now that one of the fundraisers I’d like to mention is almost over, I finally took the time to write it:

So the first fundraiser I’d like to write about is the Make Krita faster than Photoshop Kickstarter campaign. It’s almost over and is already a success, but that doesn’t mean you can’t still become a supporter of this awesome painting application. And in case you haven’t seen it, there was a series of interviews with Krita users (and thus users of KDE software) that you should read, at least in part.

The second crowd funding campaign I’d like to mention is about the board game Heldentaufe. It’s a bit of a family thing, as this campaign (and thus the board game) is mostly the work of a brother-in-law of mine. He worked on this project for several years; it started as his master’s thesis. And I must say it looks really nice (I don’t know if the French artist used Krita as well) and is “simple to learn, but difficult to master”. So if you like board games, go and support it.

And the third fundraiser I’d like to talk about is from our friends at Kolab. They plan to refactor and improve one of the most successful pieces of webmail software. And as everybody here should be aware of how important email is, I hope that every reader of this blog post will go to their Indiegogo page and give at least $10.

So some of you might now ask: what about the -95 days? In 95 days, the 6th edition of the Randa Meetings will start. As I’m sure it will again be a very successful edition, and a lot of people want to come to Randa and work there as hard as they can, we want to help them by sponsoring their travel costs, so we plan another fundraiser for this and other KDE sprints in general. If you would like to help us, don’t hesitate to write me an email (fux AT kde org) or ping me on IRC.

UPDATE: As the first comment mentions, the Heldentaufe Kickstarter was cancelled this morning; you can read about the reason in the latest update. But I’m optimistic that there will be a second fundraising campaign in the future, and if you’re interested in it, don’t hesitate to write me an email and I’ll ping you when the new campaign starts.


Monday, 01 June 2015

Free Software in Education Keynote at DORS/CLUC

Being Fellow #952 of FSFE » English | 15:22, Monday, 01 June 2015

I haven’t kept you informed about recent education news, as I was busy preparing talks on the same subject. The latest was at DORS/CLUC in Zagreb, a traditional Free Software event that celebrated its 22nd edition! Wow, the only computing device I owned back when the first DORS/CLUC came to light was a calculator.

I put the slides of the talk here and I hope that the video recording will be online soon as well.

Guido at DORS/CLUC

me at DORS/CLUC (by Sam Tuke)

I was happy to meet former FSFE employee Sam Tuke again, who is now saving the world at Collabora and the Document Foundation. His keynote about the progress of LibreOffice was pretty impressive.

I also had the pleasure to meet Gijs Hillenius again. I wouldn’t be aware of most of the news I usually write about without him. Unfortunately, I couldn’t attend his keynote as I had to leave on the same day.

I talked to people from an education project in Croatia. I attended their talk even though I don’t understand Croatian, but it was enough to establish a first contact, and we will stay in touch.

There is not much left to say but thank you to Svebor from HrOpen and the rest of the crew, who managed an awesome event! The only downside of the trip to Zagreb was that it was so short! But I think I got the most out of it, as I was even able to get into the sea during my 2-hour stopover in Split, and instead of waiting 10 minutes for a cab at the venue, I took a little walk through Zagreb to the main station and had a glance at this beautiful city.  :)

I was told how nice Croatians are before I went, and I can now confirm that this is true. I’m looking forward to the next DORS/CLUC!

flattr this!
