Planet Fellowship (en)

Thursday, 01 December 2016

Using a fully free OS for devices in the home - fsfe | 13:11, Thursday, 01 December 2016

There are more and more devices around the home (and in many small offices) running a GNU/Linux-based firmware. Consider routers, entry-level NAS appliances, smart phones and home entertainment boxes.

More and more people are coming to realize that there is a lack of security updates for these devices and a big risk that the proprietary parts of the code are either very badly engineered (if you don't plan to release your code, why code it properly?) or deliberately include spyware that calls home to the vendor, ISP or other third parties. IoT botnet incidents, which are becoming more widely publicized, emphasize some of these risks.

On top of this is the frustration of trying to become familiar with numerous different web interfaces (for your own devices and those of any friends and family members you give assistance to) and the fact that many of these devices have very limited feature sets.

Many people hail OpenWRT as an example of a free alternative (for routers), but I recently discovered that OpenWRT's web interface won't let me enable both DHCP and DHCPv6 concurrently. The underlying OS and utilities fully support dual stack, but the UI designers haven't encountered that configuration before. Conclusion: move to a device running a full OS, probably Debian-based, but I would consider BSD-based solutions too.
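For illustration, here is a minimal sketch of what concurrent DHCPv4 and DHCPv6 service can look like on a Debian-based router using dnsmasq. The interface name and address ranges are assumptions for the example, not a tested configuration:

```
# /etc/dnsmasq.d/lan.conf - hypothetical dual-stack setup
interface=br-lan                                # assumed LAN bridge interface
dhcp-range=192.168.1.50,192.168.1.150,12h       # DHCPv4 pool
dhcp-range=::100,::1ff,constructor:br-lan,12h   # DHCPv6 pool built from the interface's prefix
enable-ra                                       # send router advertisements alongside DHCPv6
```

The point is that a full OS lets you express configurations like this directly, rather than being limited to whatever combinations a vendor's web UI happens to support.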

For many people, the benefit of this strategy is simple: use the same skills across all the different devices, at home and in a professional capacity. Get rapid access to security updates. Install extra packages or enable extra features if really necessary. For example, I already use Shorewall and strongSwan on various Debian boxes and I find it more convenient to configure firewall zones using Shorewall syntax rather than OpenWRT's UI.
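As a small sketch of the Shorewall syntax referred to above, here is what minimal zone and policy files might look like. The zone names and policies are illustrative assumptions, not a recommended ruleset:

```
# /etc/shorewall/zones - illustrative zone definitions
#ZONE   TYPE
fw      firewall
net     ipv4
loc     ipv4

# /etc/shorewall/policy - default-deny from the Internet
#SOURCE  DEST  POLICY
loc      net   ACCEPT
net      all   DROP
fw       all   ACCEPT
```

Compared with clicking through a vendor UI, the same plain-text files can be reused and version-controlled across every box in the house.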

Which boxes to start with?

There are various considerations when going down this path:

  • Start with existing hardware, or buy new devices that are easier to re-flash? Sometimes there are other reasons to buy new hardware, for example, when upgrading a broadband connection to Gigabit or when an older NAS gets a noisy fan or struggles with SSD performance and in these cases, the decision about what to buy can be limited to those devices that are optimal for replacing the OS.
  • How will the device be supported? Can other non-technical users do troubleshooting? If mixing and matching components, how will faults be identified? If buying a purpose-built NAS box and the CPU board fails, will the vendor provide next day replacement, or could it be gone for a month? Is it better to use generic components that you can replace yourself?
  • Is a completely silent/fanless solution necessary?
  • Is it possible to completely avoid embedded microcode and firmware?
  • How many other free software developers are using the same box, or will you be first?

Discussing these options

I recently started threads on the debian-user mailing list discussing options for routers and home NAS boxes. A range of interesting suggestions has already appeared; it would be great to see any other ideas that people have about these choices.

Monday, 28 November 2016

Freed-Ora 25 released

Marcus's Blog | 11:13, Monday, 28 November 2016

Freed-Ora is a libre version of Fedora GNU/Linux which comes with the Linux-libre kernel and the GNU IceCat browser.
The procedure for creating live media has changed within Fedora, and the images have been built using livemedia-creator instead of livecd-creator. The documentation on how to build the image can be found here.

Please download the Freed-Ora 25 ISO and give feedback.

Friday, 25 November 2016

vmdebootstrap Sprint Report

Iain R. Learmonth | 12:06, Friday, 25 November 2016

This is now a little overdue, but here it is. On the 10th and 11th of November, the second vmdebootstrap sprint took place. Lars Wirzenius (liw), Ana Custura (ana_c) and I were present. liw focussed on the core of vmdebootstrap, where he sketched out what the future of vmdebootstrap may look like. He documented this in a mailing list post and also presented it (video).

Ana and I worked on live-wrapper, which uses vmdebootstrap internally for the squashfs generation. I worked on improving logging, using a better method for getting paths within the image, enabling generation of Packages and Release files for the image archive, and making the images installable (live-wrapper 0.5 onwards will include an installer by default).

Ana worked on the inclusion of HDT and memtest86+ in the live images and enabled both ISOLINUX (for BIOS boot) and GRUB (for EFI boot) to boot the text-mode and graphical installers.

live-wrapper 0.5 was released on 16 November with these fixes included. You can find live-wrapper documentation at (the documentation still needs some work; some options may be incorrectly described).

Thanks to the sponsors that made this work possible. You’re awesome. (:

Monday, 21 November 2016

21 November 1916 - fsfe | 18:31, Monday, 21 November 2016

There has been a lot of news recently about the 100th anniversaries of various events that took place during the Great War.

On 21 November 1916, the SS Hunscraft sailed from Southampton to France. My great grandfather, Robert Pocock, was aboard.

He was part of the Australian Imperial Force, 3rd Divisional Train.

It's sad that Australians had to travel half way around the world to break up fist fights and tank battles. Sadder still that some people who romanticize the mistakes of imperialism are being appointed to significant positions of power.

Fortunately my great grandfather returned to Australia in one piece; many Australians didn't.

Robert Pocock's war medals

Sunday, 20 November 2016

On Not Liking Computers

Paul Boddie's Free Software-related blog » English | 23:57, Sunday, 20 November 2016

Adam Williamson recently wrote about how he no longer really likes computers. This attracted many responses from people who misunderstood him and decided to dispense career advice, including doses of the usual material about “following one’s passion” or “changing one’s direction” (which usually involves becoming some kind of “global nomad”), which do make me wonder how some of these people actually pay their bills. Do they have a wealthy spouse or wealthy parents or “an inheritance”, or do they just do lucrative contracting for random entities whose nature or identities remain deliberately obscure to avoid thinking about where the money for those jobs really comes from? Particularly the latter would be the “global nomad” way, as far as I can tell.

But anyway, Adam appears to like his job: it’s just that he isn’t interested in technological pursuits outside working hours. At some level, I think we can all sympathise with that. For those of us who have similarly pessimistic views about computing, it’s worth presenting a list of reasons why we might not be so enthusiastic about technology any more, particularly for those of us who also care about the ethical dimensions, not merely whether the technology itself is “any good” or whether it provides a sufficient intellectual challenge. By the way, this is my own list: I don’t know Adam from, well, Adam!

Lack of Actual Progress

One may be getting older and noticing that the same technological problems keep occurring again and again, never getting resolved, while seeing people with no sense of history provoke change for change’s – not progress’s – sake. After a while, or when one gets to a certain age, one expects technology to just work and that people might have figured out how to get things to communicate with each other, or whatever, by building on what went before. But then it usually seems to be the case that some boy genius or other wanted a clear run at solving such problems from scratch, developing lots of flashy features but not the mundane reliability that everybody really wanted.

People then get told that such “advanced” technology is necessarily complicated. Whereas once upon a time, you could pick up a telephone, dial a number, have someone answer, and conduct a half-decent conversation, now you have to make sure that the equipment is all connected up properly, that all the configurations are correct, that the Internet provider isn’t short-changing you or trying to suppress your network traffic. And then you might dial and not get through, or you might have the call mysteriously cut out, or the audio quality might be like interviewing a gang of squabbling squirrels speaking from the bottom of a dustbin/trashcan.

Depreciating Qualifications

One may be seeing a profession that requires a fair amount of educational investment – which, thanks to inept/corrupt politicians, also means a fair amount of financial investment – become devalued to the point that its practitioners are regarded as interchangeable commodities who can be coerced into working for as little as possible. So much for the “knowledge economy” when its practitioners risk ending up earning less than people doing so-called “menial” work who didn’t need to go through a thorough higher education or keep up an ongoing process of self-improvement to remain “relevant”. (Not that there’s anything wrong with “menial” work: without people doing unfashionable jobs, everything would grind to a halt very quickly, whereas quite a few things I’ve done might as well not exist, so little difference they made to anything.)

Now we get told that programming really will be the domain of “artificial intelligence” this time around. That instead of humans writing code, “high priests” will merely direct computers to write the software they need. Of course, such stuff sounds great in Wired magazine and rather amusing to anyone with any actual experience of software projects. Unfortunately, politicians (and other “thought leaders”) read such things one day and then slash away at budgets the next. And in a decade’s time, we’ll be suffering the same “debate” about a lack of “engineering talent” with the same “insights” from the usual gaggle of patent lobbyists and vested interests.

Neoliberal Fantasy Economics

One may have encountered the “internship” culture where as many people as possible try to get programmers and others in the industry to work for nothing, making them feel as if they need to do so in order to prove their worth for a hypothetical employment position or to demonstrate that they are truly committed to some corporate-aligned goal. One reads or hears people advocating involvement in “open source” not to uphold the four freedoms (to use, share, modify and distribute software), but instead to persuade others to “get on the radar” of an employer whose code has been licensed as Free Software (or something pretending to be so) largely to get people to work for them for free.

Now, I do like the idea of employers getting to know potential employees by interacting in a Free Software project, but it should really only occur when the potential employee is already doing something they want to do because it interests them and is in their interests. And no-one should be persuaded into doing work for free on the vague understanding that they might get hired for doing so.

The Expendable Volunteers

One may have seen the exploitation of volunteer effort where people are made to feel that they should “step up” for the benefit of something they believe in, often requiring volunteers to sacrifice their own time and money to do such free work, and often seeing those volunteers being encouraged to give money directly to the cause, as if all their other efforts were not substantial contributions in themselves. While striving to make a difference around the edges of their own lives, volunteers are often working in opposition to well-resourced organisations whose employees have the luxury of countering such volunteer efforts on a full-time basis and with a nice salary. Those people can go home in the evenings and at weekends and tune it all out if they want to.

No wonder volunteers burn out or decide that they just don’t have time or aren’t sufficiently motivated any more. The sad thing is that some organisations ignore this phenomenon because there are plenty of new volunteers wanting to “get active” and “be visible”, perhaps as a way of marketing themselves. Then again, some communities are content to alienate existing users if they can instead attract the mythical “10x” influx of new users to take their place, so we shouldn’t really be surprised, I suppose.

Blame the Powerless

One may be exposed to the culture that if you care about injustices or wrongs then bad or unfortunate situations are your responsibility even if you had nothing to do with their creation. This culture pervades society and allows the powerful to do what they like, to then make everyone else feel bad about the consequences, and to virtually force people to just accept the results if they don’t have the energy at the end of a busy day to do the legwork of bringing people to account.

So, those of us with any kind of conscience at all might already be supporting people trying to do the right thing like helping others, holding people to account, protecting the vulnerable, and so on. But at the same time, we aren’t short of people – particularly in the media and in politics – telling us how bad things are, with an air of expectation that we might take responsibility for something supposedly done on our behalf that has had grave consequences. (The invasion and bombing of foreign lands is one depressingly recurring example.) Sadly, the feeling of powerlessness many people have, as the powerful go round doing what they like regardless, is exploited by the usual cynical “divide and rule” tactics of other powerful people who merely see the opportunities in the misuse of power and the misery it causes. And so, selfishness and tribalism proliferate, demotivating anyone wanting the world to become a better place.

Reversal of Liberties

One may have had the realisation that technology is no longer merely about creating opportunities or making things easier, but is increasingly about controlling and monitoring people and making things complicated and difficult. That sustainability is sacrificed so that companies can cultivate recurring and rich profit opportunities by making people dependent on obsolete products that must be replaced regularly. And that technology exacerbates societal ills rather than helping to eradicate them.

We have the modern Web whose average site wants to “dial out” to a cast of recurring players – tracking sites, content distribution networks (providing advertising more often than not), font resources, image resources, script resources – all of which contribute to making the “signal-to-noise” ratio of the delivered content smaller and smaller all the time. Where everything has to maintain a channel of communication to random servers to constantly update them about what the user is doing, where they spent most of their time, what they looked at and what they clicked on. All of this requiring hundreds of megabytes of program code and data, burning up CPU time, wasting energy, making computers slow and steadily obsolete, forcing people to throw things away and to buy more things to throw away soon enough.

We have the “app” ecosystem experience, with restrictions on access, competition and interoperability, with arbitrarily-curated content: the walled gardens that the likes of Apple and Microsoft failed to impose on everybody at the dawn of the “consumer Internet” but do so now under the pretences of convenience and safety. We have social networking empires that serve fake news to each person’s little echo chamber, whipping up bubbles of hate and distracting people from what is really going on in the world and what should really matter. We have “cloud” services that often offer mediocre user experiences but which offer access from “any device”, with users opting in to both the convenience of being able to get their messages or files from their phone and the surveillance built into such services for commercial and governmental exploitation.

We have planned obsolescence designed into software and hardware, with customers obliged to buy new products to keep doing the things they want to do with those products and to keep it a relatively secure experience. And we have dodgy batteries sealed into devices, with the obligation apparently falling on the customers themselves to look after their own safety and – when the product fails – the impact of that product on the environment. By burdening the hapless user of technology with so many caveats that their life becomes dominated by them, those things become a form of tyranny, too.

Finding Meaning

Many people need to find meaning in their work and to feel that their work aligns with their own priorities. Some people might be able to do work that is unchallenging or uninteresting and then pursue their interests and goals in their own time, but this may be discouraging and demotivating over the longer term. When people’s work is not orthogonal to their own beliefs and interests but instead actively undermines them, the result is counterproductive and even damaging to those beliefs and interests and to others who share them.

For example, developing proprietary software or services in a full-time job, although potentially intellectually challenging, is likely to undermine any realistic level of commitment in one’s own free time to Free Software that does the same thing. Some people may prioritise a stimulating job over the things they believe in, feeling that their work still benefits others in a different way. Others may feel that they are betraying Free Software users by making people reliant on proprietary software and causing interoperability problems when those proprietary software users start assuming that everything should revolve around them, their tools, their data, and their expectations.

Although Adam wasn’t framing this shift in perspectives in terms of his job or career, it might have an impact on some people in that regard. I sometimes think of the interactions between my personal priorities and my career. Indeed, the way that Adam can seemingly stash his technological pursuits within the confines of his day job, while leaving the rest of his time for other things, was some kind of vision that I once had for studying and practising computer science. I think he is rather lucky in that his employer’s interests and his own are aligned sufficiently for him to be able to consider his workplace a venue for furthering those interests, doing so sufficiently to not need to try and make up the difference at home.

We live in an era of computational abundance and yet so much of that abundance is applied ineffectively and inappropriately. I wish I had a concise solution to the complicated equation involving technology and its effects on our quality of life, if not for the application of technology in society in general, then at least for individuals, and not least for myself. Maybe a future article needs to consider what we should expect from technology, as its application spreads ever wider, such that the technology we use and experience upholds our rights and expectations as human beings instead of undermining and marginalising them.

It’s not hard to see how even those who were once enthusiastic about computers can end up resenting them and disliking what they have become.

Friday, 18 November 2016

Localizing our noCloud slogan

English Planet – Dreierlei | 15:12, Friday, 18 November 2016

there is noCloud

At the FSFE we have been asked many times to come up with translations of our popular “There is no CLOUD, just other people’s computers” slogan. This week we started the localization by asking our translator team, and were very surprised to see them come up with translations in 16 different languages right away.

In addition, our current trainee Olga Gkotsopoulou asked her international network, and we asked on Twitter for additional translations. And, what can I say? Crowdsourcing has seldom felt so rewarding. Within two hours we got 8 more translations, and after 24 hours we already had 30.

The speed with which we received so many translations shows that the slogan really is in tune with the times. People are happy to translate it because they love sending this message out. At the time of writing we have 36 translations and two dialects on our wiki page:

[AR] لا يوجد غيم, هناك أخرين كمبيوتر
[BR] N’eus Cloud ebet. Urzhiataerioù tud all nemetken.
[CAT] No hi ha cap núvol, només ordinadors d’altres persones.
[DA] Der findes ingen sky, kun andre menneskers computere.
[DE] Es gibt keine Cloud, nur die Computer anderer Leute.
[EL] Δεν υπάρχει Cloud, μόνο υπολογιστές άλλων.
[EU] Ez dago lainorik, beste pertsona batzuen ordenagailuak baino ez.
[EO] Nubon ne ekzistas sed fremdaj komputiloj.
[ES] La nube no existe, son ordenadores de otras personas.
[ET] Pole mingit pilve, on vaid teiste inimeste arvutid.
[FA] فضای ابری در کار نیست، تنها رایانه های دیگران
[FR] Il n’y a pas de cloud, juste l’ordinateur d’un autre.
[FI] Ei ole pilveä, vain toisten ihmisten tietokoneita.
[GL] A nube non existe, só son ordenadores doutras persoas.
[GA] Níl aon néal ann, níl ann ach ríomhairí daoine eile.
[HE] אין ענן, רק מחשבים של אנשים אחרים
[HY] Չկա ամպ, կա պարզապես այլ մարդկանց համակարգիչներ
[IT] Il cloud non esiste, sono solo i computer di qualcun altro
[JP] クラウドはありません。 他の人のコンピュータだけがあります。
[KA] არ არის საწყობი ,მხოლოდ ბევრი ეტიკეტებია ხალხში სხვადასხვა ენებზე
[KL] Una pujoqanngilaq – qarasaasiat allat kisimik
[KO] 구름은 없다. 다른 사람의 컴퓨터일뿐.
[LB] Et gëtt keng Cloud, just anere Leit hier Computeren.
[NL] De cloud bestaat niet, alleen computers van anderen
[TR] Bulut diye bir şey yok, sadece başkalarının bilgisayarları var.
[TL] walang ulap kundi mga kompyuter ng ibang tao
[PL] Nie ma chmury, są tylko komputery innych.
[PT] Não há nuvem nenhuma, há apenas computadores de outras pessoas.
[RO] Nu există nici un nor, doar calculatoarele altor oameni.
[RS] Ne postoji Cloud, već samo računari drugih ljudi.
[RU] Облака нет, есть чужие компьютеры.
[SQ] S’ka cloud, thjesht kompjutera personash të tjerë.
[SV] Det finns inget moln, bara andra människors datorer.
[UR] کلاوڈ سرور کچھ نہیی، بس کسی اور کاکمپیوٹر۔
[VI] Không có Đám mây, chỉ có những chiếc máy tính của kẻ khác.
[ZH] 没有云,只有人们的电脑.

And again: if your language or dialect is missing, add it to the wiki, leave it as a comment, or write me a message and I will add it.

Tuesday, 15 November 2016

There is no Free Software company - But!

Matthias Kirschner's Web log - fsfe | 09:22, Tuesday, 15 November 2016

Since the start of the FSFE 15 years ago, the people involved have been certain that companies are crucial to reaching our goal of software freedom. For many years we have explained to companies – IT as well as non-IT – what benefits they gain from Free Software. We encourage individuals and companies to pay for Free Software, just as we encourage companies to use Free Software in their offers.

A factory building

While more people demanded Free Software, we also saw more companies claiming something is Free Software or Open Source Software although it is not. This behaviour – also called "openwashing" – is not unique to Free Software: some companies also claim something is "organic" or "fair trade" although it is not. As the attempts to register a trademark for "Open Source" failed, it is difficult to legally prevent companies from calling something "Free Software" or "Open Source Software" even though it complies with neither the Free Software definition by the Free Software Foundation nor the Open Source definition by the Open Source Initiative.

When the FSFE was founded in 2001 there was already the idea to encourage and support companies making money with Free Software by starting a "GNU business network". One of the stumbling blocks was always the definition of a Free Software company. It cannot just be about the use of Free Software or contributions to Free Software; it also needs to include what rights a company offers its customers. Another factor was whether the revenue stream is tied to proprietary licensing conditions. Would we also allow a small revenue from proprietary software, and how large could that share be while still considering it a Free Software company?

It turned out to be a very complicated issue, and although we discussed it regularly, we did not find a way to approach the problems in defining a Free Software company.

During our last meeting of the FSFE's General Assembly – triggered by our new member Mirko Böhm – we came to the conclusion that there was a flaw in our thinking and that it does not make sense to think about "Free Software companies". In hindsight it might look obvious, but for me the discussion was an eye opener, and I have the feeling that was a huge step for software freedom.

As a side note: when we have the official general assembly of the FSFE, we always use the opportunity to have more discussions during the days before or after. Sometimes these focus on internal topics or organisational changes, but often there is brainstorming about the "hot topics of software freedom" and where the FSFE has to engage in the long run. At this year's meeting, from 7 to 9 October, inspired by Georg Greve's and Nicolas Dietrich's input, we spent the whole Saturday thinking about the long-term challenges for software freedom with a focus on the private sector.

We talked about the challenges to software freedom presented by economies of scale, network effects, investment preferences, and users making convenience- and price-based decisions over values – even when they declare themselves value-conscious.

One problem preventing a wider spread of software freedom identified there was that Free Software is being undermined by companies that abuse the positive brand recognition of Free Software / Open Source by "openwashing" themselves. Sometimes they offer products that do not even have a Free Software version. This penalises companies and groups that aim to work within the principles of Free Software and damages the recognition of Free Software / Open Source in the market. The consequence is reduced confidence in Free Software, fewer developers working on it, fewer companies providing it, and less Free Software being written in favour of proprietary models.

In the discussion, one question kept arising. Is an activity that is good for Free Software more valuable when it is the sole activity of a small company than when the same thing is done as part of a larger enterprise? We all agree that a small company which has used and distributed exclusively Free Software for many years, where no part of the software it wrote or included was ever non-free, is doing good. But what happens if that small, focused company is purchased by a larger entity? Does that invalidate the benefit of what is being done?

We concluded that good action remains good action, and that the FSFE should encourage good actions. So instead of focusing on the company as such, we should focus on the activity itself; we should think about "Free Software business activities", "Free Software business offers", and the like. My feeling was that this was the moment the penny dropped, when others and I realised the flaw in our previous thinking. We need action-oriented approaches and we need to look at activities individually.

There was still the question of where to draw the line between acceptable or useful activities and harmful ones. This is not a black-and-white issue, and when assessing the impact on software freedom there are different levels. For example, if you evaluate a sharing platform, you might find that the core is Free Software but the sharing module itself is proprietary. That is a bad offer if you want to run a competing sharing platform using Free Software.

The counter example of an acceptable offer was a collaboration software that was useful and complete, but where connecting a proprietary client would itself require a proprietary connector. It was also discussed that sometimes you need to interface with proprietary systems through proprietary libraries that do not allow connecting with Free Software unless one were to first replace the entire API/library itself.

Ultimately a consensus emerged around a focus on the four freedoms of Free Software in relation to the question of whether the software is sufficiently complete and useful to run a competing business.

One thought was to run "test cases" to evaluate how good an offer is on the Free Software scale – something like a regular bulletin about best and worst practice. We could look at a business activity, study it according to the criteria below, evaluate it, and make that evaluation and its conclusions public. That way we can help build customer awareness of software freedom. Here is a first idea for a scale:

  • EXCELLENT: Free Software only and on all levels, no exceptions.

  • GOOD: Free Software as a complete, useful, and fully supportable product. Support available for Free Software version.

  • ACCEPTABLE: Proprietary interfaces to proprietary systems and applications, especially complex systems that require complex APIs/libraries/SDKs, as long as the above is still met.

  • BAD: Essential or important functionality only available in proprietary form; critical functionality missing from the Free Software version (one example of essential functionality was an LDAP connector).

  • EVIL: Fully proprietary, but claiming to be Free Software / Open Source Software.

Now I would like to know from you: what is your first reaction to this? Would you like to add something? Do you have ideas about what should be included in a checklist for such a test? Would you be interested in helping us evaluate how good some offers are on such a scale?

To summarise, I believe it was a mistake to think about businesses as a whole, and that if we want to take the next big steps we should think about Free Software business offers and activities – at least until we have a better name for what I described above. We should help companies avoid being deluded by people merely claiming something is Free Software, and give them the tools to check for themselves.

PS: Thank you very much to the participants at the FSFE meeting, especially Georg Greve for pushing this topic and internally summarising our discussion, and Mirko Böhm, whose contribution was the trigger for realising the flaw in our previous thinking.

Sunday, 13 November 2016

Build FSFE websites locally

English – Max's weblog | 23:00, Sunday, 13 November 2016

Those who create, edit, and translate FSFE websites already know that the source files are XHTML files which are built with an XSLT processor, including a lot of custom additions. One of the huge advantages of this approach is that we don’t have to rely on dynamic website processors and databases; on the other hand, there are a few drawbacks as well: websites need a few minutes to be generated by the central build system, and it’s quite easy to mess up the XML syntax. So if an editor wants to create or edit a page, she currently has to wait a few minutes for the build system to finish every time she wants to test what the website looks like. In this guide I will show how to build single websites on your own computer in a fraction of the FSFE system’s build time, so you only need to commit your changes once the file looks the way you want. All you need is a bit of hard disk space and about an hour to set everything up.

The whole idea is based on what the FSFE’s webmaster Paul Hänsch has coded and written. On his blog he explains the new build script, and he also explains how to build files locally. However, this guide aims to be a bit easier to follow and more verbose.

Before we get started, let me briefly explain the concept of what we’ll be doing. Basically, we’ll have three directories: trunk, status, and an output directory. Most likely you already have trunk; it’s a clone of the FSFE’s main SVN repository and the source of all operations. All the files in there have to be compiled to generate the final HTML files we can browse. These finished files end up in the output directory. status, the third directory, contains error messages and temporary files.

After we have (1) created these directories, partly by downloading a repository with some useful scripts and configuration files, we’ll (2) build the whole FSFE website on our own computer. In the next step, we’ll (3) set up a local webserver so you can actually browse these files. And lastly we’ll (4) set up a small script which you can use to quickly build single XHTML files. Last but not least, I’ll give some real-world examples.

1. Clone helper repository

Firstly, clone a git repository which will give you most of the files and directories needed for the further steps. I created it; it contains configuration files and the script that will make building single files easier. Of course, you can also do everything manually.

In general, this is the directory structure I propose. In the following I’ll stick to this scheme. Please adapt all changes if your folder tree looks differently.

trunk (~700 MB):      ~/subversion/fsfe/fsfe-web/trunk/
status (~150 MB):     ~/subversion/fsfe/local-build/status/
output (~1000 MB):    ~/subversion/fsfe/local-build/

(For those not so familiar with the GNU/Linux terminal: ~ is the short version of your home directory, so for example /home/user. ~/subversion is the same as /home/USER/subversion, given that your username is USER)
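You can verify this equivalence in a terminal; note that the shell only expands ~ when it is unquoted, which is a common pitfall when copying paths into scripts:

```shell
# ~ and $HOME refer to the same directory:
echo ~          # e.g. /home/USER
echo "$HOME"    # same directory
# Inside double quotes, ~ is NOT expanded:
echo "~/subversion"   # prints the literal string ~/subversion
```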

To continue, you have to have git installed on your computer (sudo apt-get install git). Then, please execute the following command in a terminal. It will copy the files from my git repository to your computer; the repository already contains the folders status and the output directory:

git clone ~/subversion/fsfe/local-build

Now we take care of trunk. In case you already have a copy of trunk on your computer, you can use that location, but please do an svn up beforehand and make sure that the output of svn status is empty (i.e. no new or modified files on your side). If you don’t have trunk yet, download the repository to the proposed location:

svn --username $YourFSFEUsername co ~/subversion/fsfe/fsfe-web/trunk

2. Build full website

Now we have to build the whole FSFE website locally. This will take a while, but we only have to do it once. Later, you’ll just build single files instead of the more than 14,000 we’re building now.

But first, we have to install a few applications which are needed by the build script (Warning: it’s possible your system lacks some other required applications which were already installed on mine. If you encounter any command not found errors, please report them in the comments or by mail). So let’s install them via the terminal:

sudo apt-get install make libxslt

Note: libxslt may have a different name in your distribution, e.g. libxslt1.1 or libxslt2.

Now we can start building. The full website build can be started with

~/subversion/fsfe/fsfe-web/trunk/build/ --statusdir ~/subversion/fsfe/local-build/status/ build_into ~/subversion/fsfe/local-build/

See? We use the build routine from trunk to launch the build of trunk. All status messages are written to status, and the final website will reside in the output directory. Mind differing directory names if you have another structure than I do. This process will take a long time, depending on your CPU power. Don’t be afraid of strange messages and massive walls of text ;-)

After the long process has finished, navigate to the trunk directory and execute svn status. You may see a few files which are new:

max@bistromath ~/s/f/f/trunk> svn status
?       about/printable/archive/printable.en.xml
?       d_day.en.xml
?       d_month.en.xml
?       d_year.en.xml
?       localmenuinfo.en.xml

These are leftovers from the full website build. Because trunk is supposed to be your productive source directory from which you also make commits to the FSFE SVN, let’s delete these files. You won’t need them anymore.

rm about/printable/archive/printable.en.xml d_day.en.xml d_month.en.xml d_year.en.xml localmenuinfo.en.xml
rm tools/tagmaps/*.map

Afterwards, the output of svn status should be empty again. It is? Fine, let’s go on! If not, please also remove the remaining files (and tell me which additional files you had to remove).

3. Set up local webserver

After the full build is completed, you can install a local webserver. This is necessary to actually display the locally built files in your browser. In this example, I assume you don’t already have a webserver installed, and that you’re using a Debian-based operating system. So let’s install lighttpd which is a thin and fast webserver, plus gamin which lighttpd needs in some setups:

sudo apt-get install lighttpd gamin

To make lighttpd run properly we need a configuration file. It has to point the webserver at the files in the output directory. You already downloaded my recommended config file (lighttpd-fsfe.conf.sample) by cloning the git repository, but you’ll have to rename it and adapt one path. So rename the file to lighttpd-fsfe.conf, open it, and change the following line to match the actual and absolute path of the output directory (~ does not work here):

server.document-root = "/home/USER/subversion/fsfe/local-build/"
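The sample config also sets the listening port; this is where the port 5080 in the URLs used later comes from (a sketch of the relevant directive, assuming the rest of lighttpd-fsfe.conf.sample stays at its defaults):

```
server.port = 5080
```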

Now you can test whether the webserver is correctly configured. To start a temporary webserver process, execute the next command in the terminal:

lighttpd -Df ~/subversion/fsfe/local-build/lighttpd-fsfe.conf

Until you press Ctrl+C, you should be able to open your local FSFE website in any browser using the URL http://localhost:5080. For example, open http://localhost:5080/contribute/contribute.en.html in your browser. You should see basically the same page as on the original website. If not, double-check the paths, that the lighttpd process is still running, and that the full website build has actually finished.

4. Single page build script

Until now, you haven’t seen much more than you can see on the original website. But in this step, we’ll configure and start using a Bash script I’ve written to make previewing a locally edited XHTML file as comfortable as possible. You already downloaded it by cloning the repository.

First, rename and edit the script’s configuration file config.cfg.sample: rename it to config.cfg and open it. The file contains all the paths we have used here, so please adapt them to your structure if necessary. Normally, it should be sufficient to modify the values for LOC_trunk (the trunk directory) and LOC_out (the output directory); the rest can be left at the default values.
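As an illustration, a config.cfg matching the directory layout proposed above might look like this (the variable names LOC_trunk and LOC_out are from the sample file; the exact contents of your config.cfg.sample may differ):

```shell
# config.cfg — adapt these paths to your own layout
LOC_trunk="$HOME/subversion/fsfe/fsfe-web/trunk"
LOC_out="$HOME/subversion/fsfe/local-build"
```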

Another feature of fsfe-preview is that it automatically checks the XML syntax of the files. For this, libxml2-utils, which contains xmllint, has to be installed. Please execute:

sudo apt-get install libxml2-utils

Now let’s make the script easy to access via the terminal for future usage. For this, we’ll create a short link to the script from one of the binary path directories. Type in the terminal:

sudo ln -s ~/subversion/fsfe/local-build/ /usr/bin/fsfe-preview

From this moment on, you should be able to call fsfe-preview from anywhere in your terminal. Let’s make a test run. Modify the XHTML source file contribute/contribute.en.xhtml: edit some obvious text or alter the title. Now run:

fsfe-preview ~/subversion/fsfe/fsfe-web/trunk/contribute/contribute.en.xhtml

As output, you should see something like:

[INFO] Using file /home/max/subversion/fsfe/fsfe-web/trunk/contribute/contribute.en.xhtml as source...
[INFO] XHTML file detected. Going to build into /home/max/subversion/fsfe/local-build/ ...
[INFO] Starting webserver

[SUCCESS] Finished. File can be viewed at http://localhost:5080/contribute/contribute.en.html

Now open the mentioned URL http://localhost:5080/contribute/contribute.en.html and take a look whether your changes had an effect.

Recommended workflows

In this section I’ll present a few of the cases you might face and how to solve them with the script. I presume you have your terminal opened in the trunk directory.

Preview a single file

To preview a single file before uploading it, just edit it locally. The file has to be located in the trunk directory, so I suggest using only one SVN trunk on your computer; it makes little sense to store your edited files in different folders. To preview the file, just give its path as the argument to fsfe-preview, as we did in the preceding step:

fsfe-preview activities/radiodirective/statement.en.xhtml

The script detects whether the file has to be built with the XSLT processor (.xhtml files), or if it just can be copied to the website without any modification (e.g. images).
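The dispatch can be imagined roughly like this (a simplified sketch of the idea, not the actual fsfe-preview code):

```shell
# Decide, by file extension, whether a file needs the XSLT build
# or can simply be copied to the output directory.
preview_action() {
  case "$1" in
    *.xhtml) echo "build" ;;  # run the XSLT processor
    *)       echo "copy"  ;;  # images, CSS etc. are copied verbatim
  esac
}

preview_action activities/radiodirective/statement.en.xhtml   # prints: build
preview_action news/2016/graphics/report1.png                 # prints: copy
```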

Copy many files at once

Beware that all files you added in your session have to be processed with the script. For example, if you create a report with many images included and want to preview it, you will have to copy all these images to the output directory as well, not only the XHTML file. For this, there is the --copy argument. It circumvents the whole XSLT build process and just plainly copies the given files (or folders). In this example, the workflow could look like the following: the first line copies some images, the second builds the corresponding XHTML file which makes use of these images:

fsfe-preview --copy news/2016/graphics/report1.png news/2016/graphics/report2.jpg
fsfe-preview news/2016/news-20161231-01.en.xhtml

Syntax check

In general, it’s good to check the XHTML syntax before editing and committing files to the SVN. The script fsfe-preview already performs these checks, but it’s useful to be able to run them standalone as well. If you haven’t already done so, install libxml2-utils on your computer. It contains xmllint, a syntax checker for XML files. You can use it like this:

xmllint --noout work.en.xhtml

If there’s no output (--noout), the file has correct syntax and you’re ready to continue. But you may also see something like

work.en.xhtml:55: parser error : Opening and ending tag mismatch: p line 41 and li

In this case, this means that the <p> tag starting in line 41 isn’t closed properly.
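If you want to check every modified XHTML file in one go before committing, the paths can be extracted from the output of svn status; the helper below is only a sketch that parses the status format (first column M = modified):

```shell
# Print the paths of modified .xhtml files from `svn status`-style input.
modified_xhtml() {
  awk '$1 == "M" && $2 ~ /\.xhtml$/ { print $2 }'
}

# Example with canned input; in practice:
#   svn status | modified_xhtml | xargs -r xmllint --noout
printf 'M       work.en.xhtml\n?       notes.txt\nM       logo.png\n' | modified_xhtml
```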


The presented process and script have a few drawbacks. For example, you aren’t able to preview certain very dynamic pages or parts of pages, or those depending on CGI scripts. In most cases you’ll never encounter these, but if you’re getting active with the FSFE’s webmaster team it may happen that you’ll have to fall back on the standard central build system.

Any other issues? Feel free to report them, as fixing them will help FSFE’s editors work more efficiently :-)


Update, 29 November 2016: Jonas has pointed out a few bugs and issues with a different GNU/Linux distribution. These should be resolved now.

Are all victims of French terrorism equal? - fsfe | 10:50, Sunday, 13 November 2016

Some personal observations about the terrorist atrocities around the world based on evidence from Wikipedia and other sources

The year 2015 saw a series of distressing terrorist attacks in France. 2015 was also the 30th anniversary of the French Government's bombing of a civilian ship at port in New Zealand, murdering a photographer who was on board at the time. This horrendous crime has been chronicled in various movies including The Rainbow Warrior Conspiracy (1989) and The Rainbow Warrior (1993).

The Paris attacks are a source of great anxiety for the people of France but they are also an attack on Europe and all civilized humanity as well. Rather than using them to channel more anger towards Muslims and Arabs with another extended (yet ineffective) state of emergency, isn't it about time that France moved on from the evils of its colonial past and "drains the swamp" where unrepentant villains are thriving in its security services?

François Hollande and Ségolène Royal. Royal's brother Gérard Royal allegedly planted the bomb in the terrorist mission to New Zealand. It is ironic that Royal is now Minister for Ecology while her brother sank the Greenpeace flagship. If François and Ségolène had married (they have four children together), would Gérard be the president's brother-in-law or terrorist-in-law?

The question has to be asked: if it looks like terrorism, if it smells like terrorism, if the victim of that French Government atrocity is as dead as the victims of Islamic militants littered across the floor of the Bataclan, shouldn't it also be considered an act of terrorism?

If it was not an act of terrorism, then what is it that makes it differ? Why do French officials refer to it as nothing more than "a serious error", the term used by Prime Minister Manuel Valls during a recent visit to New Zealand in 2016? Was it that the French officials felt it was necessary for Liberté, égalité, fraternité? Or is it just a limitation of the English language that we only have one word for terrorism, while French officials have a different word for such acts carried out by those who serve their flag?

If the French government are sincere in their apology, why have they avoided releasing key facts about the atrocity, like who thought up this plot and who gave the orders? Did the soldiers involved volunteer for a mission with the code name Opération Satanique, or did any other members of their unit quit rather than have such a horrendous crime on their conscience? What does that say about the people who carried out the orders?

If somebody apprehended one of these rogue employees of the French Government today, would they be rewarded with France's highest honour, like those tourists who recently apprehended an Islamic terrorist on a high-speed train?

If terrorism is such an absolute evil, why was it so easy for the officials involved to progress with their careers? Would an ex-member of an Islamic terrorist group be able to subsequently obtain US residence and employment as easily as the French terror squad's commander Louis-Pierre Dillais?

When you consider the comments made by Donald Trump recently, the threats of violence and physical aggression against just about anybody he doesn't agree with, is this the type of diplomacy that the US will practice under his rule commencing in 2017? Are people like this motivated by a genuine concern for peace and security, or are these simply criminal acts of vengeance backed by political leaders with the maturity of schoolyard bullies?

Wednesday, 09 November 2016

OpenRheinRuhr 2016 – A report of iron and freedom

English – Max's weblog | 21:55, Wednesday, 09 November 2016


Our Dutch iron fighters

Last weekend, I visited Oberhausen to participate in OpenRheinRuhr, a well-known Free Software event in north-western Germany. Over two days I was part of FSFE’s booth team, gave a talk, and enjoyed talking to tons of like-minded people about politics, technology and other stuff. In the next few minutes you will learn what coat hangers have to do with flat irons and which hotel you shouldn’t book if you plan to visit Oberhausen.

On Friday, Matthias, Erik, and I arrived at the event location, which normally is a museum collecting memories of heavy industry in the Ruhr area: old machines, the history and background of industrial workers, and pictures of people fighting for their rights. Because we arrived a bit too early, we helped the (fantastic!) orga team with some remaining work in the exhibition hall before setting up FSFE’s booth. While doing so, we already sold the first t-shirt and baby romper (is this a new record?) and had nice talks. Afterwards we enjoyed a free evening and prepared for the next busy day.

But Matthias and I faced a bad surprise: our hotel rooms were built for midgets and lacked a few basic features. For example, Matthias‘ room had no heating, and in my bathroom someone had stolen the shelf. At least I was given a bedside lamp – except for the little fact that the architect forgot to install a socket nearby. Another (semi-)funny bug were the emergency exits in front of our doors: when escaping from dangers inside the hotel, taking these exits won’t rescue you but instead increase the probability of dying from severe bone fractures. So if you ever need a hotel in Oberhausen, avoid the City Lounge Hotel by any means. Pictures at the end of this article.


The large catering and museum hall

On Saturday, André Ockers (NL translation coordinator), Maurice Verhessen (Coordinator Netherlands) and Niko Rikken from the Netherlands joined us to help at the booth and connect with people. Amusingly, we again learnt that communication can be very confusing the shorter it is. While Matthias thought that he had asked Maurice to bring an iron clothes hanger, Maurice thought he should bring a flat iron. Because he only had one (surprisingly), he asked Niko to bring his as well. While we wondered why Maurice only has one clothes hanger, our Dutch friends wondered why we would need two flat irons ;-)

Over the day, Matthias, Erik, and I gave our talks: Matthias spoke about common misconceptions about Free Software and how to clear them up, Erik explained how people can synchronise their computers and mobile phones with Free Software applications, and I motivated people to become politically active by presenting some lessons learned from my experiences with the Compulsory Routers and Radio Lockdown cases. There were many other talks by FSFE people, for example by Wolf-Dieter and Wolfgang. In the evening we enjoyed the social event with barbecue, free beer, and loooooong waiting queues.

Sunday was far more relaxed than the day before. We had time to talk to more people interested in Free Software and exchanged ideas and thoughts with friends from other initiatives. Among many others, I spoke with people from Freifunk, a Pirate Party politician, a tax consultant with digital ambitions, two system administrators, and a trade unionist. But even the nicest day has to end, and after we packed up the whole booth, merchandise and promotion material again, André took the remainders to the Netherlands where they will be presented to the public at FSFE’s T-DOSE booth.

Photo captions: Look from my hotel room door · Yes, that's all · Disabled bedside lamp · There has been something... · This emergency exit... ...doesn't lead to safety

Understanding what lies behind Trump and Brexit - fsfe | 08:23, Wednesday, 09 November 2016

As the US elections finish, many people are scratching their heads wondering what it all means. For example, is Trump serious about the things he has been saying, or is he simply saying whatever was most likely to make a whole bunch of really stupid people crawl out from under their rocks to vote for him? Was he serious about winning at all, or was it just the ultimate reality TV experiment? Will he show up for work in 2017, or like Australia's billionaire Clive Palmer, will he set a new absence record for an elected official? Ironically, Palmer and Trump have both been dogged by questions over their business dealings; will Palmer's descent towards bankruptcy be replicated in the ongoing fraud trial against Trump University and similar scandals?

While the answer to those questions may not be clear for some time, some interesting observations can be made at this point.

The world has been going racist. In the UK, for example, authorities have started putting up anti-Muslim posters with an eerie resemblance to Hitler's anti-Jew propaganda. It makes you wonder if the Brexit result was really the "will of the people", or were the people deliberately whipped up into a state of irrational fear by a bunch of thugs seeking political power?

Who thought The Man in the High Castle was fiction?

In January 2015, a pilot of The Man in the High Castle, telling the story of a dystopian alternative history where Hitler has conquered America, was the most-watched original series on Amazon Prime.

It appears Trump supporters have already been operating US checkpoints abroad for some time, achieving widespread notoriety when they blocked a family of British Muslims from visiting Disneyland in 2015. The family was ambushed at the last moment, as they were about to board their flight; it is unthinkable how anybody could be so cruel. When you reflect on statements made by Trump and the so-called "security" practices around the world, this would appear to be only a taste of things to come though.

Is it a coincidence that Brexit and Trump both happened in the same year that the copyright on Mein Kampf expired? Ironically, in the chapter on immigration Hitler specifically singles out the U.S.A. for his praise, is that the sort of rave review that Trump aspires to when he talks about making America great again?

US voters have traditionally held concerns about the power of the establishment. The US Federal Reserve has been in the news almost every week since the financial crisis, but did you know that the very concept of central banking was thrown out the window four times in America's history? Is Trump the type of hardliner who will go down this path again, or will it be business as usual? In his book Rich Dad's Guide to Investing in Gold & Silver, Robert Kiyosaki and Michael Maloney encourage people to consider putting most of their wealth into gold and silver bullion. Whether you like the politics of Trump and Brexit or not, are we entering an era where it will be prudent for people to keep at least ten percent of net wealth in this asset class again? Online dealers like BullionVault in Europe already appear to be struggling under the pressure as people rush to claim the free grams of bullion credited to newly opened accounts.

The Facebook effect

In recent times, there has been significant attention on the question of how Facebook and Google can influence elections, some European authorities have even issued alerts comparing this threat to terrorism. Yet in the US election, it was simple email that stole the limelight (or conveniently diverted attention from other threats), first with Clinton's private email server and later with Wikileaks exposing the entire email history of Clinton's chief of staff. The Podesta emails, while being boring for outsiders, are potentially far more damaging as they undermine the morale of Clinton's grass roots supporters. These people are essential for knocking on doors and distributing leaflets in the final phase of an election campaign, but after reading about Clinton's close relationship with big business, many of them may well have chosen to stay home. Will future political candidates seek to improve their technical competence, or will they simply be replaced by candidates who are born hackers and fluent in the language of a digital world?

Monday, 07 November 2016

Quickstart SDR with gqrx, GNU Radio and the RTL-SDR dongle - fsfe | 19:56, Monday, 07 November 2016

Software Defined Radio (SDR) provides many opportunities for both experimentation and solving real-world problems. It is not exactly a new technology but it has become significantly more accessible due to the increases in desktop computing power (for performing the DSP functions) and simultaneous reduction in the cost of SDR hardware.

Thanks to the availability of a completely packaged gqrx and GNU Radio solution, you can now get up and running in less than half an hour, while spending less than fifty dollars/pounds/euros.

We provided a full demo of the Debian Hams gqrx solution at Mini DebConf Vienna (video) and hope to provide a similar demo at MiniDebConf Cambridge on the coming weekend of 12-13 November.

gqrx is also available for Fedora users.

Choosing hardware

There are many different types of hardware, ranging from the low-cost RTL-SDR USB dongles to full duplex multi-transceiver systems.

My recommendation is to start with an RTL-SDR dongle due to its extremely low cost; this will give you an opportunity to reflect on the possibilities of this technology before putting money into one of the transceivers and their accessories. The RTL-SDR dongle also benefits from being a small self-contained solution that you can carry around and experiment with or demo just about anywhere.

Important: Don't buy the cheapest generic RTL TV/radio receivers. It is absolutely essential to buy one of the units that has been explicitly promoted for SDR. These typically have a temperature compensated crystal oscillator (TCXO), which is essential for the reception of narrowband voice and digital signals. Without it, it is only possible to receive wideband broadcast FM radio and TV channels.

For those who want to try it out with us at MiniDebConf Cambridge, Technofix has UK stock (online ordering), they are about £26.

Getting gqrx up and running fast

Note: to avoid the wrong kernel module being loaded automatically, it is recommended that you don't connect the RTL-SDR dongle before you install the packages. If you did already connect it, you may need to reboot or run rmmod dvb_usb_rtl28xxu.
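If the DVB driver keeps claiming the dongle every time it is plugged in, a common approach is to blacklist the module via modprobe configuration (the file name below is arbitrary; both commands need root):

```
# Prevent the DVB-T TV driver from grabbing the RTL-SDR dongle:
echo "blacklist dvb_usb_rtl28xxu" | sudo tee /etc/modprobe.d/blacklist-rtlsdr.conf
# Unload it if it is already loaded:
sudo rmmod dvb_usb_rtl28xxu
```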

If you are using a Debian jessie system, you can get all the necessary packages from jessie-backports.

If you haven't already enabled backports, you can do so with a command like this:

$ echo "deb jessie-backports main" | sudo tee -a /etc/apt/sources.list

Make sure your local index is updated and then install the necessary packages:

$ sudo apt-get update
$ sudo apt-get install -t jessie-backports gqrx-sdr rtl-sdr

Running it for the first time

Once the packages are installed, connect the RTL-SDR dongle to the computer and then start the gqrx GUI from a terminal:

$ gqrx

If the GUI fails to appear, look carefully at the error messages. It may be that the wrong kernel module has been loaded.

The properties window appears, select the RTL-SDR dongle:

Now the main screen will appear. Choose the wideband FM mode "WFM (mono)" and change the frequency to a value in the FM broadcast band such as 100MHz. Click the "Power on" button in the top left corner, just under the "File" menu, to start reception. Click in the middle of a strong signal to tune to that station. If you don't hear anything, check the squelch setting (it should be more negative than the signal strength value) and increase the Gain control at the bottom right hand side of the window.

Looking for ham / amateur radio signals

A popular band for hams is between 144 and 148 MHz (in some countries only a subset of this range is used). It is referred to as the two-meter band, as that is the approximate wavelength at these frequencies.
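You can verify the wavelength with a quick calculation, λ = c / f, for the middle of the band:

```shell
# Wavelength in metres = speed of light / frequency (146 MHz = mid-band).
awk 'BEGIN { c = 299792458; f = 146e6; printf "%.2f m\n", c / f }'
# prints: 2.05 m
```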

Hams often use the narrowband FM mode in this band, especially with repeater stations. Change the "Mode" setting from "WFM" to "Narrow FM" and change the frequency to a value in the middle of the band. Look for signals in the radio spectrum and click on them to hear them.

If you are not sure which part of the band to look in, search for the two-meter band plan for your country/region and look for the repeater output frequencies in the band plan.

Sunday, 06 November 2016

KVM virtualization with Allwinner A20 on Debian: libre, low-power, low-cost

Daniel's FSFE blog | 14:33, Sunday, 06 November 2016


Various cheap ARM boards based on the Allwinner A20 SoC have been available for a few years already. The first EOMA68 computer [1] will also be based on this chipset. Not many users know that the Allwinner A20 supports hardware-assisted virtualization as well: its Cortex-A7 cores allow running hardware-accelerated ARM virtual machines (guests) using KVM or Xen.

While Allwinner has been accused of violating the GPL for years [2], their A20 SoC is imho one of the best choices today when it comes to building a small and libre server for SOHO use (thanks to the hard work of the Allwinner-independent Linux-Sunxi community). While many SoCs found on popular boards like those from the Raspberry Pi family require proprietary blobs, the A20 works with a free bootloader and requires no proprietary drivers or firmware for basic operation.

The virtualization on A20 hosts works out of the box on Debian Jessie with the stock kernel and official packages in main — without cross-compiling, patching or other tinkering (this was not the case in the past, see [3]). This also means that updating your host and guests later will be easy and painless. Creating and managing guests can be done with virt-manager [4] – a secure and comfortable graphical user interface licensed under GPLv3.

After first discussing some A20 hardware options, this guide takes the example of the Olimex “A20-OLinuXino-LIME2” board [5] and shows how to turn it into a virtualization host. It then shows how to create and manage guest VMs on that host. The guide assumes that you are running a GNU/Linux-based desktop system from which you want to manage the A20 device.


All data and information provided in this article is for informational purposes only. The author makes no representations as to accuracy, completeness, currentness, suitability, or validity of any information on this article and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its display or use. All information is provided on an as-is basis.

In no event will the author be liable for any loss or damage including, without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising from loss of data or profits arising out of, or in connection with, the use of this article.

Hardware choices

There are plenty of boards with the Allwinner A20. However, only a few are known to work out of the box on Debian Jessie. The relevant page on the Debian Wiki [6] mentions the following boards in particular:

  • Cubietech Cubieboard2
  • Cubietech Cubieboard3 (“Cubietruck”)
  • LeMaker Banana Pro
  • Olimex A20-OLinuXino-LIME
  • Olimex A20-OLinuXino-LIME2 (only the regular one, not the eMMC variant!)
  • Olimex A20-Olinuxino Micro

While some of these boards feature Gigabit Ethernet and SATA, only the Cubieboard 3 has 2 GB of RAM. To me, this seems to be the best choice for an A20-based KVM virtualization host. Since I only had a spare Olimex A20-OLinuXino-LIME2 board at hand, this guide uses that board as the example.

Beware: The “A20-OLinuXino-LIME2” and the “A20-OLinuXino-LIME2-eMMC” are not the same! Debian provides no firmware for the “A20-OLinuXino-LIME2-eMMC” and I could not get it to work on Debian at all. Although I had assumed the two would be identical except for the eMMC flash, the firmware for the regular “A20-OLinuXino-LIME2” did NOT work on the eMMC variant for me at all!

Base installation

The article in the Debian wiki provides the necessary information on installing Debian Jessie using the text-based Debian-Installer. Make sure you have a microSD card with a good 4K random I/O performance or the installation will take forever and your A20 system will run terribly slow afterwards (see my article comparing performance of various microSD cards).

If you don’t have a serial cable and want to install using the HDMI output, you need to use the installer images from unstable. The easiest way to do so is to fetch the firmware file from unstable and the partition image from Jessie. Then write them to your microSD card (replace /dev/sdX with your particular device):

$ zcat firmware.A20-OLinuXino-Lime2.img.gz partition.img.gz > /dev/sdX
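This works because zcat decompresses each argument and concatenates the results in order, so firmware and partition image land back-to-back on the card. A harmless illustration with dummy files:

```shell
# zcat concatenates the decompressed contents of its arguments in order.
tmp=$(mktemp -d)
printf 'firmware-' | gzip > "$tmp/a.gz"
printf 'partition' | gzip > "$tmp/b.gz"
zcat "$tmp/a.gz" "$tmp/b.gz"   # prints: firmware-partition
rm -r "$tmp"
```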

Next, insert the microSD card into your device, connect your device to your LAN and power it up. Then install Debian as usual using the text-based installer. During the installation, make sure to create a root account (needed for KVM) and an ext2 boot partition (the safest method here is to use the guided installer). When tasksel gets called, make sure to install the tasks/packages “SSH Server” and “Standard system utilities”.

Note for users of the German mirrors: Using the mirror “” will break your installation as something seems to be missing there as of 2016-11-05. Using “” works fine.

Installing the KVM virtualization

By default, interactive root logins are not allowed on Debian. Therefore, make sure you copy over your SSH public key to your a20-box or simply enable interactive root logins over SSH by changing the following option in /etc/ssh/sshd_config:

#PermitRootLogin without-password
PermitRootLogin yes

Then restart the SSH server:

# service ssh restart

Now you should be able to log in directly as root. Next, install the virtualization packages:

# apt install libvirt-daemon-system
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:


0 upgraded, 105 newly installed, 0 to remove and 0 not upgraded.
Need to get 44.4 MB of archives.
After this operation, 182 MB of additional disk space will be used.
Do you want to continue? [Y/n]

Now fire up virt-manager on your desktop and make sure you can connect to your a20-box:

Creating and installing a guest

For running ARM virtual machines you need a kernel and DTBs which support the VExpress-A15 chipset (the ARM reference board usually emulated on ARM). This is already provided in stock Debian, so there is no need to compile anything yourself.

Regarding the guest, you can choose any Linux you want. In the following example, we will install a Debian Jessie guest using the Debian installer. Therefore, we need to download the installer initrd to the virtualization host. This time, we don’t need a partition image but can use the usual initrd installer image from the Debian server. SSH into the virtualization host and download it:

wget -O initrd-installer-jessie.gz

For the installation, you will also need a different kernel: in the kernel installed on the host, the network drivers live in the initrd, but the installer’s initrd assumes they are built into the kernel. Therefore, fetch a kernel for the installer:

wget -O vmlinuz-installer-jessie

Now, fire up virt-manager on your desktop and connect to the virtualization host. Then, start the wizard for creating guests using “create new virtual machine”. On the first screen, change the machine type to “vexpress-a15”:

On the next screen, specify a storage volume (just create one using the dialog behind “Browse”), and also use “Browse” to point the kernel and initrd fields at the images we just downloaded. For the DTB, we’ll use the one that is part of Debian’s stock kernel and resides under /usr/lib/linux-image-3.16.0-4-armmp-lpae/vexpress-v2p-ca15-tc1.dtb (make sure it corresponds to the kernel version on your A20 host! TODO: Is there any symlink which points to the current version?). The kernel args are also very important, or you will not get any output. For this line, specify the following:

root=/dev/vda1 console=ttyAMA0,115200 rootwait
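Since the versioned /usr/lib directory changes with every kernel update, it may be easier to ask dpkg where the DTB of the currently running kernel lives. A sketch (run on the A20 host; assumes the Debian stock armmp-lpae kernel is installed):

```shell
# List the VExpress-A15 DTBs shipped with the running kernel package.
dpkg -L "linux-image-$(uname -r)" | grep 'vexpress-v2p-ca15'
```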

Finally, select the OS type and version appropriately. Your dialog should look like this:

Then, specify RAM (e.g. 256MB) and the number of CPUs (e.g. 1) you want to give the guest and jump to the last screen. Here, give your guest a nice name and make sure you check the “Customize configuration before install” checkbox before you click “Finish”:

Otherwise, you would end up with an error message like this:

Unable to complete install: 'internal error: early end of file from monitor: possible problem:
kvm_init_vcpu failed: Invalid argument

In the configuration of the VM, under “Processor”, change the configuration from “Hypervisor Default” to “Application Default”:

To get better performance, also change the BUS of your virtual disk to “VIRTIO” (by default, it would emulate an SD card):

And do the same for the network adapter:

Finally, fire up the guest using “Begin installation”. If everything goes fine, you should see the kernel boot and be presented with the welcome screen of the installer. For jessie, it should look like this:

If you selected the kernel and initrd from stretch/sid, you should get a nicer color screen (make sure you set the baud rate of the console to 115200 or you will get distorted output!):

When partitioning the guest, just create a single root partition spanning the whole (virtual) device. The guest will always boot using externally specified kernels, dtbs and initrds, therefore there is no use in creating a /boot partition as the “guided install” would do.

Near the end of the installation, you will be notified that no bootloader could be installed. You can safely ignore this message:

After finishing the installation, the system will boot into the installer again because the installer initrd is still configured. To change this, power off the guest (“Force Off”) and specify in the boot options that the kernel and initrd image of your A20 host should be used instead (whenever they are updated on the host, the guests will also pick up the update on their next boot):

Now your guest should finally boot up successfully:

And you can check that it indeed uses the current A20 host kernel and virtualizes the VExpress-A15 SoC:


Finally, I want to provide some benchmarks so you can get a feeling for the impact of the virtualization. The benchmarks were done using a guest with 2 CPUs and 512 MB of memory assigned.


For a first I/O benchmark, I used hdparm.

On the host:

$ hdparm -tT /dev/mmcblk0
 Timing cached reads:   814 MB in  2.00 seconds = 406.33 MB/sec
 Timing buffered disk reads:  66 MB in  3.01 seconds =  21.93 MB/sec

On the guest:

$ hdparm -tT /dev/vda
 Timing cached reads:   694 MB in  2.00 seconds = 346.49 MB/sec
 Timing buffered disk reads:  30 MB in  3.15 seconds =   9.52 MB/sec

CPU processing

For benchmarking processing, I used the openssl suite to do a few simple AES benchmarks:

$ openssl speed aes

On the host:

The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128 cbc      20267.83k    22390.70k    23325.10k    23575.89k    23642.11k
aes-192 cbc      17594.13k    19464.20k    19956.57k    20102.83k    20146.86k
aes-256 cbc      15727.25k    17158.89k    17592.58k    17706.67k    17738.41k

On the guest:

The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128 cbc      19784.01k    22100.48k    22697.56k    23272.20k    23288.29k
aes-192 cbc      17363.72k    19097.02k    19643.68k    19786.41k    19800.53k
aes-256 cbc      15455.28k    16939.28k    17374.44k    17415.85k    17504.58k


With one of the Allwinner A20 boards supported by Debian, you can easily build a tiny virtualization host that can handle a few simple VMs and draws only 2-3 W of power. While this process was pretty cumbersome in the past (you had to cross-compile kernels etc.), thanks to the efforts of the Debian project and the linux-sunxi community it is now pretty straightforward, with only a few caveats involved. This might also be an interesting option if you want to run a low-power virtualization cluster on fully libre software, down to the firmware level.



Saturday, 05 November 2016

Benchmarking microSD cards

Daniel's FSFE blog | 18:56, Saturday, 05 November 2016


If you ever tried using a microSD card for your root or home filesystem on a small computing device or smartphone, you probably noticed that microSD cards are in most cases a lot slower than integrated eMMC flash. Since most filesystems use 4K blocks, the random read/write performance with 4K blocks is what matters most in such scenarios. And while microSD cards don’t come close to internal flash in these disciplines, there are significant differences between the models.

Jeff Geerling [1,2] has already benchmarked the performance of various microSD cards on different models of the “blobby” Raspberry Pi. I had a number of different microSD cards at hand and I tried to replicate his results on my sample.


All data and information provided in this article is for informational purposes only. The author makes no representations as to accuracy, completeness, currentness, suitability, or validity of any information on this article and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its display or use. All information is provided on an as-is basis.

In no event will the author be liable for any loss or damage including, without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising from loss of data or profits arising out of, or in connection with, the use of this article.

Environment and tools

Just like Jeff, I used the source-available (but non-free) tool “iozone” [3] in the current stable version available on Debian Jessie (3.429). Instead of using a Raspberry Pi, I used a cheap microSD/SD-to-USB-2.0 adapter made by Logilink [4] connected to a desktop PC.

I disabled caches and benchmarked on raw devices to avoid measuring filesystem overhead. I therefore used the following call to iozone to run the benchmarks (/dev/sde is my SD card):

$ iozone -e -I -a -s 100M -r 4k -r 512k -r 16M -i 0 -i 1 -i 2 -f /dev/sde

Benchmarked microSD cards and results

The following table provides the results I obtained (rounded to 10 kB/s):

Manufacturer Make / model Rating Capacity 16M seq. read (MB/s) 16M seq. write (MB/s) 4K rand. read (MB/s) 4K rand. write (MB/s)
Adata ? C4 32 GB 9.58 2.97 2.62 0.64
Samsung Evo UHS-1 32 GB 17.80 10.13 4.45 1.32
Samsung Evo+ UHS-1 32 GB 18.10 14.27 5.28 2.86
Sandisk Extreme UHS-3 64 GB 18.44 11.22 5.20 2.36
Sandisk Ultra C10 64 GB 18.16 8.69 3.73 0.80
Toshiba Exceria UHS-3 64 GB 16.10 6.60 3.82 0.09
Kingston ? C4 8 GB 14.69 3.71 3.97 0.18

Discussion, conclusion and future work

I can confirm Jeff’s results about microSD cards and would also recommend the Evo+, which has the best 4K random write performance of the sample. On the other hand, I am very disappointed by the Toshiba Exceria card. Actually, running a device on this card with very sluggish performance was what prompted this benchmark initiative in the first place. And indeed, after switching to the Evo+, the device feels much snappier.

I think it would be interesting to add more cards to this benchmark (not only microSD but also regular SD cards and maybe CF cards). Using fio instead of the non-free iozone could also be worthwhile, as could running the benchmarks directly on the device or through a faster USB 3.0 card reader.
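As a sketch of what such an fio run could look like for the 4K random-write case (not something I benchmarked here; /dev/sde is again this article's example card-reader device, and writing to it destroys the card's contents):

```shell
# 4K random writes against the raw device, bypassing the page cache.
# WARNING: this overwrites data on /dev/sde!
fio --name=4k-randwrite --filename=/dev/sde \
    --rw=randwrite --bs=4k --size=100M \
    --direct=1 --numjobs=1 --group_reporting
```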



Thursday, 03 November 2016

PATHspider Plugins

Iain R. Learmonth | 23:46, Thursday, 03 November 2016

This post is cross-posted on the MAMI Project blog here.

In today’s Internet we see an increasing deployment of middleboxes. While middleboxes provide in-network functionality that is necessary to keep networks manageable and economically viable, any packet mangling — whether essential for the needed functionality or accidental as an unwanted side effect — makes it more and more difficult to deploy new protocols or extensions of existing protocols.

For the evolution of the protocol stack, it is important to know which network impairments exist and potentially need to be worked around. While classical network measurement tools are often focused on absolute performance values, PATHspider performs A/B testing between two different protocols or different protocol extensions to perform controlled experiments of protocol-dependent connectivity problems as well as differential treatment.

PATHspider 1.0.1 has been released today and is now available from GitHub, PyPI and Debian unstable. This is a small stable update containing a documentation fix for the example plugin.

PATHspider now contains 3 built-in plugins for measuring path transparency to explicit congestion notification, DiffServ code points and TCP Fast Open. It’s easy to write your own plugins, and if they’re good enough, they may be included in the PATHspider distribution at the next feature release.

We have a GitHub repository you can fork that has a premade directory structure for new plugins. You’ll need to implement logic for performing the two connections, for the A and the B tests. Once you’ve verified your connection logic is working with Wireshark, you can move on to writing Observer functions to analyse the connections made in real time as PATHspider runs. The final step is to merge the results of the connection logic (e.g. did the operating system report a timeout?) with the results of your observer functions (e.g. was ECN successfully negotiated?) and write out the final result.

We have dedicated a section of the manual to the development of plugins and we really see plugins as first-class citizens in the PATHspider ecosystem. While future releases of PATHspider may contain new plugins, we’re also making it easier to write plugins by providing reusable library functions such as the tcp_connect() function of the SynchronisedSpider that allows for easy A/B testing of TCP connections with any globally configured state set. We also provide reusable observer functions for simple tasks such as determining if a 3-way handshake completed or if there was an ICMP unreachable message received.

If you’d like to check out PATHspider, you can find the website at

Current development of PATHspider is supported by the European Union’s Horizon 2020 project MAMI. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 688421. The opinions expressed and arguments employed reflect only the authors’ view. The European Commission is not responsible for any use that may be made of that information.

Wednesday, 02 November 2016

Backing up and restoring data on Android devices directly via USB (Howto)

Daniel's FSFE blog | 21:28, Wednesday, 02 November 2016


I was looking for a simple way to backup data on Android devices directly to a device running GNU/Linux connected over a USB cable (in my case, a desktop computer).

Is this really so unique that it’s worth writing a new article about it? Well, in my case, I did not want to buffer the data on any “intermediate” devices such as storage cards connected via microSD or USB-OTG. Also, I did not want to use any proprietary tools or formats. Instead, I wanted to store my backups in “oldschool” formats such as dd-images or tar archives. I did not find a comprehensive howto for that, so I decided to write this article.


All data and information provided in this article is for informational purposes only. The author makes no representations as to accuracy, completeness, currentness, suitability, or validity of any information on this article and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its display or use. All information is provided on an as-is basis.

In no event will the author be liable for any loss or damage including, without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising from loss of data or profits arising out of, or in connection with, the use of this article.


This article describes two different approaches. Both have their pros and cons:

  • Block-level: Doing 1:1 block-level backups (above the file system) is an imaging approach that corresponds to doing dd-style backups.
  • Filesystem-level: Doing filesystem-level backups (on the file system) corresponds to tar-style backups.

An important factor when doing backups is also performance. Filesystem-level backups are usually faster on block devices that are only filled to a small degree. However, due to the filesystem overhead they have a lower “raw” throughput rate, especially when backing up data on flash media such as microSD cards. Here, typical filesystems such as ext4 or f2fs, which operate with a 4K block size, are a major bottleneck, as these media often have horrible 4K read/write performance.
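The block-size effect is easy to see with plain dd (a sketch; /dev/sdX stands for the card's block device, and iflag=direct bypasses the page cache so the card's own 4K performance shows through):

```shell
# Read 100 MB in 4K chunks vs. 1M chunks; on most microSD cards the
# 4K run is dramatically slower even though the same data is read.
dd if=/dev/sdX of=/dev/null bs=4k count=25600 iflag=direct
dd if=/dev/sdX of=/dev/null bs=1M count=100 iflag=direct
```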

The following instructions for applying these approaches assume that you already have a “liberated” Android device which can boot into TWRP (a free Android recovery) or CWM. I am using the example of a Nexus-S running Replicant 4.2.0004 and TWRP, but the approaches also work with most other Android distributions and recoveries.

Getting familiar with the block devices on your Android device

First of all, you should know which block device you actually want to back up. The internal flash on Android devices is usually partitioned into about 15-25 partitions, depending on the device. To get a first overview, you can try the following (I am using adb shell on the desktop):

$ adb shell cat /proc/partitions

major minor #blocks name

31 0 2048 mtdblock0
31 1 1280 mtdblock1
31 2 8192 mtdblock2
31 3 8192 mtdblock3
31 4 480768 mtdblock4
31 5 13824 mtdblock5
31 6 6912 mtdblock6
179 0 15552512 mmcblk0
179 1 524288 mmcblk0p1
179 2 1048576 mmcblk0p2
179 3 13978607 mmcblk0p3
179 16 1024 mmcblk0boot1
179 8 1024 mmcblk0boot0

To find out what the partitions are about, you can inspect the directory /dev/block/platform//by-name/ which contains symlinks to the actual partitions. In my case, the Nexus-S has two flash chips, and I am listing the partitions of the one on which the userdata partition resides:

$ adb shell ls -l /dev/block/platform/s3c-sdhci.0/by-name

lrwxrwxrwx root root 2016-11-02 19:51 media -> /dev/block/mmcblk0p3
lrwxrwxrwx root root 2016-11-02 19:51 system -> /dev/block/mmcblk0p1
lrwxrwxrwx root root 2016-11-02 19:51 userdata -> /dev/block/mmcblk0p2

Please note that unlike the Nexus-S, most newer Android devices only have a single eMMC flash chip and don't use MTD devices anymore.

Block-level approach

Block-level backups take up a lot of space (without compression) and extracting single files is cumbersome (especially when talking about encrypted data partitions or backups of the whole flash). On the other hand, "just" restoring a full backup is easy.
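If your adb is recent enough to have exec-out (a binary-safe variant of adb shell), you can address the space problem by compressing the image on the fly on the workstation instead of pulling the raw partition. A sketch:

```shell
# Stream the raw partition to the workstation and gzip it there.
adb exec-out "dd if=/dev/block/platform/s3c-sdhci.0/by-name/userdata 2>/dev/null" \
    | gzip > userdata.img.gz

# Unpack again later when you need to mount or restore it:
zcat userdata.img.gz > userdata.img
```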

Backing up a single partition

Now that you know which block devices you want to back up, you can directly create a 1:1 image via adb pull, just as you would normally do with dd. In our case:

$ adb pull /dev/block/platform/s3c-sdhci.0/by-name/userdata

7942 KB/s (1073741824 bytes in 132.027s)

On your workstation, you will obtain a file named userdata which contains the whole partition/filesystem as an image. If you didn’t enable encryption on your Android device, you can directly mount this file as a loopback device and access its contents:

$ mount -o loop userdata /mnt

Restoring a single partition


To restore your backup, you can simply use adb push. In my case:

$ adb push userdata /dev/block/platform/s3c-sdhci.0/by-name/userdata

failed to copy 'userdata' to '/dev/block/platform/s3c-sdhci.0/by-name/userdata': No space left on device

Alternative: Operating on the whole block device

Instead of backing up just a single partition, it is also possible to backup the whole flash device including all partitions. Example:

$ adb pull /dev/block/mmcblk0


  • On some devices, not all partitions are readable and, thus, cannot be backed up.
  • Please be careful with restores!
  • Accessing files inside this image is not straightforward (but doable).
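One way to get at a single partition inside such a whole-device image is util-linux's losetup with partition scanning (-P). A sketch, assuming an image called mmcblk0.img pulled as above:

```shell
# Attach the image with partition scanning; losetup prints the loop
# device it allocated (e.g. /dev/loop0), and the partitions then show
# up as /dev/loop0p1, /dev/loop0p2, ...
loopdev=$(losetup -fP --show mmcblk0.img)
mount "${loopdev}p2" /mnt    # e.g. the userdata partition
# ... access the files under /mnt ...
umount /mnt
losetup -d "$loopdev"
```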

Filesystem-level approach

Filesystem-level backups only work for single partitions, take up only as much space as the files on the particular filesystem you back up, and make it easy to access individual files. I am using a combination of adb, netcat and tar to create and restore these backups.

Backing up your data

First, connect to your device via an adb shell:

$ adb shell

Then, change to the directory from which you want to create your backup. If your data partition was not automatically mounted, you have to mount it first:

# mount /dev/block/platform/s3c-sdhci.0/by-name/userdata /data

Now change to this directory:

# cd /data

Now, start the netcat process:

# tar -cvpf - . | busybox nc -lp 5555

On the receiver side (desktop), set up adb port forwarding:

$ adb forward tcp:4444 tcp:5555

Then, start the process to receive the tar file:

$ nc -w 10 localhost 4444 > userdata.tar

You should see your files being packed up on the Android side:


Now wait for the process to exit.

Restoring your data

Again, on your receiver side (Android device), mount /data if it was not mounted yet and change in there:

# mount /dev/block/platform/s3c-sdhci.0/by-name/userdata /data
# cd /data

Now, start the tar extraction process:

# busybox nc -lp 5555 | tar -xpvf -

On the sender side (desktop), again, set up adb port forwarding:

$ adb forward tcp:4444 tcp:5555

And send the tar file:

$ cat userdata.tar | nc -q 2 localhost 4444

Now you should be able to see your previously backed up files getting restored...
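With a recent adb that supports exec-out and stdin forwarding, the same tar streams can alternatively be set up without netcat and port forwarding at all (a sketch; assumes tar is available in the recovery and /data is mounted as above):

```shell
# Backup: stream a tar of /data straight to the workstation.
adb exec-out "tar -cp -C /data ." > userdata.tar

# Restore: feed the archive back through the shell's stdin.
adb shell "tar -xp -C /data" < userdata.tar
```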


For the filesystem-level part of the article I used and adapted the following sources:


Tuesday, 01 November 2016

Adoption of Free Software PDF readers in Italian Regional Public Administrations (third monitoring)

Tarin Gamberini | 09:30, Tuesday, 01 November 2016

The following monitoring shows that, in the last semester, ten Italian Regions have reduced advertising of proprietary PDF readers on their websites, and that one Region has increased its support for Free Software PDF readers.

Continue reading →

Monday, 31 October 2016

EIF v.3 – citizens demand more Free Software, while businesses seek to promote true Open Standards

polina's blog | 16:35, Monday, 31 October 2016

As reported earlier, the European Commission (EC) is currently revising the European Interoperability Framework, a set of guidelines, recommendations and standards for the EU e-governmental services. In the end of June 2016, the EC closed its 12 weeks long open public consultation. The FSFE provided its answers to the EC where we highlighted the need for promotion of Open Standards and Free Software – key enablers of interoperability.

According to the recently published Factual Summary of the contributions received by the EC, we were not the only ones to see the plausible effect of Free Software and Open Standards to the interoperability in the EU public sector. The majority of the respondents identified “the use of proprietary IT solutions by public administrations, often creating a situation of vendor lock-in” to be a problem for interoperability in the EU.

According to the analysis, the majority of the comments raised by citizens on the draft EIF were related to:

“the need for openness (i.e. open data, open standards, open file formats, open source projects) and transparency.”

The additional action that was suggested to be included in the revised strategy by business/private organisations was to:

“promote the use of (true) open standards and support of standards in new technologies”.

We hope the European Commission will include the wishes of EU citizens and businesses and will follow them when revising the EIF.

Image Open by, CC BY-SA 2.0.

Sunday, 30 October 2016

Powers to Investigate

Iain R. Learmonth | 00:17, Sunday, 30 October 2016

The Communication Data Bill was draft legislation introduced first in May 2012. It sought to compel ISPs to store details of communications usage so that it can later be used for law enforcement purposes. In 2013 the passage of this bill into law had been blocked and the bill was dead.

In 2014 we saw the Data Retention and Investigatory Powers Act 2014 appear. This seemed to be in response to the Data Retention Directive being successfully challenged at the European Court of Justice by Digital Rights Ireland on human rights grounds, with a judgment given in 2014. It essentially reimplemented the Data Retention Directive along with a whole load of other nasty things.

The Data Retention and Investigatory Powers Act contained a sunset clause with a date set for 2016. This brings us to the Investigatory Powers Bill, which, it seems, will shortly pass into law.

Among a range of nasty powers, this legislation will make it possible to force ISPs to record metadata about every website you visit and every connection you make to a server on the Internet. This is sub-optimal for the privacy-minded, with my primary concern being that this is a treasure trove of data and it’s going to be abused by someone. It’s going to be too much for someone to resist.

The existence of this power in the bill seemed to confuse the House of Lords:

It is not for me to explain why the Government want in the Bill a power that currently does not exist, because internet connection records do not exist, and which the security services say they do not want but which the noble and learned Lord says might be needed in the future. It is not for me to justify this power; I am saying to the House why I do not believe it is justified. The noble and learned Lord and the noble Lord, Lord Rosser, made the point that this is an existing power, but how can you have an existing power to acquire something that will not exist until the Bill is enacted?

– Lord Paddick (link)

Of course, internet connection records are meaningless when your traffic is routed via a proxy or VPN, and there is a Kickstarter in progress that I would love to see succeed: OnionDSL.

The premise of OnionDSL is that instead of having an IPv4/IPv6 connection to the Internet, you join a private network that does not provide any routing to the global Internet and instead provides only a Tor bridge. I cannot think of anything that I do from home that I cannot do via Tor and have been considering switching to Qubes OS as the operating system on my day-to-day laptop to allow me to direct basically everything through Tor.

The idea of provisioning a non-IP service via DSL is not new to me, I’ve come across it before with cjdns which provides an encrypted IPv6 network using public key cryptography for network address allocation and a distributed hash table for routing. Peering between cjdns nodes can be performed over Ethernet and cjdns over Ethernet could be provisioned in place of the traditional PPP over Ethernet (PPPoE) to provide access directly to cjdns without providing any routing to the global Internet.

If OnionDSL is funded, I think it’s very likely I would be considering becoming a customer. (Assuming the government doesn’t attempt to also outlaw Tor).

Saturday, 29 October 2016

live-wrapper 0.4 released!

Iain R. Learmonth | 03:21, Saturday, 29 October 2016

Last week saw the quiet upload of live-wrapper 0.4 to unstable. I would have blogged at the time, but there is another announcement coming later in this blog post that I wanted to make at the same time.

live-wrapper is a wrapper around vmdebootstrap for producing bootable live images using Debian GNU/Linux. Accompanied by the live-tasks package in Debian, this provides the toolchain and configuration necessary for building live images using Cinnamon, GNOME, KDE, LXDE, MATE and XFCE. There is also work ongoing to add a GNUstep image to this.

Building a live image with live-wrapper is easy:

sudo apt install live-wrapper
sudo lwr

This will build a file named output.iso in the current directory containing a minimal live image. You can then test this in QEMU:

qemu-system-x86_64 -m 2G -cdrom output.iso

You can find the latest documentation for live-wrapper here, and any feedback you have is appreciated. So far, it looks like booting from CD and USB with both ISOLINUX (BIOS) and GRUB (EFI) works as expected on real hardware.
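To try the image on real hardware, it can be written to a USB stick in the usual way (replace /dev/sdX with your stick's device; this wipes the stick):

```shell
# Write the hybrid ISO to the stick and flush caches before unplugging.
sudo dd if=output.iso of=/dev/sdX bs=4M
sync
```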

The second announcement that I wanted to accompany this one is that we will be running a vmdebootstrap sprint, where we will be working on live-wrapper at the MiniDebConf in Cambridge. I will be working on installer integration while Ana Custura will be investigating bootloaders and their customisation. I’d like to thank the Debian Project and those who have given donations to it for supporting our travel and accommodation costs for this sprint.

Wednesday, 26 October 2016

FOSDEM 2017 Real-Time Communications Call for Participation - fsfe | 06:39, Wednesday, 26 October 2016

FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2017 takes place 4-5 February 2017 in Brussels, Belgium.

This email contains information about:

  • Real-Time communications dev-room and lounge,
  • speaking opportunities,
  • volunteering in the dev-room and lounge,
  • related events around FOSDEM, including the XMPP summit,
  • social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities),
  • the Planet aggregation sites for RTC blogs

Call for participation - Real Time Communications (RTC)

The Real-Time dev-room and Real-Time lounge is about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Saturday, 4 February 2017. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list.

To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities

Note: if you used FOSDEM Pentabarf before, please use the same account/username

Real-Time Communications dev-room: deadline 23:59 UTC on 17 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real-Time devroom". Link to talk submission.

Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. You are encouraged to apply to more than one dev-room and also to consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form.

You can find the full list of dev-rooms on this page and apply for a lightning talk at

Main track: the deadline for main track presentations is 23:59 UTC 31 October. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking?

FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the "Submission notes", please tell us about:

  • the purpose of your talk
  • any other talk applications (dev-rooms, lightning talks, main track)
  • availability constraints and special needs

You can use HTML and links in your bio, abstract and description.

If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes, presentations aimed at developers of free and open source software about RTC-related topics.

Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers:

  • FOSDEM provides video recording equipment and live streaming, volunteers are needed to assist in this
  • organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 4 February
  • participation in the Real-Time lounge
  • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
  • circulating this Call for Participation (text version) to other mailing lists

See the mailing list discussion for more details about volunteering.

Related events - XMPP and RTC summits

The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 2 and 3 February 2017. XMPP Summit web site - please join the mailing list for details.

We are also considering a more general RTC or telephony summit, potentially in collaboration with the XMPP summit. Please join the Free-RTC mailing list and send an email if you would be interested in participating, sponsoring or hosting such an event.

Social events and dinners

The traditional FOSDEM beer night occurs on Friday, 3 February.

On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email (text version). If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

  • All projects: Free-RTC Planet
  • XMPP: Planet Jabber
  • SIP: Planet SIP
  • SIP (Español): Planet SIP-es

Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.


For any private queries, contact us directly using the address and for any other queries please ask on the Free-RTC mailing list.

The dev-room administration team:

Tuesday, 25 October 2016

Robot Repair

David Boddie - Updates (Full Articles) | 22:41, Tuesday, 25 October 2016

A while ago I suspended my exploration of Android development, having reevaluated my priorities and taken a step back from a period of fairly intensive work on tools. Back in June, I wrote a brief overview of the status of my compiler tools then didn't really look at them until this week. The first thing I needed to do was to figure out why I could no longer build packages for my phone.

Running to Stand Still

To get myself familiar with the build process again, I decided to rebuild a simple prototype for a game that I had been working on. Everything seemed to go as planned – the tools didn't complain about anything, so I hadn't left the code in a non-working state. However, when I used adb to upload the package to the phone, the package manager rejected it with a terse and unhelpful response, which adb helpfully relayed back to the console:


If you search the Web for this error, you will quickly discover that it seems to be one of the package manager’s favourite catchphrases, covering all sorts of problems it finds with packages. Unfortunately, this makes finding useful advice about it very difficult — the Android site doesn’t seem to include anything useful about errors delivered via adb, probably because the average developer is supposed to be using Android Studio, not “messing about” at the command line. Having spent a few years working on API documentation and manuals for developers, I imagine that someone thought that information about command line tools would somehow take something away from the beautiful learning journey they had planned for new developers, so decided not to make that a priority. Perhaps my cynicism is uncalled for. In any case, trawling the Web for answers led to the usual sites where I found desperate programmers wailing and thrashing around while onlookers suggested things like cleaning their project and reading the documentation. How helpful!

To cut a long story short, a site I'd visited earlier provided a way to help diagnose the problem. Unpacking packages I'd built before the summer – using unzip because APK files are just ZIP files – and packages I had just built this week, I was then able to inspect their certificates with the following command:

openssl asn1parse -i -inform DER -in META-INF/CERT.RSA

It turned out that when signing my packages, openssl was including a field that claimed the digest used was sha1 but was using sha256 to create the digest. This was not happening in June, and it turns out that an update to the Debian openssl package in September (version 1.0.1t-1) included a change to the default message digest algorithm used. I “fixed” my problem by ensuring that my new signing certificate is created using the same digest algorithm that I use when signing packages. Still, not everything worked straight away – installing my newly created package on the phone failed with this complaint:


However, searching for this error proved much more fruitful and enlightening than for the previous one, and the solution – uninstall the old version of the application – was simple and quick.
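The fix described above, making sure the certificate's declared digest matches the one actually used, can be sketched with stock openssl commands (the file names here are hypothetical, not the ones from my build):

```shell
# Generate a throwaway self-signed certificate with an explicit digest
# (-sha256) so the declared and actual algorithms cannot drift apart.
openssl req -x509 -newkey rsa:2048 -nodes -sha256 \
    -subj "/CN=demo" -keyout demo.key -out demo.crt -days 30 2>/dev/null

# Confirm which signature algorithm the certificate actually declares.
openssl x509 -in demo.crt -noout -text | grep -m1 "Signature Algorithm"
```

The grep line should report sha256WithRSAEncryption, matching the digest requested at generation time.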

Rebooting the Robot?

It would probably be good to document a few things I did earlier in the year, though I'm even less inclined to stare at Android documentation than I was before. It would be useful to be able to make the occasional small application for my own purposes and I'm quite accustomed to the peculiarities of my own toolchain, though others might find it a little strange to get used to.

Categories: Free Software, Android

Sunday, 23 October 2016

Defending the 99%

Paul Boddie's Free Software-related blog » English | 22:16, Sunday, 23 October 2016

In the context of a fairly recent discussion of Free Software licence enforcement on the Linux Kernel Summit mailing list, where Matthew Garrett defended the right of users to enjoy the four freedoms upheld by the GPL, but where Linus Torvalds and others insisted that upstream corporate contributions are more important and that it doesn’t matter if the users get to see the source code, Jonas Öberg made the remarkable claim that…

“It’s almost as if Matthew is talking about benefits for the 1% whereas Linus is aiming for the benefit of the 99%.”

So, if we are to understand this correctly, a highly-privileged and famous software developer, whose position on the “tivoization” of hardware was that users shouldn’t expect to have any control over the software running on their purchases, is now seemingly echoing the sentiments of a billionaire monopolist who once said that users didn’t need to see the source code of the programs they use. That particular monopolist stated that the developers of his company’s software would take care of everything and that the users would “rely on us” because the mere notion of anybody else interacting with the source code was apparently “the opposite of what’s supposed to go on”.

Here, this famous software developer’s message is that corporations may in future enrich his and his colleagues’ work so that a device purchaser may, at some unspecified time in the future, get to enjoy a properly-maintained version of the Linux kernel running inside a purchase of theirs. All the purchaser needs to do is to stop agitating for their four freedom rights and instead effectively rely on them to look after everything. (Where “them” would be the upstream kernel development community incorporating supposedly-cooperative corporate representatives.)

Now, note once again that such kernel code would only appear in some future product, not in the now-obsolete or broken product that the purchaser currently has. So far, the purchaser may be without any proper insight into that product – apart from the dubious consolation of knowing that the vendor likes Linux enough to have embedded it in the product – and they may well be left without any control over what software the product actually ends up running. So much for relying on “them” to look after the pressing present-day needs of users.

And even with any mythical future product unboxed and powered by a more official form of Linux, the message from the vendor may very well be that at no point should the purchaser ever feel entitled to look inside the device at the code, to try and touch it, to modify it, improve or fix it, and they should absolutely not try and use that device as a way of learning about computing, just as the famous developer and his colleagues were able to do when they got their start in the industry. So much for relying on “them” to look after the future needs of users.

(And let us not even consider that a bunch of other code delivered in a product may end up violating other projects’ licences because those projects did not realise that they had to “make friends” with the same group of dysfunctional corporations.)

Somehow, I rather feel that Matthew Garrett is the one with more of an understanding of what it is like to be among the 99%: where you buy something that could potentially be insecure junk as soon as it is unboxed, where the vendor might even arrogantly declare that the licensing does not apply to them. And he certainly has an understanding of what the 99% actually want: to be able to do something about such things right now, rather than to be at the mercy of lazy, greedy and/or predatory corporate practices; to finally get the product with all the features you thought your money had managed to buy you in the first place.

All of this ground-level familiarity seems very much in contrast to that of some other people who presumably only “hear” via second- or third-hand accounts what the average user or purchaser supposedly experiences, whose privilege and connections will probably get “them” what they want or need without any trouble at all. Let us say that in this dispute Matthew Garrett is not the one suffering from what might be regarded as “benevolent dictator syndrome”.

The Misrepresentation of Others

And one thing Jonas managed to get taken in by was the despicable and continued misrepresentation of organisations like the Software Freedom Conservancy, their staff, and their activities. Despite the public record showing otherwise, certain participants in the discussion were only too happy to perpetuate the myth of such organisations being litigious, and to belittle those organisations’ work, in order to justify their own hostile and abusive tone towards decent, helpful and good people.

No-one has ever really been forced to choose between cooperation, encouragement, community-building and the pursuit of enforcement. Indeed, the organisations pursuing responsible enforcement strategies, in reminding people of their responsibilities, are actually encouraging companies to honour licences and to respect the people who chose such licences for their works. The aim is ultimately to integrate today’s licence violators into the community of tomorrow as honest, respectable and respectful participants.

Community-building can therefore occur even when pointing out to people what they have been doing wrong. But without any substance, licences would provide only limited powers in persuading companies to do the right thing. And the substance of licences is rooted in their legal standing, meaning that occasionally a licence-violating entity might need to be reminded that its behaviour may be scrutinised in a legal forum and that the company involved may experience negative financial and commercial effects as a result.

Reminding others that licences have substance and requiring others to observe such licences is not “force”, at least not the kind of illegitimate force that is insinuated by various factions who prefer the current situation of widespread licence violation and lip service to the Linux brand. It is the instrument through which the authors of Free Software works can be heard and respected when all other reasonable channels of redress have been shut down. And, crucially, it is the instrument through which the needs of the end-user, the purchaser, the people who do no development at all – indeed, all of the people who paid good money and who actually funded the product making use of the Free Software at risk, whose money should also be funding the development of that software – can be heard and respected, too.

I always thought that “the 1%” were the people who had “got theirs” already, the privileged, the people who promise the betterment of everybody else’s lives through things like trickle-down economics, the people who want everything to go through them so that they get to say who benefits or not. If pandering to well-financed entities for some hypothetical future pay-off while they conduct business as usual at everybody else’s expense is “for the benefit of the 99%”, then it seems to me that Jonas has “the 1%” and “the 99%” the wrong way round.

Saturday, 22 October 2016

The Domain Name System

Iain R. Learmonth | 18:15, Saturday, 22 October 2016

As I posted yesterday, we released PATHspider 1.0.0. What I didn’t talk about in that post was an event that occurred only a few hours before the release.

Everything was going fine, proofreading of the documentation was in progress, a quick git push with the documentation updates and… CI FAILED!?! Our CI doesn’t build the documentation, only tests the core code. I’m planning to release real soon and something has broken.

Starting to panic.

irl@orbiter# ./
Ran 16 tests in 0.984s


This makes no sense. Maybe I forgot to add a dependency and it’s been broken for a while? I scrutinise the dependencies list and it all looks fine.

In fairness, probably the first thing I should have done is look at the build log in Jenkins, but I’ve never had a failure that I couldn’t reproduce locally before.

It was at this point that I realised there was something screwy going on. A sigh of relief as I realise that there’s not a catastrophic test failure but now it looks like maybe there’s a problem with the University research group network, which is arguably worse.

Being focussed on getting the release ready, I didn’t realise that the Internet was falling apart. Unknown to me, a massive DDoS attack against Dyn, a major DNS host, was in progress. After a few attempts to debug the problem, I hardcoded a line into /etc/hosts, still believing it to be a localised issue.
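For anyone unfamiliar with that stopgap: an /etc/hosts entry is just an address and a hostname on one line. The values below are hypothetical, and appending to the real file needs root, so this only prints the line you would add:

```shell
# Print an /etc/hosts line that pins a hostname to a fixed address,
# bypassing DNS entirely (address and hostname are made up).
printf '%s\t%s\n' '203.0.113.10' 'api.example.org'
```

With such a line appended to /etc/hosts, the resolver library answers from the file before ever consulting DNS.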

I’ve just removed this line as the problem seems to have resolved itself for now. There are two main points I’ve taken away from this:

  • CI failure doesn’t necessarily mean that your code is broken, it can also indicate that your CI infrastructure is broken.
  • Decentralised internetwork routing is pretty worthless when the centralised name system goes down.

This afternoon I read a post by [tj] on the 57North Planet, and this is where I learnt what had really happened. He mentions multicast DNS and Namecoin as distributed name system alternatives. I’d like to add some more to that list:

Only the first of these is really a distributed solution.

My idea with ICMP Domain Name Messages is that you send an ICMP message to a webserver. Somewhere along the path, you’ll hit either a surveillance or censorship middlebox. These middleboxes can provide value by caching any DNS replies they see, so that an ICMP DNS request message is not forwarded but instead answered directly from the cache. If the middlebox cannot generate a reply, it can still forward the request to other surveillance and censorship boxes.

I think this would be a great secondary use for the NSA and GCHQ boxen on the Internet, clearly fits within the scope of “defending national security” as if DNS is down the Internet is kinda dead, plus it’d make it nice and easy to find the boxes with PATHspider.

Friday, 21 October 2016

PATHspider 1.0.0 released!

Iain R. Learmonth | 23:46, Friday, 21 October 2016

In today’s Internet we see an increasing deployment of middleboxes. While middleboxes provide in-network functionality that is necessary to keep networks manageable and economically viable, any packet mangling — whether essential for the needed functionality or accidental as an unwanted side effect — makes it more and more difficult to deploy new protocols or extensions of existing protocols.

For the evolution of the protocol stack, it is important to know which network impairments exist and potentially need to be worked around. While classical network measurement tools are often focused on absolute performance values, PATHspider performs A/B testing between two different protocols or different protocol extensions to perform controlled experiments of protocol-dependent connectivity problems as well as differential treatment.

PATHspider is a framework for performing and analyzing these measurements, while the actual A/B test can be easily customized. Late on the 21st October, we released version 1.0.0 of PATHspider and it’s ready for “production” use (whatever that means for Internet research software).

Our first real release of PATHspider was version 0.9.0 just in time for the presentation of PATHspider at the 2016 Applied Networking Research Workshop co-located with IETF 96 in Berlin earlier this year. Since this release we have made a lot of changes and I’ll talk about some of the highlights here (in no particular order):

Switch from twisted.plugin to straight.plugin

While we anticipate that some plugins may wish to use some features of Twisted, we didn’t want to have Twisted as a core dependency for PATHspider. We found that straight.plugin was not just a drop-in replacement but it simplified the way in which 3rd-party plugins could be developed and it was worth the effort for that alone.

Library functions for the Observer

PATHspider has an embedded flow-meter (think something like NetFlow but highly customisable). We found that even with the small selection of plugins that we had we were duplicating code across plugins for these customisations of the flow-meter. In this release we now provide library functions for common needs such as identifying TCP 3-way handshake completions or identifying ICMP Unreachable messages for flows.

New plugin: DSCP

We’ve added a new plugin for this release to detect breakage when using DiffServ code points to achieve differentiated services within a network.

Plugins are now subcommands

Using the subparsers feature of argparse, all plugins including 3rd-party plugins will now appear as subcommands to the PATHspider command. This makes every plugin a first-class citizen and makes PATHspider truly generalised.

We have an added benefit from this that plugins can also ask for extra arguments that are specific to the needs of the plugin, for example the DSCP plugin allows the user to select which code point to send for the experimental test.

Test Suite

PATHspider now has a test suite! As the size of the PATHspider code base grows we need to be able to make changes and have confidence that we are not breaking code that another module relies on. We have so far only achieved 54% coverage of the codebase but we hope to improve this for the next release. We have focussed on the critical portions of data collection to ensure that all the results collected by PATHspider during experiments are valid.

DNS Resolver Utility

Back when PATHspider was known as ECNSpider, it had a utility for resolving IP addresses from the Alexa top 1 million list. This utility has now been fully integrated into PATHspider and appears as a plugin to allow for repeated experiments against the same IP addresses even if the DNS resolver would have returned a different address.


Documentation

Documentation is definitely not my favourite activity, but it has to be done. PATHspider 1.0.0 now ships with documentation covering command-line usage, input/output formats and development of new plugins.

If you’d like to check out PATHspider, you can find the website at

Debian packages will be appearing shortly and will find their way into stable-backports within the next 2 weeks (hopefully).

Current development of PATHspider is supported by the European Union’s Horizon 2020 project MAMI. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 688421. The opinions expressed and arguments employed reflect only the authors’ view. The European Commission is not responsible for any use that may be made of that information.

Thursday, 20 October 2016

Starting to use the Fellowship Card

vanitasvitae's blog » englisch | 19:44, Thursday, 20 October 2016

I recently became a fellow of the FSFE and so I received a nice letter containing the FSFE fellowship OpenPGP smartcard.

After a quick visual examination I judged the card to be *damn cool*, even though the portrait orientation of the printing still confuses me when I look at it. I especially like the optimistic number of digits in the membership number field (we can do it!!!). What I don’t like is the non-https link on the bottom of the backside.

But how to use it now?

It took me some time to figure out what exactly that card is. The FSFE overview page about the fellowship card omits the information that this is an OpenPGP V2 card, which might be handy when choosing key sizes later on. I still don’t know whether the card is version 2.0 or 2.1, but for my use case it doesn’t really matter. So, what exactly is a smart-card and what CAN I actually do with it?

Well, OpenPGP is a system that allows you to encrypt and sign emails, files and other information. That was nothing new to me, but what actually was new to me is the fact that the encryption keys can be stored somewhere other than on the computer or phone. That intrigued me. So why not jump right into it and get some keys on there? – But where to plug it in?

My laptop has no smart-card slot, but there is that big ugly slit at one side that never really proved useful to me, simply because I connected most peripherals to my computer via beloved USB. It’s an ExpressCard slot. I knew that there are extension cards that fit in there, so they aren’t in the way (like e.g. a USB dongle would be). There must be smart-card readers for ExpressCards, right? Right. And since I want to read mails when I’m on a train or bus, I find it convenient that the card reader vanishes inside my laptop.

So I went online and searched for ExpressCard smart-card readers. I ended up buying a Lenovo Gemplus smart-card reader for about 25€. Then I waited. After half an hour I asked myself whether that particular device would work well with GNU/Linux (I use Debian testing on my ThinkPad), so I did some research and reassured myself that there are free drivers. Nice!

While I waited for my card to arrive, I received another letter with my admin pin for the card. Just for the record ;)

Some days later the smart-card reader arrived and I happily shoved it into the ExpressCard slot. I inserted the card and checked via

gpg --card-status

what’s on the card, but I got an error message (unfortunately I don’t remember what exactly it was) saying that there was no card available. So I did some more research and it turns out I had to install the package

pcscd
to make it work. After the installation, my smart-card was detected, so I could follow the FSFE’s tutorial on how to use the card. So I booted into a live Ubuntu that I had lying around, shut off the internet connection, realized that I needed to install pcscd here as well, reactivated the internet, installed pcscd and disconnected again. At that point I wondered what exact kind of OpenPGP card I had. Somewhere else (forgot where) I read that the fellowship card is a version 2.0 card, so I could go full 4096-bit RSA. I generated some new keys, which took forever! While I did so, I wrote some nonsense stories into a text editor to generate enough entropy. It still took about 15 minutes for each key to generate (and a lot of nonsense!). What confused me was the process of removing secret keys and adding them back later (see the tutorial).

But I did it and now I’m proud owner of a fully functional OpenPGP smart-card + reader. I had some smaller issues with an older GPG key, that I simply revoked (it was about time anyway) and now everything works as intended. I’m a little bit sad, because nearly none of my contacts uses GPG/PGP, so I had to write mails to myself in order to test the card, but that feel when that little window opens, asking me to insert my card and/or enter my pin, makes up for everything :)

My main usecase for the card became signing git commits though. Via

git commit -S -m "message"

git commits can be signed with the card (works with normal gpg keys without a card as well)! You just have to add your key’s fingerprint to the .gitconfig. Man, that really adds to the experience. Now every time I sign a commit, I feel as if my work is extremely important or I’m a top secret agent or something. I can only recommend that to everyone!
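Rather than editing .gitconfig by hand, the same wiring can be done with git config. A minimal sketch in a throwaway repository, using the key ID mentioned at the end of this post:

```shell
# Set up a throwaway repository so the example is self-contained.
tmp=$(mktemp -d)
cd "$tmp"
git init -q

# Tell git which OpenPGP key signs commits (repo-local here; add
# --global to apply it everywhere).
git config user.signingkey 0xA027DB2F3E1E118A
git config commit.gpgsign true   # sign every commit without passing -S

git config user.signingkey       # echoes the configured key ID back
```

With commit.gpgsign set, plain `git commit -m "message"` signs automatically, so the -S flag is no longer needed.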

Of course, I know that I might sound a little silly in the last paragraph, but nevertheless, I hope I could at least entertain somebody with my first experiences with the FSFE fellowship card. What I would add to the wish list for a next version of the card is a little field to note the last digits of the fingerprint of the key that’s stored on the card. That could be handy for remembering the fingerprint when there is no card reader available. Also it would be quite nice if the card were usable in combination with smart-phones, even though I don’t know how exactly that could be accomplished (maybe a USB connector on the card?)

Anyways that’s the end of my first blog post. I hope you enjoyed it. Btw: My GPG key has the ID 0xa027db2f3e1e118a :)


Choosing smartcards, readers and hardware for the Outreachy project - fsfe | 07:25, Thursday, 20 October 2016

One of the projects proposed for this round of Outreachy is the PGP / PKI Clean Room live image.

Interns, and anybody who decides to start using the project (it is already functional for command line users) need to decide about purchasing various pieces of hardware, including a smart card, a smart card reader and a suitably secure computer to run the clean room image. It may also be desirable to purchase some additional accessories, such as a hardware random number generator.

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

Choice of smart card

For standard PGP use, the OpenPGP card provides a good choice.

For X.509 use cases, such as VPN access, there are a range of choices. I recently obtained one of the SmartCard HSM cards; Card Contact were kind enough to provide me with a free sample. An interesting feature of this card is Elliptic Curve (ECC) support. More potential cards are listed on the OpenSC page here.

Choice of card reader

The technical factors are most easily compared across three options: keys stored on disk, a smartcard with a reader lacking a PIN-pad, and a smartcard reader with a PIN-pad:

  • Software: free/open (on disk); mostly free/open (reader without PIN-pad); proprietary firmware in the reader (with PIN-pad)
  • Key extraction: possible (on disk); not generally possible (either smartcard option)
  • Passphrase compromise attack vectors: hardware or software keyloggers, phishing and user error, i.e. unsophisticated attackers (on disk or reader without PIN-pad); exploiting firmware bugs over USB, i.e. only sophisticated attackers (reader with PIN-pad)
  • Other factors: no hardware required (on disk); small, USB key form factor (reader without PIN-pad); largest form factor (reader with PIN-pad)

Some are shortlisted on the GnuPG wiki and there has been recent discussion of that list on the GnuPG-users mailing list.

Choice of computer to run the clean room environment

There are a wide array of devices to choose from. Here are some principles that come to mind:

  • Prefer devices without any built-in wireless communications interfaces, or where those interfaces can be removed
  • Even better if there is no wired networking either
  • Particularly concerned users may also want to avoid devices with opaque micro-code/firmware
  • Small devices (laptops) that can be stored away easily in a locked cabinet or safe to prevent tampering
  • No hard disks required
  • Having built-in SD card readers or the ability to add them easily

SD cards and SD card readers

The SD cards are used to store the master private key, used to sign the certificates/keys on the smart cards. Multiple copies are kept.

It is a good idea to use SD cards from different vendors, preferably not manufactured in the same batch, to minimize the risk that they all fail at the same time.

For convenience, it would be desirable to use a multi-card reader, although the software experience will be much the same if lots of individual card readers or USB flash drives are used.

Other devices

One additional idea that comes to mind is a hardware random number generator (TRNG), such as the FST-01.

Can you help with ideas or donations?

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

Tuesday, 18 October 2016

My participation in the first FSFE Summit and 15th anniversary celebration

André on Free Software » English | 19:19, Tuesday, 18 October 2016

FSFE Summit, 2-4 September, BCC Berlin

From 2 to 4 September I was in Free Berlin to participate in the first FSFE Summit and in the 15th anniversary celebration. Thanks to the FSFE I’ve met interesting people, discovered surprising technologies and heard inspiring talks from people of all walks of life. It was an honour to speak about translating for Free Software.

In the BCC I attended the following:

Let’s enable people to control technology in their own language

Cryptie and I gave a talk about translating for Free Software, titled “Let’s enable people to control technology in their own language“.

FSFE Birthday party

FSFE had it’s 15th birthday party in c-base, which ensured the event to be future compatible. With other members of the movement I was declared a “FSFE local hero”, for which I’m very thankful to the FSFE.

With special thanks to Erik Albers and Cellini Bedi, who used their skills to organise a very positive, inspiring and memorable experience.

Sunday, 16 October 2016

Building Inferno Disk Images

David Boddie - Updates (Full Articles) | 21:14, Sunday, 16 October 2016

In my previous article about Inferno, I mentioned that I was planning to write some notes about installing and using Inferno in a Wiki attached to a fork/clone of the official Mercurial repository for Inferno. I got a bit distracted by other projects and felt that I couldn't write about this until I'd managed to get those projects out of the way. Now I've found the time to tidy up the scripts I wrote and write one or two things about them. The aim has been to create a hard disk image containing a suitable kernel that will boot up successfully and set up a user environment, ideally graphical but a shell is enough to begin with.

Creating a Bootable Kernel

I began with Brian Stuart's How to Build a Native Inferno Kernel for the PC, which describes how to build a kernel for booting from a floppy disk image. Really, I wanted to build a hard disk image, but this was a good place to start. I wrote some instructions based on the above guide, but using some helper scripts of my own that I put in the tools directory of the Wiki repository.

Building a kernel is quite straightforward once you realise that the process is done from within the os/pc subdirectory of the inferno-os distribution. Since Inferno comes from the branch of the Unix family tree where it is customary to build your own compilers as part of building your operating system, actually compiling a kernel is a fairly smooth process. Of course, we are relying on the GCC toolchain to bootstrap this process, which also works smoothly.

Installing a Bootable Kernel

Brian Stuart wrote another guide, Installing Native Inferno on a PC, which describes how to install a pre-built disk image from a CD-based installer. I considered building a bootable CD like this but that looked like a long diversion involving tools I had no experience with. I wondered why an installer like this was needed — my plan was to try and do everything that it would do in the hosted environment that you can run under GNU/Linux. At least, I wanted to be able to create a disk image with partitions, format the file systems and copy a pre-built kernel into it. However, I found that the disk formatting and partitioning tools behave differently in the hosted environment with disk image files to the way they behave on a real system (or one running in qemu) with real disks to manipulate.

One potential solution seemed to be to partition and format a disk image under Linux and try and mount the partitions in the hosted environment, but I failed to find a way to mount a file as a disk image using an offset into the file – the equivalent of mount with the offset option under Linux. In addition, while I could create a FAT partition under Linux and copy the kernel into it, a difference between the format mkfs.vfat generates and the one expected by the bootloader used for Plan 9 and Inferno meant that the kernel was never loaded. So, it seemed easiest to build the kernel under Linux, handle the disk partitioning and formatting under Inferno running in qemu, and somehow copy the files into the ready-made partitions.
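As an aside on the Linux side of this: the offset needed for mount's offset option can be read straight out of the image's MBR partition table. The sketch below is a generic illustration of that (it assumes 512-byte sectors and a standard MBR layout; it is not one of the scripts from the Wiki):

```python
import struct

SECTOR_SIZE = 512

def partition_offsets(mbr: bytes):
    """Return (start_byte, length_bytes) for each used MBR partition entry.

    The four 16-byte partition entries start at byte 446; within each
    entry, the start LBA and sector count are little-endian 32-bit
    fields at bytes 8 and 12 respectively.
    """
    if len(mbr) < 512 or mbr[510:512] != b"\x55\xaa":
        raise ValueError("not a valid MBR")
    offsets = []
    for i in range(4):
        start_lba, num_sectors = struct.unpack_from(
            "<II", mbr, 446 + 16 * i + 8)
        if num_sectors:
            offsets.append((start_lba * SECTOR_SIZE,
                            num_sectors * SECTOR_SIZE))
    return offsets
```

The first element of each tuple is the number you would pass to mount, e.g. mount -o loop,offset=1048576 disk.img /mnt for a partition starting at LBA 2048.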

I briefly experimented with creating a second hard disk image so that Inferno in qemu could just copy files from there into the installation image but that didn't go as planned – the supported file systems either couldn't handle large partitions (FAT16) or had other issues.

In the end, given the problems with mounting sections of disk images in hosted Inferno, the installation scripts do some chopping up and stitching together of disk images. I'm not very satisfied with this solution but it seems to work. The installation process now looks roughly like this:

  1. In Linux:
    1. Build a hosted environment. This gives us the Inferno compiler suite that we run under Linux and tools that we use in the hosted environment.
    2. Build a minimal kernel and running environment for the PC.
    3. Boot the kernel in qemu, supplying a hard disk image so that a script in the running environment can install a master boot record, partition the disk and format the FAT and kfs subpartitions. Exit qemu.
    4. Extract the FAT and kfs subpartitions from the disk image and copy them into the hosted environment where they can be manipulated.
    5. Build a kernel with more features for the final installation and copy it into the hosted environment.
  2. In the hosted Inferno environment:
    1. Mount the subpartition image files.
    2. Copy the fully-featured kernel into the FAT subpartition.
    3. Install all the standard files from an Inferno distribution into the kfs subpartition.
  3. In Linux:
    1. Stitch the subpartitions back together to make a final disk image.
    2. Boot the disk image in qemu and test it.
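The chopping and stitching in steps 1.4 and 3.1 amounts to copying byte ranges out of the image and back in again (what dd does with skip/seek and count). A hypothetical helper in this spirit — the function names and offsets are illustrative, not taken from the actual scripts:

```python
def extract_region(image_path, out_path, offset, length):
    """Copy `length` bytes starting at `offset` out of a disk image
    into a standalone subpartition file."""
    with open(image_path, "rb") as img, open(out_path, "wb") as out:
        img.seek(offset)
        remaining = length
        while remaining:
            chunk = img.read(min(remaining, 1 << 20))
            if not chunk:
                raise IOError("image truncated")
            out.write(chunk)
            remaining -= len(chunk)

def stitch_region(image_path, part_path, offset):
    """Write a subpartition file back into the disk image at `offset`,
    overwriting the bytes that were there."""
    with open(image_path, "r+b") as img, open(part_path, "rb") as part:
        img.seek(offset)
        for chunk in iter(lambda: part.read(1 << 20), b""):
            img.write(chunk)
```

Extracting a subpartition, mounting and populating it in hosted Inferno, then stitching it back at the same offset leaves the surrounding boot record and partition table untouched.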

I've put some basic instructions for creating a fairly simple disk image in the Wiki mentioned above. In theory, it should build a 256 MB disk image containing a 32-bit x86 kernel and user space, ready to boot in qemu. I dug around in the configuration files and enabled support for the rtl8139 network card so that the installation can potentially do something useful, but couldn't get VGA support working. That's something that will have to wait for a future revision of the tools.

What's Next?

I wandered away from Inferno for a while and came back to publish these scripts. I'm interested in seeing if it's possible to bring up a natively-running desktop environment if I can get some hints about how to initialise VGA support properly. Since Plan 9's desktop can be run on native hardware, it would be a shame if Inferno couldn't do the same. In the meantime there are always other projects to look into.

Category: Free Software

Nexus 9 lockscreen rotate

Norbert Tretkowski | 04:00, Sunday, 16 October 2016

Since upgrading my Nexus 9 tablet to the latest snapshot of CyanogenMod 13.0, rotating the lockscreen no longer worked. Rotating the homescreen worked fine. I found the solution on XDA Developers in the Nexus 6 forum.

Just add these lines to the /system/build.prop file:
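(The exact lines did not survive in this copy of the post; the fix commonly cited in the XDA thread for this problem is the single property below — treat it as an assumption rather than the author's verbatim snippet.)

```
lockscreen.rot_override=true
```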


After a reboot, rotating the lockscreen works fine again.
