Planet Fellowship (en)

Wednesday, 25 November 2015

Introducing elfpatch, for safely patching ELF binaries - fsfe | 10:30, Wednesday, 25 November 2015

I recently had a problem with a program behaving badly. As a developer familiar with open source, my normal strategy in this case would be to find the source and debug or patch it. Although I was familiar with the source code, I didn't have it on hand and would have faced significant inconvenience getting it patched, recompiled and deployed into the runtime environment.

Conveniently, the program had not been stripped of symbol names, and it was running on Solaris. This made it possible for me to whip up a quick dtrace script to print a log message as each function was entered and exited, along with the return values. This gave a precise record of the runtime code path. Within a few minutes, I could see that just changing the return values of a couple of function calls would resolve the problem.

On the x86 platform, functions set their return value by putting the value in the EAX register. This is a trivial thing to express in assembly language and there are many web-based x86 assemblers that will allow you to enter the instructions in a web-form and get back hexadecimal code instantly. I used the bvi utility to cut and paste the hex code into a copy of the binary and verify the solution.
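As a sketch of what those web assemblers hand back: forcing a function to return 1 takes just six bytes of 32-bit x86 machine code (the exact immediate depends, of course, on the return value you need):

```python
# "mov eax, 1" followed by "ret", as emitted by any x86 assembler:
#   B8 01 00 00 00   mov eax, 0x1   (B8 = MOV EAX, imm32; operand little-endian)
#   C3               ret
patch = bytes.fromhex("b801000000c3")
print(patch.hex())  # b801000000c3
```

Pasting those bytes over the start of the offending function makes it return 1 unconditionally, which is exactly the kind of change bvi lets you verify by hand.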

All I needed was a convenient way to apply these changes to all the related binary files, with a low risk of error. Furthermore, it needed to be clear for a third-party to inspect the way the code was being changed and verify that it was done correctly and that no other unintended changes were introduced at the same time.

Finding or writing a script to apply the changes seemed like the obvious solution. A quick search found many libraries and scripts for reading ELF binary files, but none offered a patching capability. Tools like objdump on Linux and elfedit on Solaris show the raw ELF data, such as virtual addresses, which must then be converted into file offsets by hand, a tedious exercise when many binaries need to be patched.

My initial thought was to develop a concise C/C++ program using libelf to parse the ELF headers and then calculate the locations for the patches. While searching for an example, I came across pyelftools, and it occurred to me that a Python solution might be quicker to write and easier to review.

elfpatch (on GitHub) was born. As input, it takes a text file with a list of symbols and a hexadecimal representation of the patch for each symbol. It then reads one or more binary files and either checks for the presence of the symbols (read-only mode) or writes out the patches. It can optionally back up each binary before changing it.
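The calculation at the heart of such a script is small: a symbol's virtual address falls inside some section, and the matching file offset is st_value - sh_addr + sh_offset. A minimal sketch of that logic (the function names and example section values here are invented for illustration; elfpatch itself reads the real headers via pyelftools):

```python
def vaddr_to_offset(vaddr, sections):
    """Map a virtual address to a file offset via section headers.

    `sections` is a list of (sh_addr, sh_size, sh_offset) tuples as
    found in the ELF section header table.
    """
    for sh_addr, sh_size, sh_offset in sections:
        if sh_addr <= vaddr < sh_addr + sh_size:
            return vaddr - sh_addr + sh_offset
    raise ValueError("address %#x not found in any section" % vaddr)

def apply_patch(image, vaddr, patch, sections):
    """Overwrite the bytes at the file offset corresponding to vaddr."""
    off = vaddr_to_offset(vaddr, sections)
    image[off:off + len(patch)] = patch
    return off

# Hypothetical .text section: loaded at 0x400000, 0x1000 bytes long,
# stored at file offset 0x200 in the binary.
sections = [(0x400000, 0x1000, 0x200)]
image = bytearray(0x2000)
off = apply_patch(image, 0x400010, bytes.fromhex("b801000000c3"), sections)
print(hex(off))  # 0x210
```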

Drone strikes coming to Molenbeek? - fsfe | 07:28, Wednesday, 25 November 2015

The St Denis siege last week and the Brussels lockdown this week provide all of us in Europe with an opportunity to reflect on why over ten thousand refugees per day have been coming here from the Middle East, especially Syria.

At this moment, French warplanes and American drones are striking cities and villages in Syria, killing whole families in their effort to shortcut the justice system and execute a small number of very bad people without putting them on trial. Some observers estimate that air strikes and drones kill twenty innocent people for every one bad guy. Women, children, the sick, the elderly and even pets are among the most vulnerable. The leak of the Collateral Murder video simultaneously brought WikiLeaks into the public eye and demonstrated how the crew of a US attack helicopter had butchered unarmed civilians and journalists as if they were playing a video game.

Just imagine if the French president had sent fighter jets to St Denis and Molenbeek instead of law enforcement. After all, how are the terrorists there any better or worse than those in Syria - don't they deserve the same fate? Or what if Obama had offered to help out with a few drone strikes on suburban Brussels? After all, if the drones are such a credible solution for Syria's future, why won't they solve Brussels' (perceived) problems too?

If the aerial bombing "solution" had been attempted in a western country, it would have led to chaos. Half the population of Paris and Brussels would find themselves camping at the migrant camps in Calais, hoping to sneak into the UK in the back of a truck.

Over a hundred years ago, Russian leaders proposed a treaty agreeing never to drop bombs from balloons and the US and UK happily signed it. Sadly, the treaty wasn't updated after the invention of fighter jets, attack helicopters, rockets, inter-continental ballistic missiles, satellites and drones.

The reality is that asymmetric warfare hasn't worked and never will work in the Middle East, and for as long as it continues, experts warn, Europe may continue to face the consequences: refugees, terrorists and those who sympathize with their methods. By definition, these people can easily move from place to place, and it is ordinary citizens and small businesses who will suffer far more under lockdowns and other security measures.

In our modern world, people often look to technology for shortcuts. The use of drones in the middle east is a shortcut from a country that spent enormous money on ground invasions of Iraq and Afghanistan and doesn't want to do it again. Unfortunately, technological shortcuts can't always replace the role played by real human beings, whether it is bringing law and order to the streets or in any other domain.

Aerial bombardment - by warplane or by drone - carries an implicitly racist message: that the people targeted by these attacks are not equivalent to the rest of us, that they can't benefit from the normal procedures of justice, that they don't have rights, that they are not innocent until proven guilty and that they are expendable.

The French police deserve significant credit for the relatively low loss of life in the St Denis siege. If their methods and results were replicated in Syria and other middle eastern hotspots, would it be more likely to improve the situation in the long term than drone strikes?

Friday, 20 November 2015

Databases of Muslims and homosexuals? - fsfe | 18:02, Friday, 20 November 2015

One US presidential candidate has said a lot of things recently, but his comments about making a database of Muslims may qualify as the most extreme.

Of course, if he really wanted to, somebody with this mindset could find all the Muslims anyway. A quick and easy solution would involve tracing all the mobile phone signals around mosques on a Friday. Mr would-be President could compel Facebook and other social networks to disclose lists of users who identify as Muslim.

Databases are a dangerous side-effect of gay marriage

In 2014 there was significant discussion about Brendan Eich's donation to the campaign against gay marriage.

One fact that never ranked very highly in the debate at the time is that not all gay people actually support gay marriage. Even where these marriages are permitted, not everybody who can marry now is choosing to do so.

The reasons for this are varied, but one key point that has often been missed is that there are two routes to marriage equality: one involves permitting gay couples to visit the register office and fill in a form just as other couples do. The other route to equality is to remove all the legal artifacts around marriage altogether.

When the government does issue a marriage certificate, it is not long before other organizations start asking for confirmation of the marriage. Everybody from banks to letting agents and Facebook wants to know about it. Many companies outsource that data into cloud CRM systems such as Salesforce. Before you know it, there are numerous databases that somebody could mine to make a list of confirmed homosexuals.

Of course, if everybody in the world was going to live happily ever after none of this would be a problem. But the reality is different.

While discrimination - whether against Muslims or homosexuals - is prohibited and can even lead to criminal sanctions in some countries, this attitude is not shared globally. Once gay people have their marriage status documented in a frequent flyer or hotel loyalty program, or in the public part of their Facebook profile, there are various countries where they are going to be at much higher risk of prosecution or persecution. The equality to marry in the US or UK may mean they have less equality when choosing travel destinations.

Those places are not as obscure as you might think: even in Australia, regarded as a civilized and laid-back western democracy, the state of Tasmania fought tooth and nail to retain the criminalization of virtually all homosexual conduct until 1997, when the combined actions of the federal government and High Court compelled the state to reform. Despite the changes, people with some of the most offensive attitudes are able to achieve and retain positions of significant authority. The same Australian senator who infamously linked gay marriage with bestiality has successfully used his position to set up a Senate inquiry as a platform for conspiracy theories linking halal certification with terrorism.

There are many ways a database can fall into the wrong hands

Ironically, one of the most valuable lessons about the risk of registering Muslims and homosexuals was an injustice against the very same Tea Party supporters a certain presidential candidate is trying to woo. In 2013, it was revealed that IRS employees had been applying a different process to discriminate against groups with "Tea Party" in their name.

It is not hard to imagine other types of rogue or misinformed behavior by people in positions of authority when they are presented with information that they don't actually need about somebody's religion or sexuality.

Beyond this type of rogue behavior by individual officials and departments, there is also the more sinister possibility that somebody truly unpleasant is elected into power and can immediately use things like a Muslim database, surveillance data or the marriage database for a program of systematic discrimination. France had a close shave with this scenario in the 2002 presidential election, when Jean-Marie Le Pen, who has at least six convictions for racism or inciting racial hatred, made it to the final round in a two-candidate run-off with Jacques Chirac.

The best data security

The best way to be safe - wherever you go, both now and in the future - is not to have data about yourself in any database. When filling out forms, think need-to-know. If some company doesn't really need your personal mobile number, your date of birth, your religion or your marriage status, don't give it to them.

Converting an existing installation to LUKS using luksipc

Losca | 14:14, Friday, 20 November 2015

This is a burst of notes that I wrote in an e-mail in June when asked about it, and I won't have any better steps now, since I don't remember even that much any more. I figured it's better to have it out than not.

So... if you want to use LUKS In-Place Conversion Tool, the notes below on converting a shipped-with-Ubuntu Dell XPS 13 Developer Edition (2015 Intel Broadwell model) may help you. There were a couple of small learnings to be had...
The page linux/luksipc/ itself is good and without errors, although funnily enough it uses reiserfs as an example. It was only a bit unclear to me why I saved the initial_keyfile.bin, since it was then removed in the next step (I guess it's for the case where you want a recovery file hidden somewhere in case you forget the passphrase).

To use the tool I booted from a 14.04.2 LTS USB live image and operated there, including downloading and compiling luksipc in the live session. The exact reason for resizing before running luksipc was a bit unclear to me at first, so I simply went ahead and resized the main rootfs partition, leaving unallocated space in the partition table.

Then finally I ran ./luksipc -d /dev/sda4 etc.

I realized I wanted /boot on an unencrypted partition so the kernel and initrd could be loaded from GRUB before entering LUKS unlocking. I couldn't resize the LUKS partition any more since it was already encrypted... So I resized what I think was the small, empty DIAGS partition (maybe used for some system diagnostics, I don't know), or possibly the next one, the actual recovery partition from which the pre-installed Ubuntu can be reinstalled. Naturally I had some problems, because the vfatresize tool didn't seem to do what I wanted it to, and gparted simply crashed when I first tried to use it for the same job. Anyway, once I was done freeing some extra space somewhere, I used the remaining 350MB for /boot, to which I copied the rootfs's /boot contents.

After adding the passphrase in LUKS I had everything encrypted and decryptable, but obviously I could only access it from a live session via manual cryptsetup luksOpen and mount /dev/mapper/myroot commands. I needed to configure GRUB, and I needed to do it with grub-efi-amd64, which was a bit unfamiliar to me. There's also grub-efi-amd64-signed, which I have installed now, but I'm not sure if it was required for the configuration. Secure Boot is not enabled by default in the BIOS, so maybe it isn't needed.

I did the GRUB installation - I think inside the rootfs chroot, where I also mounted /dev/sda6 as /boot (inside the chroot); i.e. I bind-mounted dev and sys under the chroot (from outside the chroot) and ran mount -t proc proc proc too. There was a lot of trial and error, so I surely also tried from outside the chroot, in the live session, using parameters to point to the mounted rootfs's directories...

I definitely needed to install cryptsetup etc. inside the encrypted rootfs with apt, and I remember spending some time verifying that they made it into the initrd correctly after I ran mkinitramfs/update-initramfs inside the chroot.

In the end I had GRUB asking for the passphrase correctly at boot. Obviously I had edited the rootfs's /etc/fstab to include the new /boot partition; I changed the / entry to mount /dev/mapper/myroot (ext4, errors=remount-ro), kept /boot/efi coming from /dev/sda1 and so on. I had also added "myroot /dev/sda4 none luks" to /etc/crypttab, and I have GRUB_CMDLINE_LINUX="cryptdevice=/dev/sda4:myroot root=/dev/mapper/myroot" in /etc/default/grub.
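Pulled together in one place, the encrypted-root configuration described above amounts to three small edits (the device names are specific to this machine, and the fstab field values here are my reconstruction rather than a verbatim copy):

```
# /etc/crypttab
myroot /dev/sda4 none luks

# /etc/fstab (the / entry; /boot and /boot/efi get their own lines)
/dev/mapper/myroot  /  ext4  errors=remount-ro  0  1

# /etc/default/grub
GRUB_CMDLINE_LINUX="cryptdevice=/dev/sda4:myroot root=/dev/mapper/myroot"
```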

The only thing I did save from the live session was the original partition table if I want to revert.

So the original was:

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 500118192 sectors, 238.5 GiB
Logical sector size: 512 bytes
First usable sector is 34, last usable sector is 500118158
Partitions will be aligned on 2048-sector boundaries
Total free space is 6765 sectors (3.3 MiB)
Number  Start (sector)    End (sector)  Size       Code  Name
1            2048         1026047   500.0 MiB   EF00  EFI system partition
2         1026048         1107967   40.0 MiB    FFFF  Basic data partition
3         1107968         7399423   3.0 GiB     0700  Basic data partition
4         7399424       467013631   219.2 GiB   8300
5       467017728       500117503   15.8 GiB    8200

And I now have:

Number  Start (sector)    End (sector)  Size       Code  Name
1            2048         1026047   500.0 MiB   EF00  EFI system partition
2         1026048         1107967   40.0 MiB    FFFF  Basic data partition
3         1832960         7399423   2.7 GiB     0700  Basic data partition
4         7399424       467013631   219.2 GiB   8300
5       467017728       500117503   15.8 GiB    8200
6         1107968         1832959   354.0 MiB   8300

So it seems I did not edit DIAGS after all (it was also originally just 40MB) but instead did something with the recovery partition while preserving its contents. It's a FAT partition, so maybe I was able to resize it somehow.

The 16GB partition is the default swap partition. I have not encrypted it, at least not yet; I practically never hit swap in normal use with 8GB of RAM anyway.

If you go this route, good luck! :D

It’s NotABug …

Marcus's Blog | 07:39, Friday, 20 November 2015

As Gitorious recently faded away, we have been searching for a Git hosting solution for our FSFE Localgroup Zurich. We evaluated several options, including self-hosting. The latter we tested with software called GitBucket, but it seems to require a lot of resources; at least it does not run well on my Atom-based server.

Luckily we discovered a hosting service that respects user freedom, called NotABug. It is based on a Free Software solution called gogs and has a lot of features, e.g. the ability to create and manage organizations. I have already started to migrate our repositories; some of them are public and some private.

If you are interested in setting up a manageable Git repository, please give gogs a try, either by installing it on your own server or by joining a hosting solution like NotABug. Hopefully it will be possible to install a Git hosting service on a Freedombox as well in the near future.

Wednesday, 18 November 2015

Improving DruCall and JSCommunicator user interface - fsfe | 17:45, Wednesday, 18 November 2015

DruCall is one of the easiest ways to get up and running with WebRTC voice and video calling on your own web site or blog. It is based on 100% open source and 100% open standards - no binary browser plugins and no lock-in to a specific service provider or vendor.

On Debian or Ubuntu, just running a command such as

# apt-get install -t jessie-backports drupal7-mod-drucall

will install Drupal, Apache, MySQL, JSCommunicator, JsSIP and all the other JavaScript library packages and module dependencies for DruCall itself.

The user interface

Most of my experience is in server-side development, including things like the powerful SIP over WebSocket implementation in the reSIProcate SIP proxy repro.

In creating DruCall, I have simply concentrated on those areas related to configuring and bringing up the WebSocket connection and creating the authentication tokens for the call.

Those things provide a firm foundation for the module, but it would be nice to improve the way it is presented and optimize the integration with other Drupal features. This is where the projects (both DruCall and JSCommunicator) would really benefit from feedback and contributions from people who know Drupal and web design in much more detail.

Benefits for collaboration

If anybody wants to collaborate on either or both of these projects, I'd be happy to offer access to a pre-configured SIP WebSocket server in my lab for more convenient testing. The DruCall source code is a hosted project and the JSCommunicator source code is on GitHub.

When you get to the stage where you want to run your own SIP WebSocket server as well then free community support can also be provided through the repro-user mailing list. The free, online RTC Quick Start Guide gives a very comprehensive overview of everything you need to do to run your own WebRTC SIP infrastructure.

Tuesday, 17 November 2015

imip-agent: Integrating Calendaring with E-Mail

Paul Boddie's Free Software-related blog » English | 21:23, Tuesday, 17 November 2015

Longer ago than I had, until now, realised, I wrote an article about my ongoing exploration of groupware and, specifically, calendaring. As I noted in that article, I felt that a broader range of options may be needed for those wishing to expand their use of communications technologies beyond plain e-mail and into the structured exchange of other kinds of information, whilst retaining and building upon that e-mail infrastructure.

And I noted that more often than not, people wanting to increase their ambitions in this regard are often confronted with the prospect of abandoning what they already use successfully, instead being obliged to adopt a complete package of technologies, some of which they may not even need. While proprietary software and service vendors might pursue such strategies of persuasion – getting the big sale or the big contract – it is baffling that Free Software projects might also put potential users on the spot in the same way. After all, Free Software is very much about choice and control.

So, as I spelled out in that previous article, there may be some mileage in trying to offer extensions to existing infrastructure so that people can increase their communications capabilities whilst retaining the technologies they already know. And in some depth (and at some length), I described what a mail-centred calendaring solution might need to provide in order to address most people’s needs. Finally, I promised to make my own efforts available in this area so that anyone remotely interested in the topic might get some benefit from it.

Last month, I started a very brief exchange on a Debian- and groupware-related mailing list about such matters, just to see what people interested in groupware projects might think, also attempting to find out what they use for calendaring themselves. (Unfortunately, there doesn’t seem to be so many non-product-specific, public and open places to discuss matters such as this one. Search mail software lists for calendaring discussions and you may even get to see hostility towards anyone mentioning groupware.) Ultimately, to keep the discussion concrete, I decided to announce informally what I have been working on.

Introducing imip-agent

Calendaring and distributed scheduling can be achieved over e-mail using the iMIP standard. My work relies on this standard to function, providing programs that are integrated into mail transfer agents (MTAs) and act as calendaring agents. Thus, I decided to call the project imip-agent.
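For the curious, iMIP (RFC 6047) simply carries iCalendar scheduling objects as MIME parts of ordinary e-mail messages. Stripped of its MIME headers, a meeting request handled by such an agent looks roughly like this (all addresses and times invented for illustration):

```
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//imip-sketch//EN
METHOD:REQUEST
BEGIN:VEVENT
UID:20151117T210000Z-1234@example.org
DTSTAMP:20151117T210000Z
DTSTART:20151124T100000Z
DTEND:20151124T110000Z
SUMMARY:Project meeting
ORGANIZER:mailto:organiser@example.org
ATTENDEE;PARTSTAT=NEEDS-ACTION;RSVP=TRUE:mailto:attendee@example.org
END:VEVENT
END:VCALENDAR
```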

Initially, and as noted previously, my interest in such matters started with the mail handling functionality of Kolab and the component called Wallace that is responsible for responding to requests sent to certain e-mail addresses. Meanwhile, Kolab provided (and maybe still provides) a rather inelegant way of preparing “free/busy” information describing the availability of calendar system participants: a daemon program would run periodically, scanning mailboxes for events stored in special folders, and generate completely new manifests of each user’s schedule. (This may have changed since I last looked at Kolab in any serious manner.)

It occurred to me that the exchange of messages between participants in a scheduling transaction should be sufficient to maintain a live record of each participant’s availability, and that some experimentation would demonstrate the feasibility or infeasibility of such an approach. I had already looked into how existing architectures prepare and consume free/busy information, and felt that I had enumerated the relevant essentials for a viable calendaring architecture based on e-mail exchanges alone.
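The free/busy information in question is itself just another iCalendar component, which is one reason maintaining it incrementally from the message exchange seems feasible: each request or reply passing through the agent adds or removes a period like the one below (values invented):

```
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//imip-sketch//EN
BEGIN:VFREEBUSY
UID:fb-20151117@example.org
DTSTAMP:20151117T210000Z
DTSTART:20151123T000000Z
DTEND:20151130T000000Z
ORGANIZER:mailto:organiser@example.org
FREEBUSY;FBTYPE=BUSY:20151124T100000Z/20151124T110000Z
END:VFREEBUSY
END:VCALENDAR
```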

And so I set about learning about mail handling programs and expanding my existing knowledge of calendar-related standards. Fortunately, my work trying to get Kolab configured in a nice way didn’t go entirely to waste after all, although I also wanted to support different MTAs and not use convoluted Postfix-specific integration mechanisms, and so had to read up about more convenient and approachable mechanisms that other systems use to integrate with mail pipelines without trying hard to be all “high performance” about it. And I also wanted to make it possible for people to adopt a solution that didn’t force them to roll out LDAP in a scary “cross your fingers and run this script” fashion, even if many organisations already rely on LDAP and are comfortable with it.

The resulting description of this work is now available on the Web, and an attempt has been made to document the many different aspects of development, deployment and integration. Naturally, it is a work in progress and not a finished product: one step on the road to hopefully becoming a dependable solution involves packaging for Free Software distributions, which would result in the effort currently required to configure the software being minimised for the person setting it up. But at the same time, the mechanisms for integration with other systems (such as mail, mailboxes and Web servers) still need to be documented so that such work may have a chance to proceed.

Why Bother?

For various reasons unrelated to the work itself, it has taken a bit longer to get to this point than previously anticipated. But the act of making it available is, for me, a very necessary part of what I regard as a contribution to a kind of conversation about what kinds of software and solutions might work for certain groups of people, touching upon topics like how such solutions might be developed and realised. For instance, the handling of calendar data, although already supported by various Python libraries, hasn’t really led to similar Python-based solutions being developed as far as I can tell. Perhaps my contribution can act as an encouragement there.

There are, of course, various Python-based CalDAV servers, but I regard the projects around them to be somewhat opaque, and I perceive a common tendency amongst them to provide something resembling a product that covers some specific needs but then leaves those people deploying that product with numerous open-ended questions about how they might address related needs. I also wonder whether there should be more library sharing between these projects for more than basic data interpretation, but I know that this is quite difficult to achieve in practice, even if these projects should be largely functionally identical.

With such things forming the background of Free Software groupware, I can understand why some organisations are pitching complete solutions that aim to do many things. But here, in certain regards, I perceive a lack of opportunity for that conversation I mentioned above: there’s either a monologue with the insinuation that some parties know better than others (or worse, that they have the magic formula to total market domination) or there’s a dialogue with one side not really extending the courtesy of taking the other side’s views or contributions seriously.

And it is clear that those wanting to use such solutions should also be part of a conversation about what, in the end, should work best for them. Now, it is possible that organisations might see the benefit in the incremental approach to improving their systems and services that imip-agent offers. But it is also possible that some organisations will contrast imip-agent with a selection of all-in-one solutions, possibly dangled in front of them on special terms by vendors who just want to “close the deal”, and that in the comparison shopping exercise that ensues, they will buy into the sales pitch of one of those vendors.

Without a concerted education exercise, that latter group of potential users are never likely to be a serious participant in our conversations (although I would hope that they might ultimately see sense), but the former group of potential users should be most welcome to participate in our conversations and thus enrich the wealth of choices and options that we should be offering. They would, I hope, realise that it is not about what they can get out of other people for nothing (or next to nothing), but instead what expertise and guidance they can contribute so that they and others can benefit from a sustainable and durable solution that, above all else, serves them and their needs and interests.

What Next?

Some people might point out that calendaring is only a small portion of what groupware is, if the latter term can even be somewhat accurately defined. This is indeed true. I would like to think that Free Software projects in other domains might enter the picture here to offer a compelling, broader groupware alternative. For instance, despite the apparent focus on chat and real-time communications, one doesn’t hear too much about one of the most popular groupware technologies on the Web today: the wiki. When used effectively, and when the dated rhetoric about wikis being equivalent to anarchy has been silenced by demonstrating effective collaborative editing and content management techniques, a wiki can be a potent tool for collaboration and collective information management.

It also turns out that Free Software calendar clients could do with some improvement. Their deficiencies may be a product of an unfortunate but fashionable fascination with proprietary mail, scheduling and social networking services amongst the community of people who use and develop Free Software. Once again, even though imip-agent seeks to provide only basic functionality as a calendar client, I hope that such functionality may inform or, at the very least, inspire developers to improve existing programs and bring them up to the expected levels of functionality.

Alongside this work, I have other things I want (and need) to be looking at, but I will happily entertain enquiries about how it might be developed further or deployed. It is, after all, Free Software, and given sufficient interest, it should be developed and improved in a collaborative fashion. There are some future plans for it that I take rather seriously, but with the privileges or freedoms granted in the licence, there is nothing stopping it from having a life of its own from now on.

So, if you are interested in this kind of solution and want to know more about it, take a look at the imip-agent site. If nothing else, I hope that it reminds you of the importance of independently-developed solutions for communication and the value in retaining control of the software and systems you rely on for that communication.

Monday, 16 November 2015

An actual user reports on his use of my parallel processing library

Paul Boddie's Free Software-related blog » English | 22:34, Monday, 16 November 2015

A long time ago, when lots of people complained all the time about how it was seemingly “impossible” to write Python programs that use more than one CPU or CPU core, and given that Unix has always (at least in accessible, recorded history) allowed processes to “fork” and thus create new ones that run the same code, I decided to write a library that makes such mechanisms more accessible within Python programs. I wasn’t the only one thinking about this: another rather similar library called processing emerged, and the Python core development community adopted that one and dropped it into the Python standard library, renaming it multiprocessing.

Now, there are a few differences between multiprocessing and my own library, pprocess, largely because my objectives may have been slightly different. First of all, I had been inspired by renewed interest in the communicating sequential processes paradigm of parallel processing, which came to prominence around the Transputer system and Occam programming language, becoming visible once more with languages like Erlang, and so I aimed to support channels between processes as one of my priorities. Secondly, and following from the previous objective, I wasn’t trying to make multithreading or the kind of “shared everything at your own risk” model easier or even possible. Thirdly, I didn’t care about proprietary operating systems whose support for process-forking was at that time deficient, and presumably still is.
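The channel model can be illustrated with nothing more than the Unix primitives it builds on. This is not pprocess's actual API, just a sketch of the underlying fork-plus-pipe mechanism (the function names are invented): a child's result travels back to the parent over a pipe, never through shared memory.

```python
import os
import pickle

def start(func, arg):
    """Fork a worker process; return a file object acting as a result channel."""
    r, w = os.pipe()
    if os.fork() == 0:
        # Child: compute, pickle the result into the pipe, then exit
        # without running any parent clean-up code.
        os.close(r)
        with os.fdopen(w, "wb") as channel:
            pickle.dump(func(arg), channel)
        os._exit(0)
    # Parent: keep only the reading end of the pipe.
    os.close(w)
    return os.fdopen(r, "rb")

channel = start(lambda n: n * n, 12)
result = pickle.load(channel)   # blocks until the child has sent its result
os.wait()                       # reap the finished child
print(result)                   # 144
```

The parent and worker share nothing: the only way a value comes back is through the channel, which is the essence of the communicating-processes style described above.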

(The way that the deficiencies of Microsoft Windows, either inherently or in the way it is commonly deployed, dictates development priorities in a Free Software project is yet another maddening thing about Python core development that has increased my distance from the whole endeavour over the years, but that’s a rant for another time.)

User Stories

Despite pprocess making it into Debian all by itself – or rather, with the efforts of people who must have liked it enough to package it – I’ve only occasionally had correspondence about it, much of it regarding the package falling out of Debian and being supported in a specialised Debian variant instead. For all the projects I have produced, I just assume now that the very few people using them are either able to fix any problems themselves or are happy enough with the behaviour of the code that there just isn’t any reason for them to switch to something else.

Recently, however, I heard from Kai Staats, who is using Python for some genetic programming work; he had found the pprocess and multiprocessing libraries and was trying to decide which one would work best for him. Perhaps as a matter of chance, multiprocessing would produce pickle errors that he found somewhat frustrating to understand, whereas pprocess would not: this may have been a consequence of pprocess not really trying very hard to provide some of the features that multiprocessing does. Certainly, multiprocessing attempts to provide some fairly nice features, but maybe they fail under certain overly-demanding circumstances. Kai also noted some issues with random number generators that have recently come to prominence elsewhere, interestingly enough.

Some correspondence between us then ensued, and we both gained a better understanding of how pprocess could be applied to his problem, along with some insights into how the documentation for pprocess might be improved. Eventually, success was achieved, and this article serves as a kind of brief response to his conclusion of our discussions. He notes that the multiprocess model inhibits the sharing of global variables, almost as a kind of protection against “bad things”, and that the processes must explicitly communicate their results with each other. I must admit, being so close to my own work that the peculiarities of its use were easily assumed and overlooked, that pprocess really does turn a few things about Python programs on its head.

Two Tales of Concurrency

If you choose to use threads for concurrency in Python, you’ll get the “shared everything at your own risk” model, and you can have your threads modifying global variables as they see fit. Since CPython employs a “global interpreter lock”, the modifications will succeed when considered in isolation – you shouldn’t see things getting corrupted at the lowest level (pointers or references, say) like you might if doing such things unsafely in a systems programming language – but without further measures, such modifications may end up causing inconsistencies in the global data as threads change things in an uncoordinated way.
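As a quick illustration of that shared model (a sketch of my own, not taken from any of the libraries discussed here), threads really do operate on one common set of globals:

```python
import threading

results = {}

def record(i):
    # Every thread sees and mutates the very same global dictionary.
    results[i] = i * i

threads = [threading.Thread(target=record, args=(n,)) for n in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results.items()))  # each thread's write landed in the one shared object
```

Here the writes happen to touch distinct keys, so no coordination is needed; updates to the same location would require a lock or similar measures, as noted above.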

Meanwhile, pprocess also lets you change global variables at will. However, other processes just don’t see those changes and continue happily on their way with the old values of those globals.
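This sketch doesn't use pprocess itself; it calls the underlying POSIX "fork" mechanism directly, which is what pprocess builds upon, but it shows the same semantics:

```python
import os

message = "original"

pid = os.fork()  # the child receives a copy-on-write snapshot of memory
if pid == 0:
    # Child process: this rebinding only affects the child's own copy
    # of the interpreter's memory and is never seen by the parent.
    message = "changed in the child"
    os._exit(0)

os.waitpid(pid, 0)
print(message)  # the parent still sees "original"
```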

Mutability is a key concept in Python. For anyone learning Python, it is introduced fairly early on in the form of the distinction between lists and tuples, perhaps as a curiosity at first. But then, when dictionaries are introduced to the newcomer, the notion of mutability becomes somewhat more important because dictionary keys must be immutable (or, at least, the computed hash value of keys must remain the same if sanity is to prevail): try and use a list or even a dictionary as a key and Python will complain. But for many Python programmers, it is the convenience of passing objects into functions and seeing them mutated after the function has completed, whether the object is something as transparent as a list or whether it is an instance of some exotic class, that serves as a reminder of the utility of mutability.
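For the newcomer's benefit, the two faces of mutability mentioned here look like this:

```python
d = {}
d[(1, 2)] = "tuples are hashable, so they work as dictionary keys"

try:
    d[[1, 2]] = "lists do not"
except TypeError as error:
    print(error)  # unhashable type: 'list'

def annotate(items):
    items.append("added inside the function")  # mutates the caller's object

record = ["original"]
annotate(record)
print(record)  # the caller sees the mutation after the call completes
```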

But again, pprocess diverges from this behaviour: pass an otherwise mutable object into a created process as an argument to a parallel function and, while the object will indeed get mutated inside the created “child” process, the “parent” process will see no change to that object upon seeing the function apparently complete. The solution is to explicitly return – or send, depending on the mechanisms chosen – changes to the “parent” if those changes are to be recorded outside the “child”.
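Again sticking with plain "fork" for illustration (pprocess's own API provides higher-level channels and result collection for this), the pattern of explicitly sending changes back looks roughly like this:

```python
import json
import os

def work(data):
    data.append("result")  # this mutation happens in the child's copy only
    return data

payload = ["input"]
read_end, write_end = os.pipe()

pid = os.fork()
if pid == 0:
    # Child: do the work, then explicitly send the outcome to the parent.
    os.close(read_end)
    os.write(write_end, json.dumps(work(payload)).encode())
    os._exit(0)

# Parent: the original object is untouched; the changes arrive as a message.
os.close(write_end)
with os.fdopen(read_end) as pipe:
    received = json.loads(pipe.read())
os.waitpid(pid, 0)

print(payload)   # ['input']
print(received)  # ['input', 'result']
```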

Awkward Analogies

Kai mentioned to me the notion of a portal or gateway through which data must be transferred. For a deeper thought experiment, I would extend this analogy by suggesting that the portal is situated within a mirror, and that the mirror portrays an alternative reality that happens to be the same as our own. Now, as far as the person looking into the mirror is concerned, everything on the “other side” in the mirror image merely reflects the state of reality on their own side, and initially, when the portal through the mirror is created, this is indeed the case.

But as things start to occur independently on the “other side”, with things changing, moving around, and so on, the original observer remains oblivious to those changes and keeps seeing the state on their own side in the mirror image, believing that nothing has actually changed over there. Meanwhile, an observer on the other side of the portal sees their changes in their own mirror image. They believe that their view reflects reality not only for themselves but for the initial observer as well. It is only when data is exchanged via the portal (or, in the case of pprocess, returned from a parallel function or sent via a channel) that the surprise of previously-unseen data arriving at one of our observers occurs.

Expectations and Opportunities

It may seem very wrong to contradict Python’s semantics in this way, and for all I know the multiprocessing library may try and do clever things to support the normal semantics behind the scenes (although it would be quite tricky to achieve), but anyone familiar with the “fork” system call in other languages would recognise and probably accept the semantic “discontinuity”. One thing I’ve learned is that it isn’t always possible to assume that people who are motivated to use such software will happen to share the same motivations as I and other library developers may have in writing that software.

However, it has also occurred to me that the behavioural differences caused in programs by pprocess could offer some opportunities. For example, programs can implement transactional behaviour by creating new processes which may or may not return data depending on whether the transactions performed by the new processes succeed or fail. Of course, it is possible to implement such transactional behaviour in Python already with some discipline, but the underlying copy-on-write semantics that allow “fork” and pprocess to function make it much easier and arguably more reliable.
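A minimal sketch of such transactional behaviour, again using "fork" directly rather than pprocess's API: the child either commits by writing a result, or aborts, leaving the parent's state untouched either way.

```python
import os
import pickle

def transactional(func, *args):
    # Run func in a forked child; accept its result only if it completes
    # without raising. A failed child cannot corrupt the parent's state,
    # because it only ever worked on its own copy-on-write snapshot.
    read_end, write_end = os.pipe()
    pid = os.fork()
    if pid == 0:
        os.close(read_end)
        try:
            result = func(*args)
            os.write(write_end, pickle.dumps(result))
            os._exit(0)   # commit
        except Exception:
            os._exit(1)   # abort: nothing useful was written

    os.close(write_end)
    with os.fdopen(read_end, "rb") as pipe:
        data = pipe.read()
    _, status = os.waitpid(pid, 0)
    if os.WEXITSTATUS(status) != 0:
        raise RuntimeError("transaction aborted")
    return pickle.loads(data)

print(transactional(sum, [1, 2, 3]))  # 6
```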

With CPython’s scalability constantly being questioned, despite attempts to provide improved concurrency features (in Python 3, at least), and with a certain amount of enthusiasm in some circles for functional programming and the ability to eliminate side effects in parts of Python programs, maybe such broken expectations point the way to evolved forms of Python that possibly work better for certain kinds of applications or systems.

So, it turns out that user feedback doesn’t have to be about bug reports and support questions and nothing else: it can really get you thinking about larger issues, questioning long-held assumptions, and even be a motivation to look at certain topics once again. But for now, I hope that Kai’s programs keep producing correct data as they scale across the numerous cores of the cluster he is using. We can learn new things about my software another time!

Quick start using Blender for video editing - fsfe | 18:53, Monday, 16 November 2015

Updated 2015-11-16 for WebM

Although it is mostly known for animation, Blender includes a non-linear video editing system that is available in all the current stable versions of Debian, Ubuntu and Fedora.

Here are some screenshots showing how to start editing a video of a talk from a conference.

In this case, there are two input files:

  • A video file from a DSLR camera, including an audio stream from a microphone on the camera
  • A separate audio file with sound captured by a lapel microphone attached to the speaker's smartphone. This is a much better quality sound and we would like this to replace the sound included in the video file.

Open Blender and choose the video editing mode

Launch Blender and choose the video sequence editor from the pull down menu at the top of the window:

Now you should see all the video sequence editor controls:

Setup the properties for your project

Click the context menu under the strip editor panel and change the panel to a Properties panel:

The video file we are playing with is 720p, so it seems reasonable to use 720p for the output too. Change that here:

The input file is 25fps, so we need to use exactly the same frame rate for the output; otherwise you will either observe the video playing at the wrong speed or there will be a CPU-intensive conversion that degrades the quality. Also check that the resolution_percentage setting under the picture dimensions is 100%:

Now specify output to PNG files. Later we will combine them into a WebM file with a script. Specify the directory where the files will be placed and use the # placeholder to specify the number of digits to use to embed the frame number in the filename:

Now your basic rendering properties are set. When you want to generate the output file, come back to this panel and use the Animation button at the top.

Editing the video

Use the context menu to change the properties panel back to the strip view panel:

Add the video file:

and then right click the video strip (the lower strip) to highlight it and then add a transform strip:

Audio waveform

Right click the audio strip to highlight it and then go to the properties on the right hand side and click to show the waveform:

Rendering length

By default, Blender assumes you want to render 250 frames of output. Looking in the properties to the right of the audio or video strip you can see the actual number of frames. Put that value in the box at the bottom of the window where it says 250:

Enable AV-sync

Also at the bottom of the window is a control to enable AV-sync. If your audio and video are not in sync when you preview, you need to set this AV-sync option and also make sure you set the frame rate correctly in the properties:

Add the other sound strip

Now add the other sound file that was recorded using the lapel microphone:

Enable the waveform display for that sound strip too; this will allow you to align the sound strips precisely:

You will need to listen to the strips to make an estimate of the time difference. Use this estimate to set the "start frame" in the properties for your audio strip; it will be a negative value if the audio strip starts before the video. You can then zoom the strip panel to show about 3 to 5 seconds of sound and try to align the peaks. An easy way to do this is to look for applause at the end of the audio strips; the applause generates a large peak that is easily visible.
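The conversion from a measured time offset to a start frame is simple arithmetic. A small sketch (the 1.48-second offset is an invented figure for illustration):

```python
FRAME_RATE = 25  # must match the frame rate set in the project properties

def start_frame(offset_seconds):
    # Convert a measured time offset between the two audio strips into
    # Blender's "start frame" value; negative if the lapel audio starts
    # before the video.
    return round(offset_seconds * FRAME_RATE)

print(start_frame(-1.48))  # -37
print(start_frame(2.0))    # 50
```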

Once you have synced the audio, you can play the track and you should not hear any echo. You can then silence the audio track from the camera by right-clicking it, looking in the properties to the right and changing the volume to 0.

Make any transforms you require

For example, to zoom in on the speaker, right click the transform strip (3rd from the bottom) and then in the panel on the right, click to enable "Uniform Scale" and then set the scale factor as required:

Render the video output to PNG

Click the context menu under the Curves panel and choose Properties again.

Click the Animation button to generate a sequence of PNG files for each frame.

Render the audio output

On the Properties panel, click the Audio button near the top. Choose a filename for the generated audio file.

Look on the bottom left-hand side of the window for the audio file settings, change it to the ogg container and Vorbis codec:

Ensure the filename has a .ogg extension

Now look at the top right-hand corner of the window for the Mixdown button. Click it and wait for Blender to generate the audio file.

Combine the PNG files and audio file into a WebM video file

You will need to have a few command line tools installed for manipulating the files from scripts. Install them using the package manager, for example, on a Debian or Ubuntu system:

# apt-get install mjpegtools vpx-tools mkvtoolnix

Now create a script like the following:

#!/bin/bash -e

# Set these to match the project properties
# (example values - adjust for your own project):
PNG_DIR=frames
FRAME_RATE=25
AUDIO_FILE=audio.ogg
YUV_FILE=video.yuv
WEBM_FILE=video.webm

# Set this to the bitrate you desire (vpxenc takes kbits/sec):
TARGET_BITRATE=2000
NUM_FRAMES=`find ${PNG_DIR} -type f | wc -l`

png2yuv -I p -f $FRAME_RATE -b 1 -n $NUM_FRAMES \
    -j ${PNG_DIR}/%08d.png > ${YUV_FILE}

vpxenc --good --cpu-used=0 --auto-alt-ref=1 \
   --lag-in-frames=16 --end-usage=vbr --passes=2 \
   --threads=2 --target-bitrate=${TARGET_BITRATE} \
   -o ${WEBM_FILE}-noaudio ${YUV_FILE}

rm ${YUV_FILE}

mkvmerge -o ${WEBM_FILE} -w ${WEBM_FILE}-noaudio ${AUDIO_FILE}

rm ${WEBM_FILE}-noaudio

Next steps

There are plenty of more comprehensive tutorials, including some videos on Youtube, explaining how to do more advanced things like fading in and out or zooming and panning dynamically at different points in the video.

If the lighting is not good (faces too dark, for example), you can right click the video strip, go to the properties panel on the right hand side and click Modifiers, Add Strip Modifier and then select "Color Balance". Use the Lift, Gamma and Gain sliders to adjust the shadows, midtones and highlights respectively.

Information desks on Autumn events

FSFE Fellowship Vienna » English | 14:25, Monday, 16 November 2015

Unfortunately I haven't managed to report on our most recent activities until now. We not only had an information stall at the huge 2015 Game City fair in Vienna but also at the Veganmania (aka Move) summer festival in Graz. Another opportunity to inform the public was an information desk at the Big Brother Awards gala in the Rabenhof Theater, organised by the data protection association Quintessenz. Yesterday we even participated in the Linux Presentation Day in Vienna, again together with Spielend Programmieren. This last event was just a spontaneous first try, so only a few people found their way to the venue. Next time we need to promote the event in advance; this time I myself was only invited one or two days beforehand.

Quintessenz in particular seems to be a good cooperation partner for future ventures, since they organise the Linux Wochen Wien. Privacy obviously has a strong connection to free software, so overall they are dedicated to software freedom too.

The Autumn events brought us a lot of attention from many people and we did give away lots of leaflets. We need to re-stock information material soon in order to be prepared for further events.

Sunday, 15 November 2015

I want to trust

softmetz' anglophone Free Software blog | 16:31, Sunday, 15 November 2015

Recently terrible things happened in Paris and Beirut when soldiers of IS killed or badly injured several hundred human beings and brought fear over our society. Enough was, is and will be written about those events, so I’ll stop here. I want to talk about something that has already jumped the shark a long time Read more »

Saturday, 14 November 2015

Migrating data from Windows phones - fsfe | 21:18, Saturday, 14 November 2015

Many of the people who have bought Windows phones seek relief sooner or later. Sometimes this comes about due to peer pressure or the feeling of isolation, in other cases it is the frustration of the user interface or the realization that they can't run cool apps like Lumicall.

Frequently, the user has been given the phone as a complimentary upgrade when extending a contract without perceiving the time, effort and potential cost involved in getting their data out of the phone, especially if they never owned a smartphone before.

When a Windows phone user does decide to cut their losses, they are usually looking to a friend or colleague with technical expertise to help them out. Personally, I'm not sure that anybody I would regard as an IT expert has ever had a Windows phone though, meaning that many experts are probably also going to be scratching their heads when somebody asks them for help. Therefore, I've put together this brief guide to help deal with these phones more expediently when they are encountered.

The Windows phones have really bad support for things like CalDAV and WebDAV so don't get your hopes up about using such methods to backup the data to any arbitrary server. Searching online you can find some hacks that involve creating a Google or iCloud account in the phone and then modifying the advanced settings to send the data to an arbitrary server. These techniques vary a lot between specific versions of the Windows Phone OS and so the techniques I've described below are probably easier.

Identify the Windows Live / Hotmail account

The user may not remember or realize that a Microsoft account was created when they first obtained the phone. It may have been created for them by the phone, a friend or the salesperson in the phone shop.

Look in the settings (Accounts) to find the account ID / email address. If the user hasn't been using this account, they may not recognize it and probably won't know the password for it. It is essential to try and obtain (or reset) the password before going any further, so start with the password recovery process. Microsoft may insist on sending a password reset email to some other email address that the user has previously provided or linked to their phone.

Extracting data from the phone

In many cases, the easiest way to extract the data is to download it from Microsoft rather than extracting it from the phone. Even if the user doesn't realize it, the data is probably all replicated in Outlook anyway, and so there is no further loss of privacy by logging in there to extract it.

Set up an IMAP mail client

An IMAP client will be used to download the user's emails (from the account they may never have used) and SMS.

Install Mozilla Thunderbird (IceDove on Debian), GNOME Evolution or a similar program on the user's PC.

Configure the IMAP mail client to connect to the account. Some clients, like Thunderbird, will automatically set up all the server details when you enter the account ID. For manual account setup, the details here may help.

Email backup

If the user was not using the account ID for email correspondence, there may not be a lot of mail in it. There may be some billing receipts or other things that are worth keeping though.

Create a new folder (or set of folders) in the user's preferred email account and drag and drop the messages from the Inbox to the new folder(s).

SMS backup

SMS backup can also be done through Outlook. It is slightly more complicated than email backup, but similar.

  • In the Outlook email index page, look for the settings button and click Manage Categories.
  • Enable the Contacts and Photos categories with a tick in each of them.
  • Go back to the main Inbox page and look for the categories section on the bottom left-hand side of the screen, under the folder list. Click the Contacts category.
  • The page may now appear blank. That is normal.
  • On the top right-hand corner of the page, click the Arrange menu and choose Conversation.
  • All the SMS messages should now appear on the screen.
  • Under the mail folders list on the left-hand side of the page, click to create a new folder with a name like SMS.
  • Select all the SMS messages and look for the option to move them to a folder. Send them to the SMS folder you created.
  • Now use the IMAP mail client to locate the SMS folder and copy everything from there to a new folder in the user's preferred mail server or local disk.

Contacts backup

On the top left-hand corner of the email page, there is a chooser to select other applications. Select People.

You should now see a list of all the user's contacts. Look for the option to export them to Outlook and other programs. This will export them as a CSV file.

You can now import the CSV file into another application. GNOME Evolution has an import wizard with an option for Outlook file format. To load the contacts into a WebDAV address book, such as DAViCal, configure the address book in Evolution and then select it as the destination when running the CSV import wizard.

WARNING: beware of using the Mozilla Thunderbird address book with contact data from mobile devices and other sources. It can't handle more than two email addresses per contact and this can lead to silent data loss if contacts are not fully saved.

Calendar backup

Now go to the application chooser again and select the calendar application. Microsoft provides instructions to extract the calendar, summarised here:

  • Look for the Share button at the top somewhere and click it.
  • On the left-hand side of the page, click Get a link
  • On the right-hand side, choose Show event details to ensure you get a full calendar and then click Create underneath it.
  • Look for the link with a webcals prefix. If you are downloading with a tool like wget, change the scheme prefix to https. Fetch the file from this link and save it with an ics extension.
  • Inspect the ics calendar file to make sure it looks like real iCalendar data.
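The webcals-to-https substitution mentioned above is mechanical; a small sketch (the link shown is a hypothetical example, not a real calendar URL):

```python
import urllib.parse

def webcals_to_https(url):
    # A webcals link is just an https iCalendar subscription link, so
    # tools like wget need the scheme swapped before they can fetch it.
    parts = urllib.parse.urlsplit(url)
    if parts.scheme == "webcals":
        parts = parts._replace(scheme="https")
    return urllib.parse.urlunsplit(parts)

link = "webcals://example.com/calendar/abcd1234/calendar.ics"  # hypothetical
print(webcals_to_https(link))  # https://example.com/calendar/abcd1234/calendar.ics
```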

You can now import the ics file into another application. GNOME Evolution has an import wizard with an option for iCalendar file format. To load the calendar entries into a CalDAV server, such as DAViCal, configure the calendar server in Evolution and then select it as the destination when running the import wizard.

Backup the user's photos, videos and other data files

Hopefully you will be able to do this step without going through Outlook. Try enabling the MTP or PTP mode in the phone and attach it to the computer using the USB cable. Hopefully the computer will recognize it in at least one of those modes.

Use the computer's file manager or another tool to simply backup the entire directory structure.

Reset the phone to factory defaults

Once the user has their hands on a real phone, it is likely they will never want to look at that Windows phone again. It is time to erase the Windows phone; there is no going back.

Go to Settings, then About, and tap the factory reset option. It is important to do this before obliterating the account; otherwise there are scenarios where you could be locked out of the phone and unable to erase it.

Erasing may take some time. The phone will reboot and then display an animation of some gears spinning around for a few minutes and then reboot again. Wait for it to completely erase.

Permanently close the Microsoft account

Keeping track of multiple accounts and other services is tedious and frustrating for most people, especially with services that try to force the user to receive email in different places.

You can help eliminate user fatigue by helping them permanently close the account so they never have to worry about it again.

Follow the instructions on the Microsoft site.

At some point it will suggest certain actions you should take before closing the account; most can be ignored. One thing you should do is remove the link between the account ID and the phone. It is a good idea to do this, as otherwise you may have problems erasing the device if you haven't already done so. Before completely closing the account, also verify that the factory reset of the phone completed successfully.

Dispose of the Windows phone safely

If you can identify any faults with the phone, the user may be able to return it under the terms of the warranty. Some phone companies may allow the user to exchange it for something more desirable when it fails under warranty.

It may be tempting to sell the phone to a complete stranger on eBay or install a custom ROM on it. In practice, neither option may be worth the time and effort involved. You may be tempted to put it beyond use so nobody else will suffer with it, but please try to do so in a way that is respectful of the environment.

Putting the data into a new phone

Prepare the new phone with a suitable ROM such as Replicant or CyanogenMod.

Install the F-Droid app on the new phone.

From F-droid, install the DAVdroid app. DAVdroid will allow you to quickly sync the new phone against any arbitrary CalDAV and WebDAV server to populate it with the user's calendar and contact / address book data.

Now is a good time to install other interesting apps like Lumicall, Conversations and K-9 Mail.

Friday, 13 November 2015

How much video RAM for a 4k monitor? - fsfe | 17:49, Friday, 13 November 2015

I previously wrote about my experience moving to a 4K monitor.

I've been relatively happy with it except for one thing: I found that 1GB of video RAM simply isn't sufficient for a stable system. This wasn't immediately obvious, as everything appeared to work in the beginning, but over time the problems became apparent.

I'm not using it for gaming or 3D rendering. My typical desktop involves several virtual workspaces with terminal windows, web browsers, a mail client, IRC and Eclipse. Sometimes I use vlc for playing media files.

Using the nvidia-settings tool, I observed that the Used Dedicated memory statistic would frequently reach the maximum, 1024MB. On a few occasions, X crashed with errors indicating it was out of memory.

After observing these problems, I put another card with 4GB video RAM into the system and I've observed it using between 1024 MB and 1300 MB at any one time. This leaves me feeling that people with only basic expectations for their desktop should aim for at least 2GB video RAM for 4k.

That said, I've continued to enjoy many benefits of computing with a 4K monitor. In addition to those mentioned in my previous blog, here are some things that were easier for me with 4K:

  • Using gitk to look through many commits on the master branch of reSIProcate and cherry-pick some things to the resiprocate-1.9 branch. gitk only used half the screen and I was able to use the right hand side of the screen to look at the code in an editor in more detail.
  • Simultaneously monitoring logs from two Android devices running Lumicall and a repro SIP proxy server in three terminal windows arranged side by side, up to 125 lines of text in each.
  • Using WebRTC sites in the Mozilla browser while having a browser console window, source code and SIP proxy logs all open at the same time, none of them overlapping.

You can do much of this with a pair of monitors, but there is something quite nice about doing it all on a single 4K screen.

Building teams around SIP and XMPP in Debian and Fedora - fsfe | 11:52, Friday, 13 November 2015

I've recently started a discussion on the Fedora devel mailing list about building a team to collaborate on RTC services (SIP, XMPP, TURN and WebRTC) for the Fedora community. We already started a similar team within Debian.

This isn't only for developers or package maintainers; virtually anybody with a keen interest in free software can help. Testing different softphones and putting screenshots on the wiki can help a lot (the Debian wiki already provides some examples). The site is not intended to be an advertisement for my web design skills, and anybody with expertise in design would be very welcome to contribute.

Teamwork in this endeavor can provide many benefits:

  • Sharing knowledge about RTC, for use within our communities and also for other communities using the free and open technology
  • Engaging with collaborators who are not involved in packaging teams, for example, the Debian RTC team has also had interest from upstream developers who are not on other Debian or Fedora mailing lists
  • Minimizing the effort required by the system administrators (the DSA team in Debian or Infrastructure team in Fedora) by triaging user problems and planning and testing any proposed changes.
  • Freeing up developer time to work on new features, such as the exciting work I'm doing on telepathy-resiprocate.

There are also many opportunities for project work that go beyond traditional packaging responsibilities. Wouldn't it be interesting to find ways to integrate the publish/subscribe capabilities of SIP and XMPP with the Fedmsg infrastructure?

Bringing XMPP to Fedora

We recently launched XMPP for the Debian community and it would not be hard to replicate it for Fedora users. Sure, some people are happy running their own XMPP servers. There are just as many people who prefer to focus on development and have something like XMPP provided for them.

With the strong emphasis on building a roster/buddy-list, XMPP can also help to facilitate long-term engagement in the community and users may identify more closely with the project.

I haven't offered XMPP on the trial service because it would be inconvenient for people to migrate buddy lists to the official domain when the service is officially adopted.

Collaboration across communities

There are various other places where we can share knowledge between teams in different communities and people are invited to participate.

The Free-RTC mailing list is a great place to discuss free RTC strategies and initiatives.

The XMPP operators mailing list provides a forum to discuss operational issues in the XMPP space, such as keeping out the spammers.

Would you like to participate?

Please consider joining some of the mailing lists I've mentioned, replying to the thread on the Fedora devel mailing list, volunteering for the Debian RTC team or emailing me personally.

Sunday, 15 November 2015

My Report from FSCONS ’15

softmetz' anglophone Free Software blog | 16:31, Sunday, 15 November 2015

My daughter and I have just arrived back from Gothenburg, where we attended FSCONS, the Free Society Conference and Nordic Summit. We went there to run FSFE’s booth and enjoy the wonderful city. It was a joyful weekend for the two of us. We arrived in Gothenburg on Thursday and spent the Friday at Alfons Åberg Read more »

Wednesday, 11 November 2015

Android Phone Review: The OnePlus X

Seravo | 18:05, Wednesday, 11 November 2015

OnePlus is a mobile phone manufacturer famous for selling the OnePlus One with pre-installed CyanogenMod, instead of a bloated custom Android as most manufacturers do. Their newest model OnePlus X was released on November 5th 2015 and after a few days of use, it seems to live up to its promises.

OnePlus X


Most Chinese brands suffer from a lack of finish and low quality. OnePlus is something completely different; everything about OnePlus seems different, like a completely new generation. The OnePlus website is exciting. Their marketing is based almost entirely on social media – and in a good way! The device we ordered from Shenzhen, China’s Silicon Valley, arrived in just a few business days. The packaging had a premium feel to it and what was inside matched the expectations set by the hype on their website.

Packaging includes a silicon case for added protection


Excellent craftsmanship, with shiny glass-like panels front and back connected by a metallic bezel. A vivid display and a responsive, fast Android 5.1.1 experience inside. Dual SIM capability, a great camera and 3 GB of RAM are just some of the high-end technical features. There is no need to repeat those details, as they are already well presented on the original site. All we need to say is that those promises are true and the quality is unexpectedly good. It definitely competes with other high-end mobile phones in the 500-700 euro range. And at what price is the OnePlus X available? Only 269 € in Europe, including taxes and customs.

The only drawbacks we noted relate to problems in Android itself. Everything OnePlus has added and customized is done with good judgement and is a step forward.

Very fast and snappy UI



Inside is Android 5.1.1


For Finns like us it was also delightful to notice that their website is available in Finnish, and that the language is actually flawless rather than an amateur translation. Naturally Android on the device also offers Finnish as an option.

Have any of you already experienced the OnePlus X? Feel free to share your thoughts in the comments!

Call for sessions at the FSFE assembly during 32C3

Don't Panic » English Planet | 11:32, Wednesday, 11 November 2015

From December 27-30 2015, the 32nd Chaos Communication Congress (32C3) will take place in the Congress Center Hamburg, where FSFE is happy to host an “assembly”. Such assemblies are community-organised spaces inside the congress, and the FSFE assembly …

Tuesday, 10 November 2015

Aggregating tasks from multiple issue trackers - fsfe | 12:37, Tuesday, 10 November 2015

After my experiments with the iCalendar format at the beginning of 2015, including Bugzilla feeds from Fedora and reSIProcate, aggregating tasks from the Debian Maintainer Dashboard, Github issue lists and even unresolved Nagios alerts, I decided this was fertile ground for a GSoC student.

In my initial experiments, I tried using the Mozilla Lightning plugin (iceowl-extension on Debian/Ubuntu) and GNOME Evolution's task manager. Setting up the different feeds in these clients is not hard, but they have some rough edges. For example, Mozilla Lightning doesn't tell you if a feed is not responding. This can be troublesome: if the Nagios server goes down, no alerts are visible, so you assume all is fine.

To take things further, Iain Learmonth and I proposed a GSoC project for a student to experiment with the concept. Harsh Daftary from Mumbai, India was selected to work on it. Over the summer, he developed a web application to pull together issue, task and calendar feeds from different sources and render them as a single web page.

Harsh presented his work at DebConf15 in Heidelberg, Germany; the video is available here. The source code is in a Github repository. The code is currently running as a service; although it is not specific to Debian, it is probably helpful for any developer who works with more than one issue tracker.


Monday, 09 November 2015

RTC: announcing XMPP, SIP presence and more - fsfe | 07:57, Monday, 09 November 2015

Announced 7 November 2015 on the debian-devel-announce mailing list.

The Debian Project now has an XMPP service available to all Debian Developers. Your email identity can be used as your XMPP address.

The SIP service has also been upgraded and now supports presence. SIP and XMPP presence, rosters and messaging are not currently integrated.

The Lumicall app has been improved to enable rapid setup for SIP users.

This announcement concludes the maintenance window on the RTC services. All services are now running on jessie (using packages from jessie-backports).

XMPP and SIP enable a whole new world of real-time multimedia communications possibilities: video/webcam, VoIP, chat messaging, desktop sharing and distributed, federated communication are the most common use cases.

Details about how to get started and get support are explained in the User Guide in the Debian wiki. As it is a wiki, you are completely welcome to help it evolve.

Several of the people involved in the RTC team were also at the Cambridge mini-DebConf (7-8 November).

The password for all these real time communication services can be set via the LDAP control panel. Please note that this password needs to be different to any of your other existing passwords. Please use a strong password and please keep it secure.

Some of the infrastructure, like the TURN server, is shared by clients of both SIP and XMPP. Please configure your XMPP and SIP clients to use the TURN server for audio or video streaming to work most reliably through NAT.

A key feature of both our XMPP and SIP services is that they support federated inter-connectivity with other domains. Please try it. The FedRTC service for Fedora developers is one example of another SIP service that supports federation. For details of how it works and how we establish trust between domains, please see the RTC Quick Start Guide. Please reach out to other communities you are involved with and help them consider enabling SIP and XMPP federation of their own communities/domains: as Metcalfe's law suggests, each extra person or community who embraces open standards like SIP and XMPP has far more than just an incremental impact on the value of these standards and makes them more pervasive.

If you are keen to support and collaborate on the wider use of Free RTC technology, please consider joining the Free RTC mailing list sponsored by FSF Europe. There will also be a dedicated debian-rtc list for discussion of these technologies within Debian and derivatives.

This service has been made possible by the efforts of the DSA team in the original SIP+WebRTC project and the more recent jessie upgrades and XMPP project. Real-time communications systems have specific expectations for network latency, connectivity, authentication schemes and various other things. Therefore, it is a great endorsement of the caliber of the team and the quality of the systems they have in place that they have been able to host this largely within their existing framework for Debian services. Feedback from the DSA team has also been helpful in improving the upstream software and packaging to make them convenient for system administrators everywhere.

Special thanks to Peter Palfrader and Luca Filipozzi from the DSA team, Matthew Wild from the Prosody XMPP server project, Scott Godin from the reSIProcate project, Juliana Louback for her contributions to JSCommunicator during GSoC 2014, Iain Learmonth for helping get the RTC team up and running, Enrico Tassi, Sergei Golovan and Victor Seva for the Prosody and prosody-modules packaging and also the Debian backports team, especially Alexander Wirt, helping us ensure that rapidly evolving packages like those used in RTC are available on a stable Debian system.

Sunday, 08 November 2015

Problems observed during Cambridge mini-DebConf RTC demo - fsfe | 09:16, Sunday, 08 November 2015

A few problems were observed during the demo of RTC services at the Cambridge mini-DebConf yesterday. As it turns out, many of them are already documented and solutions are available for some of them.

Multiple concurrent SIP registrations

I had made some test calls on Friday using and I still had the site open in another tab in another browser window. When people tried to call me during the demo, both tabs were actually ringing but only one was visible.

When a SIP client registers, the SIP registration server sends it a list of all other concurrent registrations in the response message. We simply need to extend JSCommunicator to inspect the response message and give some visual feedback about other concurrent registrations. Issue #69. SIP also provides a mechanism to clear concurrent registrations and that could be made available with a button or configuration option too (Issue #9).

Callee hears ringing before connectivity checks completed

The second issue during the WebRTC demo was that the callee (myself) was alerted about the call before the ICE checks had been performed. The optimal procedure for a slick user experience is to run the connectivity checks before alerting the callee. If the connectivity checks fail, the callee should never be alerted with a ringing sound and would never know somebody had tried to call. The caller would be told that the call could not be attempted and encouraged to try again on another wifi network.

RFC 5245 recommends that connectivity checks should be done first but it is not mandatory. One reason this is problematic with WebRTC is the need to display the pop-up asking the user for permission to share their microphone and webcam: the popup must appear before connectivity checks can commence. This has been discussed in the JsSIP issue tracker.

Non-WebRTC softphones, such as Lumicall, do the connectivity checks before alerting the callee.

Dealing with UDP blocking

It appears the corporate wifi network in the venue was blocking UDP packets, so the connectivity checks could never complete, not even using a TURN server to relay the packets.

People trying to use the service on home wifi networks, in small offices or with mobile tethering should not have this problem, as these networks generally permit UDP by default.

Some corporate networks, student accommodation and wifi networks in some larger hotels have blocked UDP and in these cases, additional effort must be made to get through the firewall.

The TURN server we are running for also supports a TLS transport but it simply isn't configured yet. At the time we originally launched the WebRTC service in 2013, the browsers didn't support TURN over TLS at all but now they do. This is probably the biggest problem encountered during the demo but it does not require any code change to resolve this, just configuration, so a solution is well within reach.
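
Enabling TLS on the TURN server really is configuration-only. As an illustration, assuming coturn, a widely deployed TURN server (the server actually in use may be different, and the file paths are placeholders), the relevant lines in turnserver.conf would be:

```
# Accept TURN over TLS in addition to plain UDP/TCP
tls-listening-port=5349

# Server certificate and private key (placeholder paths)
cert=/etc/ssl/turn/fullchain.pem
pkey=/etc/ssl/turn/privkey.pem
```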

During the demo, we worked around the issue by turning off the wifi on my laptop and using tethering with a 4G mobile network. All the calls made successfully during the demo used the tethering solution.

Add a connectivity check timeout

The ICE connectivity checks appeared to keep running for a long time. Usually, if UDP is not blocked, the ICE checks would complete in less than two seconds. Therefore, the JavaScript needs to set a timeout between two and five seconds when it starts the checks and give the user a helpful warning about their network problems if the timeout is exceeded. Issue #73 in JSCommunicator.

While these lengthy connectivity checks appear disappointing, it is worth remembering that this is an improvement over the first generation of softphones: none of them made these checks at all; they would simply tell the user the call had been answered, but audio and video would work only in one direction or not at all.

Microphone issues

One of the users calling into the demo, Juliana, was visible on the screen but we couldn't hear her. This was a local audio hardware issue with her laptop or headset. It would be useful if the JavaScript could provide visual feedback when it detects a voice (issue #74) and even better, integrating with the sound settings so that the user can see if the microphone is muted or the gain is very low (issue #75).

Thanks to participants in the demo

I'd like to thank all the participants in the demo, including Juliana Louback who called us from New York, Laura Arjona who called us from Madrid, Daniel Silverstone who called from about three meters away in the front row and Iain Learmonth who helped co-ordinate the test calls over IRC.

Thanks are also due to Steve McIntyre, the local Debian community, ARM and the other sponsors for making another mini-DebConf in the UK possible this year.

Thursday, 05 November 2015

Fixing black screen after login in Ubuntu 14.04

Seravo | 08:42, Thursday, 05 November 2015

How to fix black screen after login in Ubuntu 14.04?

(Ohje suomeksi lopussa.)

A lot of Linux support customers have contacted us recently asking us to fix their Ubuntu laptops and workstations that suddenly stopped working. The symptom is that after entering the username and password at the login screen, they are unable to get in. Instead they see a flickering screen that goes all black for a while and then returns to the login screen.

This problem is caused by an update that didn’t install cleanly and left the graphical desktop environment in a broken state.

The fix is to open a text console by pressing Ctrl+Alt+F1 and then logging in in text mode. Once in, issue these commands to complete the upgrade successfully:

sudo dpkg --configure -a
sudo apt-get update
sudo apt-get upgrade -y

Finally, run sudo reboot. After Ubuntu restarts, you can log in again normally.


Sisäänkirjautumisen jälkeen näkyvän mustan ruudun korjaaminen Ubuntu 14.04:ssä

Useat asiakkaat ovat viime päivinä ottaneet meihin yhteyttä tilatakseen tukea Ubuntu-läppärin tai -työaseman korjaukseen, kun se yllättäen lakkasi toimimasta oikein. Oire on, että sisäänkirjautumisessa, käyttäjätunnuksen ja salasanan syöttämisen jälkeen ruutu vilkkuu ja on hetken musta. Tämän jälkeen näyttö tulee takaisin kirjautumisnäkymään.

Ongelma johtuu Ubuntun päivityksestä, joka on epäonnistunut ja jättänyt graafisen työpöytäympäristön toimimattomaan tilaan.

Korjauksen voi tehdä itse avaamalla tekstipäätteen painamalla Ctrl+Alt+F1 ja kirjautumalla sisään tekstitilassa. Sen jälkeen voi ajaa päivityksen loppuun onnistuneesti komentamalla:

sudo dpkg --configure -a
sudo apt-get update
sudo apt-get upgrade -y

Lopuksi aja sudo reboot. Ubuntun uudelleenkäynnistymisen jälkeen sisäänkirjautuminen ja käyttö pitäisi onnistua normaalisti.

Wednesday, 04 November 2015

Freed-ora 23 Workstation available

Marcus's Blog | 09:51, Wednesday, 04 November 2015

Yesterday, Fedora 23 was released. In line with this, I have prepared a Freed version, which includes the linux-libre kernel and the IceCat web browser. All proprietary firmware has been removed.

If you want to give it a try, feel free to download the ISO image. The sha256 checksum can be found here.

The image is based on the Fedora 23 Workstation release which contains GNOME as default desktop environment. The Freed-ora repository has been added already for your convenience. Thanks to Alexandre Oliva for his great work on the repo.

Just in case you are curious about how it's built, please take a look at the freed-ora.ks file. You can also modify it according to your needs, e.g. to create a spin based on the Xfce version of Fedora. The build itself can be started with:

livecd-creator --verbose --config=freed-ora.ks --fslabel=Freed-ora-23-x86_64 --cache=/var/cache/live

Besides that, I want to thank for his help on building the live image.

Tuesday, 03 November 2015

How much of Linux will be illegal in the UK? - fsfe | 19:13, Tuesday, 03 November 2015

This week I've been in the UK again, giving a talk about Lumicall and JSCommunicator in Manchester last night and a talk about Free Real-Time Communications at the mini-DebConf in Cambridge on the weekend of 7-8 November.

An interesting backdrop to these activities has been a national debate about Internet privacy. The UK Government and police are demanding laws to mandate back doors in all communications products and services.

It leaves me wondering about a range of issues:

  • Will overzealous UK police, renowned for singling out and bullying people who don't conform to their idea of normality, start taking a more sinister attitude to people using software like Linux? For example, if airport security asks to inspect a laptop and doesn't see the familiar Windows or Mac OS desktop, will the owner of the laptop be delayed or told to leave it behind? Some people may feel this is extreme, but workers in these roles are known for taking initiative in their own special way, such as the infamous baby pat-down. If the owner of a Linux laptop is a Muslim, like the Texas schoolboy recently arrested because his clock looked suspicious to the untrained eye of a policeman, the chances of a rough encounter with authority probably rise even further.
  • Will developers still be able to use technologies like PGP and ZRTP in the UK? Will PGP key-signing parties become illegal or have to be held 20 miles offshore on a boat like the legendary pirate radio stations of the sixties?
  • Will Linux distributions such as Debian and Fedora have to avoid distributing packages such as Enigmail?
  • Will updates to Android and iOS on smartphones seek to automatically disable or remove apps like Lumicall?
  • Even if a user chooses a secure app like Lumicall for communication, will the vendor of the operating system be required to provide alternative ways to monitor the user, for example, by intercepting audio before it is encrypted by the app?
  • Without strong encryption algorithms, digital signatures will no longer be possible either and it will be impossible for software vendors to securely distribute new versions of their software.
  • Why should the police be the only workers to have their job done for them by Internet snooping? Why shouldn't spouses have a right to all their partner's communications to periodically verify they are not cheating and putting themselves at risk of diseases? Why shouldn't employers be able to check on employee's private communications and home computers to help prevent leaks of customer data? Why shouldn't the NHS be able to go through people's garbage to monitor what they eat given the WHO warning that bacon is more likely to kill you than a terrorist?
  • While the authorities moan about the internet being a "safe" place for terrorists and paedophiles, what is their real motivation for trying to bring in these new laws, even when their best technical advisors must surely be telling them about the risks and negative consequences for compatibility of UK systems in a global Internet? If the terrorist scare story is not so credible, is it more likely they are seeking to snoop on people who may not be paying taxes, or to maintain the upper hand over rival political parties like the Greens and UKIP in a time of prolonged and increasingly punitive austerity?
  • Australia already introduced similar laws a few weeks ago, despite widespread criticism from around the world. With cricket and rugby now over, is the UK just looking to go one up on Australia in the game of snooping?

Island mentality in the Internet age

Politics aside, what would this mean from a technical perspective? The overwhelming consensus among experts is that secure technology that people use and expect in many other parts of the world, including the US, simply won't be compatible with the products and services that UK residents will be permitted to use. Bigger companies like Google and Apple may be able to offer differentiated versions of their services for the UK but smaller companies or companies who have built their reputation on technical excellence simply won't be able or willing to offer crippled versions of their products with backdoors for the UK. The UK's island geography will become a metaphor for its relationship with the global marketplace.

The first thing to take note of is that encryption and authentication are closely related. Public-key cryptography, for example, simply swaps the public key and private key when being used to authenticate instead of encrypt. An effective and wide-reaching legal ban on encryption would also potentially prohibit the algorithms used for authentication.

Many methods of distributing software, including packages distributed through Linux distributions or apps distributed through the Google Play store are authenticated with such algorithms. This is often referred to as a digital signature. Digital signatures help ensure that software is not corrupted, tampered with by hackers or infected by viruses when it is transmitted and stored in the public Internet.

To correctly implement these mechanisms for installing software safely, every device running an operating system such as Debian, Ubuntu, Fedora or Android needs to include some software modules implementing the algorithms. In Linux, for example, I'm referring to packages like GnuPG, OpenSSL and GnuTLS. Without these components, it would be hard or even impossible for developers in the UK to contribute or publish new versions of their software. Users of the software would not be able to securely receive vital updates to their systems.

An opportunity for free software?

Some people say that any publicity can be good publicity. Now the Government has put the ball into play, people promoting secure solutions based on free software have an opportunity to participate in the debate too.

While laws may or may not change, principles don't. It is a perfect time to remind users that many of the principles of software freedom were written down many years ago, before the opportunity for mass surveillance came into existence. These principles remain relevant to this day. The experts who developed these principles back then are also far more likely to offer insights and trustworthy solutions for the road ahead.

If you'd like to discuss these issues or ask questions, please join the Free-RTC mailing list.

Podcast on non-military use clause in GNU GPL

I LOVE IT HERE » English | 10:27, Tuesday, 03 November 2015

On Tuesday last week I participated in a German podcast by CCC board member Peter Hecko about a non-military clause in the GNU GPL and other usage restrictions in Free Software licenses. At the FSF 30th birthday party I talked with Peter about Thorsten Schröder’s talk “Free Software against our freedom” at the Chaos Communication Congress, and he invited me onto this podcast.

A tank

If you understand German, you can now listen to the podcast. Otherwise, my main arguments in a nutshell were: “military” is really difficult to define; it is questionable whether someone who kills people would stick to a copyright license, or whether it would help at all if the military were not allowed to use the Free Software. Furthermore, I explained that in the Free Software movement – which is a worldwide movement – we have many different value systems. While some values are shared more widely, there are others people disagree on. We would end up with hundreds of licenses or license additions. We already have far too many licenses at the moment, and such usage restrictions would make it almost impossible to develop software together. By using such restrictions you would also make it hard for everybody who wants to do “good” things with software.

Most of the arguments from the podcast are covered in Richard’s article “Why programs must not limit the freedom to run them” which is translated into several languages.

Monday, 02 November 2015

FOSDEM 2016 Real-Time Communications dev-room and lounge - fsfe | 10:29, Monday, 02 November 2015

FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2016 takes place 30-31 January 2016 in Brussels, Belgium.

This call-for-participation contains information about:

  • Real-Time communications dev-room and lounge,
  • speaking opportunities,
  • volunteering in the dev-room and lounge,
  • related events around FOSDEM, including the XMPP summit,
  • social events (including the Saturday night dinner),
  • the Planet aggregation sites for RTC blogs

Call for participation - Real Time Communications (RTC)

The Real-Time dev-room and Real-Time lounge is about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Saturday, 30 January 2016 in room K.3.401. The lounge will be present for both days in building K.

To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list.

Speaking opportunities

Note: if you used Pentabarf before, please use the same account/username.

Main track: the deadline for main track presentations was midnight on 30 October. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

Real-Time Communications dev-room: deadline 27 November. Please also use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real-Time devroom".

Other dev-rooms: some speakers may find their topic is in the scope of more than one dev-room. It is permitted to apply to more than one dev-room but please be kind enough to tell us if you do this. See the full list of dev-rooms.

Lightning talks: deadline 27 November. The lightning talks are an excellent opportunity to introduce a wider audience to your project. Given that dev-rooms are becoming increasingly busy, all speakers are encouraged to consider applying for a lightning talk as well as a slot in the dev-room. Please use the Pentabarf system to submit a lightning talk proposal. On the "General" tab, please look for the "Track" option and choose "Lightning Talks".

First-time speaking?

FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the "Submission notes", please tell us about:

  • the purpose of your talk
  • any other talk applications (dev-rooms, lightning talks, main track)
  • availability constraints and special needs

You can use HTML in your bio, abstract and description.

If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes, presentations aimed at developers of free and open source software about RTC-related topics.

Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

For any questions, please join the FSFE-sponsored Free RTC mailing list.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers:

  • FOSDEM provides video recording equipment and live streaming, volunteers are needed to assist in this
  • organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 30 January
  • participation in the Real-Time lounge
  • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
  • circulating this Call for Participation to other mailing lists

FOSDEM is made possible by volunteers and if you have time to contribute, please feel free to get involved.

Related events - XMPP and RTC summits

The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 28 and 29 January 2016. Please see the XSF Summit 19 wiki and join the mailing list to discuss.

We are also considering a more general RTC or telephony summit, potentially on 29 January. Please join the Free-RTC mailing list and send an email if you would be interested in participating, sponsoring or hosting such an event.

Social events and dinners

The traditional FOSDEM beer night occurs on Friday, 29 January.

On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large so these dinners have space constraints. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email. If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

Please also link to the Planet sites from your own blog or web site.


For discussion and queries, please subscribe to the Free-RTC mailing list.

The dev-room administration team:

Why you should avoid autoboot

Marcus's Blog | 08:12, Monday, 02 November 2015

Recently I discovered a new project called Autoboot, a fork of Libreboot (including the website design) that adds the blobs back.

The idea of Libreboot is to offer a fully Free (as in speech) BIOS, avoiding any blobs. This is why proprietary code like the Intel AMT blobs or the CPU microcode has been removed. The disadvantage of this removal is that only a limited number of boards are supported at the moment (see the hardware compatibility list). The Libreboot team is working hard to continuously increase the number of supported devices, but there are some limitations, as removal of the AMT code in post-2008 Lenovo machines is a difficult task (it might even be impossible). The most recent additions have been some AMD-based mainboards which do not have those kinds of limitations.

Autoboot makes use of the Libreboot build system (which is of course available under a Free license) and skips the blob removal in order to support a larger number of devices. If your freedom is important to you, you should not use these BIOS versions. If you are a developer who is interested in user freedom, you should join the Libreboot project in order to support more hardware. Upstream development happens in Coreboot, and this is where nearly all Libreboot patches land as well.

Friday, 30 October 2015

Awesome apps and tools

Hugo - FSFE planet | 16:02, Friday, 30 October 2015

Here are some little-known yet awesome apps and tools that I use. Thanks to the people working on these (I’m glad to have met some of them, and they’re awesome too)!




Transportr is an Android app to help you use public transport systems. It’s simply the best one I’ve seen, and it supports a lot of systems (city-wide like Berlin or Paris, and even long-distance).


Feedbin is an RSS web reader. It provides a pleasing reading experience and you can easily browse through items and share links. If you’re looking to host it yourself, have a look at the sources.


ikiwiki powers this blog, hosted by Branchable. If you like git and markdown, and editing your texts with your favourite text editor, this is for you.


Known (formerly “idno”) is more “socially aware” than ikiwiki. It runs on PHP and is basically your easy-to-run indieweb space. If you use it with you will enjoy a nice integration with Twitter and other silos (see an example of my own).


YunoHost is a custom Debian distribution aiming at making self-hosting easy. It provides a nice web interface for administering your self-hosted server and for users of the web server. If you have basic Linux administration skills, this will be very helpful.


Pinboard is a simple and efficient bookmarking app that also archives the content of bookmarked pages (if you pay for it).1


Sharesome lets you easily share files on the web. It has a pleasant interface that works well on all devices I have tested so far. It’s also available as a web app. The neat feature is that you can choose where to host your data (for instance, with remotestorage; you can get an account at

Terms of Service; Didn’t Read

Some shameless self-promo with ToSDR, the app that tells you what happens to your rights online by rating and summarising Terms of service and privacy policies. You can also get it directly in your web browser or as a web app.

If you’re looking for a curated list of awesome web services that are free of charge and based on free software and open data, look no further than Jan’s Libre projects.


  1. Unfortunately, Pinboard is not released as free software. But you can export your bookmarks. ↩

A little adventure with Haskell and Go

Computer Floss | 10:07, Friday, 30 October 2015

I recently decided to brush up on my functional programming skills. My day job increasingly involves building and operating large-scale distributed systems and I became interested in the intersection between this and functional programming.

There are all sorts of reasons why using a functional programming language to build distributed systems is beneficial, but I won't go into details right now (maybe in a future post). For the moment, I'll relate what happened when I decided to brush up on my functional knowledge and then compare it to the imperative style.

I had learned Haskell in my university days as a young, wide-eyed undergrad, so that seemed like the place to start. To get back in the saddle, I decided to write a simple -- no, wait -- a very simple task manager.

Brushing up then going further

While developing the Haskell-based task manager (which I'd christened "Hask Manager"), I was reminded why the functional style appealed to me so much. Being a computer scientist by training, I see it as being closer to the theoretical underpinnings of computer science than the more popular imperative style is. I also believe the functional style brings numerous important benefits over the imperative style.

Detour: Remind me - what's functional programming?

In days gone by, when dinosaurs roamed the earth, these dinosaurs programmed in languages like assembly, Fortran or C. Writing an algorithm in languages like these is a bit like writing a recipe: explicitly prescribing each little step and each change in state to build up a whole series of steps (to be executed strictly in order!) and organise them into blocks called subroutines or functions. This is known as imperative programming and most of the popular languages in history have been imperative.

An imperative approach to doubling a list of numbers would look something like this (in Go):

// returns [2, 4, 6, 8, 10]
func getDoubles() []int {
	doubles := make([]int, 0)
	for _, n := range []int{1, 2, 3, 4, 5} {
		doubles = append(doubles, n*2)
	}
	return doubles
}

But some other dinosaurs took a different approach called functional programming. Their approach was built around the notion of functions, although unlike "functions" from other languages like C (which are really procedures), their functions were based on the mathematical idea of a function: an expression that has no state or mutable variables, and has no side effects when run. Every time you execute a function with the same set of arguments in such a language, the result is always the same. These ideas developed into languages like LISP and Scheme.

A functional approach to doubling a list of numbers would look something like this (in Haskell):

-- returns [2, 4, 6, 8, 10]
getDoubles = map (* 2) [1, 2, 3, 4, 5]

Today, there's not such a hard distinction between functional and imperative languages. As time goes on, imperative languages incorporate more and more ideas from the functional approach.
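Go itself illustrates this drift: although firmly imperative, it has first-class functions and closures, both ideas with functional roots. A small sketch (makeAdder is a hypothetical example of mine, not from the programs discussed in this post):

```go
// makeAdder returns a closure that captures n. First-class
// functions like this are one functional idea that an
// imperative language such as Go has adopted.
func makeAdder(n int) func(int) int {
	return func(x int) int { return x + n }
}
```

Here `makeAdder(5)` yields a function that adds 5 to its argument, a pattern that would be the bread and butter of any functional language.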

Back to the story...

At this point, I was only playing. I had no grand aim in mind for this little side project, so it grew organically as I continued to play. Once the Haskell task manager was up and running, I wanted to compare the functional style of Haskell with a suitable non-functional language, so I decided to rewrite Hask Manager in an imperative one. For this I chose Go (Google's flagship programming language), and I called this version of the task manager "Go Get Things Done".

Go logo by Renée French


I had no strong reasons for choosing Go, but a few things made it feel suitable:

  • I'd been learning and using Go for about a year (I've made some contributions to Kubernetes and CoreOS, which are built using Go).
  • Not only is it imperative, but it avoids a lot of functional ideas (although it has taken a few here and there).
  • It's a young language, which means its designers had a lot of existing software engineering wisdom to draw on. The Go authors have explicitly stated that they are quite ruthless in deciding whether a feature should be added to the language. If they've been choosing wisely, Go should be a language with no outdated, legacy concepts. This makes it a "worthy" object of comparison.

What did I end up writing?

Both versions of the program are functionally the same. Nevertheless, I tried to stick to each language's best practices as best I could. The resulting program is really simple. It understands commands to maintain a task list, such as:

  • all
  • add <description> <date>
  • todo
  • today
  • show <id>
  • tick|untick <id>
  • quit

Both versions are available on my GitHub account.

What are these STOPs in the code?

As I was familiarising myself with Haskell, and later when I was comparing the two versions, I wrote comments in the code. I have a leaky memory, so I find writing things down is the best way for me to learn and have it stick.

In case anyone else is interested in learning the basics of either language, or just looking through the code comparison, I put the phrase "STOP" in each comment and a number denoting the order in which I recommend you follow them. So, if you open the repo in an IDE and search for the phrase "STOP", you should get a nice convenient list of stops to go through.

Interesting notes

In no particular order, some observations I had as I was writing these programs.


I believe functional programming is harder and takes more investment to learn than the imperative style.

Experienced programmers who already know the imperative approach but not the functional will have a lot of assumptions and deeply embedded habits that will need overcoming.

But I suspect a novice would fare little better if they learned the functional style first. Functional programming involves some quite abstract mathematical thinking that is not highly intuitive for a beginner. Conversely, imperative programming is more about small concrete steps. All in all, a beginner can grasp an imperative language quicker and achieve something in it after less investment.

Finding mistakes

One thing that makes Haskell harder to learn is the abstract error messages which I found difficult to decipher when learning. However, once I got the hang of things, I found that the time spent understanding and correcting errors in my programs became much shorter.

I even got the feeling that, with experience, correcting errors in the Haskell code took less time than in the Go code. In Haskell, once an error was understood, the fix was usually clear. In Go, more time was spent stepping through a piece of faulty code procedurally, trying to work out the problem and trying multiple changes before finding a fix.

In fact, the rather unsettling "works after first attempt" (that frightening occurrence when the first draft of your code works without a problem) happened a few times after writing a Haskell function, something that I find happens a lot less often in Go (or any imperative language).

Functional as a prototype

I've read online a few examples of people developing a Haskell version of a program as a prototype before then re-implementing it in a more 'mainstream' language. (I've also read examples of people finding that their Haskell prototypes worked just fine, thank you, and subsequently junking plans to re-implement.)

If that were your desire (although I think Haskell is a perfectly suitable implementation language), I see the advantages of the approach. I noticed a few places where the Go version of the task manager took inspiration from the Haskell version and was an improvement over the version that I probably would have written.

One example is the commands.go file, which is a sort of backend for updating task data. I can imagine that I would have written the functions in there as less generic versions (e.g. GetTodaysTasks or GetUndoneTasks) had I not first written a Haskell version which, by the language's nature, encouraged me to write the functions generically. That backend turned out more like a database API, with a small number of generic functions (AddTask, QueryTask, UpdateTask etc.). This meant that the backend could stay concise and reusable, and still allow the programmer to easily come up with an endless variety of different operations by filling in the blanks of the generic functions. However, the Go syntax makes this same functionality less readable than its Haskell counterpart.

// main.go
if strings.HasPrefix(input, "todo") {
	return gogtd.CmdQueryMany(
		func(tl *gogtd.TaskList) *gogtd.TaskList {
			return tl.Filter(gogtd.IsPending)
		})
}

// commands.go
func CmdQueryMany(f func(*TaskList) *TaskList) string {
	tasks := GetTasksFromFile()
	resultSet := f(tasks)
	return resultSet.String()
}

--- Main.hs
interpret input
    | "todo" `isPrefixOf` input = cmdQueryMany getPending

--- Commands.hs
cmdQueryMany :: (TaskList -> TaskList) -> IO String
cmdQueryMany f = do
    tasks <- getTasksFromFile
    return (showTaskList (f tasks))

Making Go more functional

Just because support for the functional style is limited in Go doesn't mean you can't add a little of it yourself. For example, the standard functional primitives (map, filter and reduce/fold) may be missing, but with Go's first-class functions you can roll your own (see Filter in task_list.go).

func (tl *TaskList) Filter(f func(t *Task) bool) *TaskList {
	newTl := NewTaskList()
	for id, t := range tl.Tasks {
		if f(t) {
			newTl.Tasks[id] = t
		}
	}
	return newTl
}

Of course, this example is not as powerful as filter from Haskell, because it can only be used to filter a TaskList (whereas Haskell's filter is generic).
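In the same hand-rolled spirit, map and reduce/fold analogues can be written over a concrete element type. This is an illustrative sketch of mine (Map and Reduce are not part of either repository), again limited to one type in the absence of generics:

```go
// Map applies f to every element of xs and returns the results.
// Map([]int{1, 2, 3}, double) → [2, 4, 6]
func Map(xs []int, f func(int) int) []int {
	out := make([]int, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

// Reduce folds xs into a single value, starting from acc.
func Reduce(xs []int, acc int, f func(int, int) int) int {
	for _, x := range xs {
		acc = f(acc, x)
	}
	return acc
}
```

As with Filter, each new element type would need its own copy of these helpers, which is exactly the duplication that Haskell's generic map and foldr avoid.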


The Haskell version is more concise, but this is hardly surprising. Functional programming is generally thought of as being more expressive, allowing you to achieve more per line of code.

Side effects in Haskell

Once you introduce a side effect into one function, it can seep into other parts of your program. Just by using such an "impure" function, another function allows side effects to influence its behaviour.

You can write functions with or without side effects in both Haskell and Go. A key difference is that Haskell is explicit about which functions deal with side effects: once a function has let side effects in, it must be marked as such, and that marking "bubbles up" through any other functions that use it.

Safety in Haskell

Similarly to its strictness in dealing with side effects, Haskell is also strict when dealing with potential errors. If your function calls another function that might be unable to return a value (via a Maybe) or might return an error (via an Either), you must deal with it. There's no getting away with ignoring potential error values as there is in Go.
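To make the contrast concrete, here is a sketch of how Go lets you discard an error with the blank identifier (parseOrZero is my own hypothetical helper, not from either repository):

```go
import "strconv"

// parseOrZero silently ignores the error from strconv.Atoi,
// which the Go compiler happily allows. On a parse failure,
// Atoi returns 0 along with the (discarded) error.
func parseOrZero(s string) int {
	n, _ := strconv.Atoi(s) // error dropped with the blank identifier
	return n
}
```

The Haskell equivalent would hand you a Maybe Int or Either String Int, and the type checker would refuse to let you treat it as a plain Int without handling the failure case.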


That's all for now. I hope in the future to do some more stuff along this theme... maybe writing a comparison that makes use of concurrency features, explaining in greater detail how functional programming works, or expanding on how it can be used in distributed systems.

Watch this space.

Wednesday, 11 November 2015

New Palmyra

Don't Panic » English Planet | 11:32, Wednesday, 11 November 2015

Palmyra was Syria's best-known archaeological site, influenced by ancient Greek, Roman and Persian art and culture. Bassel Khartabil, a Free Software developer, started a 3D virtual reconstruction of Palmyra but has been imprisoned by the current regime of … Continue reading
