Planet Fellowship (en)

Saturday, 24 January 2015

My first LV2 plugin

tobias_platen's blog | 21:30, Saturday, 24 January 2015

My first LV2 plugin

Recently I wrote my first LV2 plugin. It’s an additive singing synthesizer similar to Madde by Svante Granqvist, but it also uses the Excitation plus Resonance voice model used by VOCALOID. It runs in realtime and can be controlled using a MIDI keyboard, but it can also act as a placeholder for the singing voice in an Ardour project when composing songs.
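The source-filter idea behind “Excitation plus Resonance” can be sketched in a few lines: a harmonic excitation at the fundamental frequency whose partials are weighted by a resonance (formant) curve. This is only an illustrative sketch, not MaddeLOID’s actual code; the formant centre, bandwidth and all other parameters are invented for the example.

```python
import math

def formant_gain(freq, centre=700.0, bandwidth=200.0):
    # One resonance peak: gain falls off as the partial moves
    # away from the formant centre (a crude one-pole-style shape).
    return 1.0 / (1.0 + ((freq - centre) / bandwidth) ** 2)

def synthesize(f0=220.0, n_harmonics=20, sample_rate=48000, n_samples=480):
    """Additive synthesis: sum harmonics of f0, each weighted by
    a 1/k rolloff (the excitation) times the resonance curve."""
    samples = []
    for n in range(n_samples):
        t = n / sample_rate
        s = 0.0
        for k in range(1, n_harmonics + 1):
            freq = k * f0
            if freq >= sample_rate / 2:
                break  # stay below the Nyquist frequency
            s += formant_gain(freq) * math.sin(2 * math.pi * freq * t) / k
        samples.append(s)
    return samples
```

A real singing synthesizer would additionally interpolate pitch for vibrato and portamento and drive the formants from phonetic data, but the inner loop looks much like this.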

LV2 is a plugin standard for free software developers that allows decentralized extensibility. It is a replacement for the older LADSPA and DSSI plugin standards commonly used with older DAWs such as Rosegarden and LMMS. Unlike other well-known plugin standards such as VST, LV2 has no licensing restrictions. There is a small program called Jalv which connects your LV2 plugins to Jack and makes testing easy. It is also possible to combine multiple plugins and applications using Patchage, which lets you build modular synthesizers.

The MaddeLOID plugin is part of my work on the free virtual singer project, which aims to build a free software replacement for both VOCALOID and UTAU. There are already some free programs such as jcadencii, vConnect-STAND and Sinsy, but most of them lack flexibility and support for languages other than Japanese. I therefore started writing my own programs to fill the gap, and improving existing ones where appropriate. New features such as Non-Session Management and Jack Transport are likely to be added to the QTau editor, which currently lacks both a working synthesizer and a lyricizer. I decided to use eSpeak as the speech synthesis backend, which does two different things: first, words are converted to phonetic symbols; second, the waveform is generated. WORLD is then used to change the length of the notes and to apply vibrato and portamento. All of my programs and related documentation can be cloned from my Gitorious repositories.

Yourls URL Shortener for Turpial

Max's weblog » English | 01:58, Saturday, 24 January 2015

Maybe you know Yourls, a pretty cool URL shortener which you can set up on your own server very easily. Link shorteners are nice to have because

  1. you can share long links with short urls and
  2. you can view and organise all links you ever shared (incl. statistics and so on).

There are many hosted alternatives, but Yourls belongs to YOU and you don’t have to pay attention to ToS changes or the provider’s financial status. AND you can use whichever domain you own.

And maybe you also know Turpial, a Twitter client for GNU/Linux systems (I don’t like Twitter’s web page). Until recently I used Choqok, a KDE-optimised client, but there were many things that annoyed me: no image previews, slow development, inconvenient reply behaviour and so on. And hey, why not try something new? So I started to use Turpial, which seems to solve all these points of criticism. Well, like always, I missed some preferences to configure. But since it’s Free Software, one is able to look at how the software works and to change it – and to share the improvements, which I’ll do in the next step!

Turpial already offers some link shorteners, but not Yourls. We can, however, add it manually. To do so, open the file /usr/lib/python2.7/dist-packages/libturpial/lib/services/url/shortypython/ as root and add the following somewhere between the already existing shorteners:

# Yourls
class Yourls(Service):
	def shrink(self, bigurl):
		resp = request('http://YOUR_DOMAIN/yourls-api.php', {'action': 'shorturl', 'format': 'xml', 'url': bigurl, 'signature': 'YOUR_SIGNATURE'})
		returned_data =
		matched_re ='(http://YOUR_DOMAIN/[^"]+)', returned_data)
		if matched_re:
			return matched_re.group(1)
		raise ShortyError('Failed to shrink url')

yourls = Yourls()

Just replace YOUR_DOMAIN and YOUR_SIGNATURE accordingly. Using a signature lets you avoid sending your username and password with each shortening request; it works like an API key and looks something like f51qw35w6 (see the Yourls documentation on the passwordless API). You can retrieve your signature on your Yourls admin page under “Tools”.
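The API call itself is easy to test outside Turpial. The snippet below (Python 3, illustrative only; the domain and signature are placeholders) builds the same GET request as the service above, so you can paste the resulting URL into a browser to check that your signature works:

```python
from urllib.parse import urlencode

def yourls_api_url(domain, signature, long_url):
    # Same parameters as the Turpial service sends.
    params = {'action': 'shorturl', 'format': 'xml',
              'url': long_url, 'signature': signature}
    return 'http://%s/yourls-api.php?%s' % (domain, urlencode(params))
```

If the URL returns an XML document containing your short link, the credentials are fine and any remaining problem is on the Turpial side.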

Then add the new service to the list of shorteners. In the same file, search for services = { (near the bottom) and add the following somewhere in that list:

'yourls-instance': yourls,

Well, then just restart Turpial, go to Preferences > Services and choose “yourls-instance” from the list of Short URL services. Congrats, you should now be able to shorten your URLs with Yourls in Turpial :)

Any problems or improvements? Drop me a message!


  • For me, only hardcoding the signature worked; prompting for these data, as some of the other services in the file do, did not.
  • Another file worth your attention might be /usr/lib/python2.7/dist-packages/turpial/ui/qt/templates/style.css. There you can change colors, fonts and so on. For example, the “Ubuntu” font wasn’t installed on my system so I just chose Sans Serif instead.

Friday, 23 January 2015

Last exam finished ☺

Hook’s Humble Homepage | 11:15, Friday, 23 January 2015

Yesterday I passed my last exam on my alma mater – the Faculty of Law, University of Ljubljana.

With the International (Public) Law behind me, I have collected all the academic pokémon needed to level up, there are no more exams left between me and my thesis.

So next up: finishing my LL.M., which is about the FLA and already taking shape.


hook out → la di da, la di da di da daaaaaa ☺

Wednesday, 21 January 2015

FSFE’s assembly at Chaos Communication Congress (31C3)

Don't Panic » English Planet | 16:31, Wednesday, 21 January 2015

The Chaos Communication Congress was held for the 31st time (“31C3” for short) at the end of 2014 (December 27 to 30), and FSFE was present for the first time with a so-called assembly.

Brown Dogs and Barbers – “Could not have come at a better time, nor be better pitched”

Computer Floss | 09:37, Wednesday, 21 January 2015

The British Computer Society (BCS), the professional body for IT workers in the UK, was kind enough to publish a review of my book Brown Dogs and Barbers recently and gave it a roaringly good verdict – 9 out of 10. Here is a link to the review.

Of course, it’s very nice for someone to pay your work compliments, like being called “eloquent” and having an “easy, engaging style”. But there are other things in the review which I’m particularly pleased to read because they show that I’m achieving my goals for the book.

For instance, the reviewer agrees with me that the book is “aimed squarely at the intelligent layperson, it requires no prior expertise and sits within the genre of popular science.” I’m glad that I have managed to present these ideas in an understandable way that requires no background knowledge.

Furthermore, the reviewer recommends the book to target audiences that I also intended to shoot for: “IT professionals, teachers, parents and their teenage children will all find it an invaluable introduction to the key concepts and their practical application.” This is especially nice to read as I now know that the reviewer is in the field of education, working at a British school and active in the Computing at School BCS working group.

In the reviewer’s opinion (and mine too) Brown Dogs and Barbers is also a book that’s relevant to people already working in IT, stating: “If you have no background in computer science, this book will be a revelation. And if you think you know what computer science is about, this book will invoke connections you may never have considered before.”

If you’d like to read for yourself what prompted this review, you can order my book online at Smashwords or Amazon, where there are also samples to try before you buy.

Tuesday, 20 January 2015

Quantifying the performance of the Microserver - fsfe | 19:53, Tuesday, 20 January 2015

In my earlier blog about choosing a storage controller, I mentioned that the Microserver's on-board AMD SB820M SATA controller doesn't quite let the SSDs perform at their best.

Just how bad is it?

I ran some tests with the fio benchmarking utility.
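The exact job file used isn’t shown; a job section along these lines (the parameters are my assumption, with the 4k block size inferred from the bw/iops ratio in the results) exercises synchronous random writes with fio:

```ini
[rand-write]
rw=randwrite
bs=4k
size=1024m
direct=1
fsync=1
ioengine=sync
```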

Let’s have a look at the random writes; they simulate the workload of synchronous NFS write operations:

rand-write: (groupid=3, jobs=1): err= 0: pid=1979
  write: io=1024.0MB, bw=22621KB/s, iops=5655 , runt= 46355msec

Now compare it to the HP Z800 on my desk, it has the Crucial CT512MX100SSD1 on a built-in LSI SAS 1068E controller:

rand-write: (groupid=3, jobs=1): err= 0: pid=21103
  write: io=1024.0MB, bw=81002KB/s, iops=20250 , runt= 12945msec

and then there is the Thinkpad with OCZ-NOCTI mSATA SSD:

rand-write: (groupid=3, jobs=1): err= 0: pid=30185
  write: io=1024.0MB, bw=106088KB/s, iops=26522 , runt=  9884msec

That's right, the HP workstation is four times faster than the Microserver, but the Thinkpad whips both of them.

I don't know how much I can expect of the PCI bus in the Microserver but I suspect that any storage controller will help me get some gain here.

Monday, 19 January 2015

jSMPP project update, 2.1.1 and 2.2.1 releases - fsfe | 21:29, Monday, 19 January 2015

The jSMPP project on Github stopped processing pull requests over a year ago and appeared to be needing some help.

I've recently started hosting it myself and have tried to merge some of the backlog of pull requests.

There have been new releases:

  • 2.1.1 works in any project already using 2.1.0; it introduces bug fixes only.
  • 2.2.1 introduces some new features, API changes and bigger bug fixes.

The new versions are easily accessible for Maven users through the central repository service.
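For Maven users, the dependency looks something like this (the coordinates are assumed from the project's conventional org.jsmpp group ID; verify them against the central repository before use):

```xml
<dependency>
  <groupId>org.jsmpp</groupId>
  <artifactId>jsmpp</artifactId>
  <version>2.2.1</version>
</dependency>
```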

Apache Camel has already updated to use 2.1.1.

Thanks to all those people who have contributed to this project throughout its history.

Storage controllers for small Linux NFS networks - fsfe | 13:59, Monday, 19 January 2015

While contemplating the disk capacity upgrade for my Microserver at home, I've also been thinking about adding a proper storage controller.

Currently I just use the built-in controller in the Microserver. It is an AMD SB820M SATA controller. It is a bottleneck for the SSD IOPS.

On the disks, I prefer to use software RAID (such as md or BtrFs) and not become dependent on the metadata format of any specific RAID controller. The RAID controllers don't offer the checksumming feature that is available in BtrFs and ZFS.

The use case is NFS for a small number of workstations. NFS synchronous writes block the client while the server ensures data really goes onto the disk. This creates a performance bottleneck. It is actually slower than if clients are writing directly to their local disks through the local OS caches.

SSDs on an NFS server offer some benefit because they can complete write operations more quickly and the NFS server can then tell the client the operation is complete. The more performant solution (albeit with a slight risk of data corruption) is to use a storage controller with a non-volatile (battery-backed or flash-protected) write cache.
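The cost of waiting for stable storage is easy to demonstrate locally. This hedged sketch (plain Python on a local file, not NFS) times buffered writes against writes followed by fsync(), which is roughly what an NFS server must do before acknowledging a synchronous write:

```python
import os
import tempfile
import time

def timed_writes(n, synchronous):
    """Write n 4 KiB blocks; optionally fsync() after each one,
    as an NFS server must before acknowledging a sync write."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.time()
        for _ in range(n):
            os.write(fd, b'x' * 4096)
            if synchronous:
                os.fsync(fd)  # wait until the data is on stable storage
        return time.time() - start
    finally:
        os.close(fd)
        os.unlink(path)
```

On a spinning disk the synchronous variant is typically orders of magnitude slower; a non-volatile write cache lets the controller acknowledge each fsync() almost immediately.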

Many RAID controllers have non-volatile write caches. Some online discussions of BtrFs and ZFS have suggested staying away from full RAID controllers though, amongst other things, to avoid the complexities of RAID controllers adding their metadata to the drives.

This brings me to the first challenge though: are there suitable storage controllers that have a non-volatile write cache but without having other RAID features?

Or a second possibility: out of the various RAID controllers that are available, do any provide first-class JBOD support?


I looked at the specs and documentation for various RAID controllers and identified a number of challenges.

Next steps

Are there other options to look at, for example, alternatives to NFS?

If I just add in a non-RAID HBA to enable faster IO to the SSDs will this be enough to make a noticeable difference on the small number of NFS clients I'm using?

Or is it inevitable that I will have to go with one of the solutions that involves putting a vendor's volume metadata onto JBOD volumes? If I do go that way, which of the vendors' metadata formats are most likely to be recognized by free software utilities in the future if I ever connect the disk to a generic non-RAID HBA?

Thanks to all those people who provided comments about choosing drives for this type of NAS usage.


Key Update

freedom bits | 11:42, Monday, 19 January 2015

I’m a fossil, apparently. My oldest PGP key dates back to 1997, so around the time when GnuPG was just getting started – and I switched to it early. Over the years I’ve worked a lot with GnuPG, which perhaps isn’t surprising. Werner Koch was one of the co-founders of the Free Software Foundation Europe (FSFE), so we share quite a bit of long and interesting history. I was always proud of the work he did – and together with Bernhard Reiter and others I did what I could to support GnuPG when most people did not seem to understand how essential it truly was, and even many security experts declared proprietary encryption technology acceptable. Bernhard was also crucial in starting the more than ten year track record of Kolab development supporting GnuPG. And the usability of GnuPG has always been something I’ve advocated for. As the now famous video by Edward Snowden demonstrated, this unfortunately remained an unsolved problem, but hopefully it will be solved “real soon now.”


In any case. I’ve been happy with my GnuPG setup for a long time. Which is why the key I’ve been using for the past 16 years looked like this:
sec# 1024D/86574ACA 1999-02-20
uid                  Georg C. F. Greve <>
uid                  Georg C. F. Greve <>
uid                  Georg C. F. Greve <>
uid                  Brave GNU World <>
uid                  Georg C. F. Greve <>
uid                  Georg C. F. Greve <>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <>
ssb>  1024R/B7DB041C 2005-05-02
ssb>  1024R/7DF16B24 2005-05-02
ssb>  1024R/5378AB47 2005-05-02
You’ll see that I kept the actual primary key off my work machines (look for the ‘#’) and that I also moved the actual sub keys onto a hardware token – naturally an FSFE Fellowship Smart Card from the first batch ever produced.
That smart card is battered and bruised, but its chip is still intact after 58470 signatures and counting, and the key itself has likely not been compromised, never having been on a networked machine. Unfortunately there is no way to extend the length of a key, and while 1024 bits is probably still okay today, it’s not going to last much longer. So I finally went through the motions of generating a new key:
sec#  4096R/B358917A 2015-01-11 [expires: 2020-01-10]
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <>
uid                  Georg C. F. Greve (Kolab Community) <>
uid                  Georg C. F. Greve (Free Software Foundation Europe, Founding President) <>
uid                  Georg C. F. Greve (Free Software Foundation Europe, Founding President) <>
uid                  Georg C. F. Greve ( Board) <>
uid                  Georg C. F. Greve <>
uid                  Georg C. F. Greve (GNU Project) <>
ssb>  4096R/AD394E01 2015-01-11
ssb>  4096R/B0EE38D8 2015-01-11
ssb>  4096R/1B249D9E 2015-01-11

My basic setup is still the same, and the key has been uploaded to the key servers, signed by my old key, which I have meanwhile revoked and which you should stop using. From now on please use the key
pub   4096R/B358917A 2015-01-11 [expires: 2020-01-10]
      Key fingerprint = E39A C3F5 D81C 7069 B755  4466 CD08 3CE6 B358 917A
exclusively and feel free to verify the fingerprint with me through side channels.


Not that this key has any chance of ever again making it into the top 50… but then that is a good sign insofar as it means a lot more people are using GnuPG these days. And that is definitely good news.

And in case you haven’t done so already, go and support GnuPG right now.



Are you prepared for your child’s computing education?

Computer Floss | 08:53, Monday, 19 January 2015

An overhaul of computing education is looming in schools throughout the UK.

In recent years – but at least as far back as when I was a pupil in the 1990s – education in computing and computer science within British schools had a rather narrow focus. Children learned mainly about operating computers: using word processors to write documents, whipping up spreadsheets, (maybe) building simple databases and proficiency in using an operating system (any operating system, so long as it’s Microsoft Windows).

There’s nothing wrong with this. It’s a fine goal to teach someone how to make good use of everyday applications. However, it must be admitted that this narrow focus merely teaches children how to be passive users of a computer. It gives them no grounding in the fundamentals of computing; they learn nothing about how a computer actually works.

But the upcoming overhaul of computing education will change that. Computing education will in the future focus on things like what an algorithm actually is; how to program a computer; how a program relates to an algorithm; how to detect errors in programs; how to reason about source code and find errors, and much more. It will be like physics lessons going from focusing on how to drive a car to learning the principles of the internal combustion engine.

And these changes won’t affect only college or secondary school level; they will begin from the first year of primary school.

Parents naturally want to support their children’s learning at home. With many subjects, you can do this. Many of today’s subjects are the same as when you were at school (Maths, Science, English, History etc.), so discussing their contents and helping with homework are doable. But chances are you were taught nothing about computer science at school, so how could you support your child in this subject?

One way to get a feeling is to look at the proposed syllabus. Schools in England and Wales divide all schooling into several blocks called key stages.  Each key stage covers several years of a child’s education.

Key stages 1 – 3 cover all of primary and most of secondary education. Children educated within these stages are aged between 5 and 14 years. Here’s a link to the UK Government’s breakdown of plans for teaching computing in England, but I’ve picked out some of the key parts here:

Key stage 1 (aged 5 – 7 years)

At this stage, some things your child will learn include:

  • What an algorithm is
  • What a program is and how it relates to an algorithm
  • What a digital device is
  • How to perform simple programming

Key stage 2 (aged 7 – 11 years)

  • Understand the importance of sequence, selection and repetition in algorithms
  • Understand computer networks
  • Use logical reasoning
  • Understand and detect errors in programs

Key stage 3 (aged 11 – 14 years)

  • Use and evaluate computational abstractions
  • Understand key algorithms (e.g. for sorting and searching)
  • Understand binary numbers and their use in computing
  • Understand how instructions and data are stored and executed in a computer

Some may look at that list and find that most of the items mean nothing to them. That might be discouraging if you’re a parent with a child in school. Nevertheless, it might prompt you to learn about the subject for yourself so you can share in what your son or daughter is picking up in computing lessons, but you may be unsure where to begin.

That’s one of the reasons I wrote my recently released book about computer science, Brown Dogs and Barbers. It has several intended audiences, but one of the primary ones is those people with no background in computing whatsoever who would like to learn about its fundamentals. That’s why it’s an easy-to-read book with a fun, casual style and touch of humour mixed in.

As an indicator of how helpful Brown Dogs and Barbers should be, compare the list of topics covered in the book (below) with the school syllabus. Topics that appear in both the book and the syllabus are emphasised:

  • Graph Theory
  • Set Theory
  • Sequence, selection and iteration
  • Algorithms
  • Theory of Computation
  • Turing Machine
  • Halting Problem
  • Complexity Theory
  • Binary and Hexadecimal
  • Binary Architecture
  • Von Neumann Architecture
  • Machine Coding and Assembly Language
  • High-Level Programming Languages
  • Searching and Sorting
  • Data Structures
  • Multi-tasking
  • Scheduling
  • Concurrency
  • Operating Systems
  • Networking
  • Security

I think that my book is ideal if you have school-age children and want to brush up on computer science so that you can prepare yourself to help them get to grips with this sometimes challenging but nevertheless rewarding and important subject.

It’s available to order at Smashwords or Amazon, where there are also samples to try before you buy.

Sunday, 18 January 2015

Testing the Chromium OS touchpad driver on my C720

the_unconventional's blog » English | 17:00, Sunday, 18 January 2015

Now that I’ve switched to Ubuntu, using third-party software packages should be a lot easier through the use of PPAs. For instance, having the latest version of LibreOffice is a great advantage.

One thing that recently came to mind was that I hadn’t yet tested the Chromium OS touchpad driver, which is said to work a lot better for the C720’s touchpad than the default Synaptics driver. I couldn’t get it to work on Debian, but perhaps Ubuntu would be a different story.

As it turned out, it was. The Synaptics driver registered a lot of unintentional taps, which is much less of an issue with the CMT driver – which unfortunately isn’t part of any GNU/Linux distribution other than Chromium OS.

Fortunately, Hugh Greenberg has ported the driver to “normal” GNU/Linux distributions, and provides a PPA for easy installation on Ubuntu systems.

Setting up the CMT driver is pretty easy. First off, add the PPA:

sudo add-apt-repository ppa:hugegreenbug/cmt

Then update the package list and install the required packages:

sudo apt-get update
sudo apt-get install libevdevc libgestures xf86-input-cmt

Get rid of the Synaptics driver:

sudo apt-get purge xserver-xorg-input-synaptics

And undo any previous tweaks to said driver:

sudo rm /etc/X11/xorg.conf.d/50-touchpad.conf

Now symlink the configuration file for your Chromebook model (Peppy for the C720):

sudo ln -s /usr/share/xf86-input-cmt/50-touchpad-cmt-peppy.conf /usr/share/X11/xorg.conf.d/50-touchpad-cmt-peppy.conf

Then create a script called, for instance, ~/ and add these lines to it:


#!/bin/bash
ID=`xinput | grep cyapa | cut -f 2 | sed -e 's/id=//'`

xinput --set-int-prop $ID "Tap Drag Enable" 8 1
xinput --set-float-prop $ID "Tap Drag Delay" 0.060000
xinput --set-int-prop $ID "Pointer Sensitivity" 32 4
xinput --set-float-prop $ID "Two Finger Scroll Distance Thresh" 40.0
xinput --set-float-prop $ID "Two Finger Horizontal Close Distance Thresh" 100.0
xinput --set-float-prop $ID "Two Finger Vertical Close Distance Thresh" 85.0
xinput --set-float-prop $ID "Two Finger Pressure Diff Thresh" 80.0

Make it executable by running chmod +x ~/ and add it to your session startup applications (i.e. ~/.config/autostart/).

And finally reboot.


After using the CMT driver for a couple of days now, I can say that my quality of life has greatly improved. I use my C720 a lot, and I never really thought about how poor the touchpad actually performed with the default Synaptics driver. CMT solves pretty much all the annoyances I’ve had before, so I can understand why Google uses it for Chromium OS.

whatmaps 0.0.9

Colors of Noise - Entries tagged planetfsfe | 09:17, Sunday, 18 January 2015

I have released whatmaps 0.0.9, a tool to check which processes map shared objects of a certain package. It can integrate into apt to automatically restart services after a security upgrade.

This release fixes the integration with recent systemd (as in Debian Jessie), makes logging more consistent and eases integration into downstream distributions. It's available in Debian Sid and Jessie and will show up in Wheezy-backports soon.

This blog is flattr enabled.

Saturday, 17 January 2015

krb5-auth-dialog 3.15.4

Colors of Noise - Entries tagged planetfsfe | 09:42, Saturday, 17 January 2015

To keep up with GNOME's schedule I've released krb5-auth-dialog 3.15.4. Besides updated translations, the changes in 3.15.1 and 3.15.4 include the replacement of deprecated GTK+ widgets, minor UI cleanups, bug fixes, and a header bar fix that makes us use header bar buttons only if the desktop environment has them enabled:

[Screenshots: krb5-auth-dialog with and without the header bar]

This makes krb5-auth-dialog better integrated into other desktops again, thanks to mclasen's awesome work.

This blog is flattr enabled.

Thursday, 15 January 2015

Why engineering students need to be taught free software

Nico Rikken » fsfe | 22:00, Thursday, 15 January 2015

At a power systems symposium today I met some of my former classmates from the technical university, now in the starting phase of their engineering careers. My viewpoint on the need for free software in education was once again confirmed. Whilst at university many advanced software packages are provided to students at negligible cost, at work these same tools are hard to obtain. In practice these software packages are too expensive to be used on just a couple of cases, let alone ‘tried out’ to find a use case. This basically leaves the choice between misusing unsuitable packages or not taking on the task in the first place, both of which are generally undesirable.

As I have learned, and my classmates are learning as well, as an engineering professional you need software with no strings attached: free software. Engineers are taught to overcome many hurdles by grasping the problem and coming up with the right approach to solve it. Restricting the set of possible approaches by restricting the software selection leaves engineering potential unmet, making this practice harmful to the end result.

Because every customer needs the software for a different use case, packages in general cover a larger set of features in order to target a larger market, resulting in relatively overpriced software. Apart from the cost of the package itself, there are the costs of maintaining yet another software installation and of recurring fees per year or per version. A way to diminish this barrier is to offer subscriptions to hosted solutions, as many software vendors have started doing. Whilst this reduces the upfront cost, there is more to free software than cost alone.

The freedom to modify the code enables integrating the software package in a solution like an automated tool chain. Better still by modifying the underlying code or even working with upstream development engineers can customize and improve each tool of your tool set. Since it is free software no party will be able to take it from you, and you are able to fork the software if you disagree with the direction development is heading in. In this way an engineer is able to achieve far greater independence.

Whilst it seems a good idea to teach students the professional software packages used in the workplace, this approach presumes that those packages will be available to them on the job after graduation. If that isn't the case, these engineers face unmet potential. By teaching free software, all students are able to exercise their potential, even though some will encounter a non-free package on the job. If the latter is the case, it is presumably because of specific features, which wouldn't have been taught at university in the first place.

Furthermore, students need to be taught to evaluate software offerings in order to select a package based on the task at hand, rather than having a package selected for them which is then often misused or underused. And free software fits naturally into academic teaching, since both value sharing information and checking the work of others.

Disk expansion - fsfe | 20:29, Thursday, 15 January 2015

A persistent problem that I encounter with hard disks is the capacity limit. If only hard disks could expand like the Tardis.

My current setup at home involves an HP Microserver. It has four drive bays carrying two SSDs (for home directories) and two Western Digital RE4 2TB drives for bulk data storage (photos, source tarballs and other things that don't change often). Each pair of drives is mirrored. I chose the RE4 because I use RAID1 and they offer good performance and error recovery control, which is useful in any RAID scenario.

When I put in the 2TB drives, I created a 1TB partition on each for Linux md RAID1 and another 1TB partition on each for BtrFs.

Later I added the SSDs and I chose BtrFs again as it had been working well for me.

Where to from here?

Since getting a 36 megapixel DSLR that produces 100MB raw images and 20MB JPEGs I've been filling up that 2TB faster than I could have ever imagined.

I've also noticed that vendors are offering much bigger NAS and archive disks so I'm tempted to upgrade.

First I looked at the Seagate Archive 8TB drives. 2TB bigger than the nearest competition. Discussion on Reddit suggests they don't have Error Recovery Control / TLER however and that leaves me feeling they are not the right solution for me.

Then I had a look at WD Red. Slightly less performant than the RE4 drives I run now, but with the possibility of 6TB per drive and a little cheaper. Apparently they have TLER though, just like the RE4 and other enterprise drives.

Will 6 or 8TB create new problems?

This all leaves me scratching my head and wondering about a couple of things though:

  • Will I run into trouble with the firmware in my HP Microserver if I try to use such a big disk?
  • Should I run the whole thing with BtrFs and how well will it work at this scale?
  • Should I avoid the WD Red and stick with RE4 or similar drives from Seagate or elsewhere?

If anybody can share any feedback it would be really welcome.

Wednesday, 14 January 2015

Improving Ubuntu privacy: step two

the_unconventional's blog » English | 10:00, Wednesday, 14 January 2015

You may have read my earlier post about installing Ubuntu without proprietary software. Using that script, you’ll get a minimal Unity environment without all the bloatware and the online stuff.

Still, there are some default Ubuntu and GNOME settings that are better off changed to improve system privacy and reduce clutter. I’ll try to list as many of those changes as possible.

The majority of things can be done with dconf; either with the GUI dconf-editor or using gsettings set. I’ll describe the latter, because it’s a lot easier to explain.


Panel settings

Disable the restart menu option (useless because of the pop-up):

gsettings set com.canonical.indicator.session suppress-restart-menuitem true

Disable the user list in the session menu:

gsettings set com.canonical.indicator.session user-show-menu false

Hide the keyboard indicator:

gsettings set com.canonical.indicator.keyboard visible false


Launcher settings

Disable the HUD history storage:

gsettings set com.canonical.indicator.appmenu.hud store-usage-data false

Disable all scopes, use the launcher as an application drawer only:

gsettings set com.canonical.Unity.Dash scopes "['home.scope', 'applications.scope', 'files.scope']"
gsettings set com.canonical.Unity.Lenses always-search "['applications.scope', 'files.scope']"
gsettings set com.canonical.Unity.Lenses home-lens-default-view "['applications.scope', 'files.scope']"
gsettings set com.canonical.Unity.Lenses home-lens-priority "['applications.scope', 'files.scope']"
gsettings set com.canonical.Unity.Lenses remote-content-search none

Disable application suggestions (in case you have USC installed):

gsettings set com.canonical.Unity.ApplicationsLens display-available-apps false

Disable locate to speed up searches (files aren’t logged anyway):

gsettings set com.canonical.Unity.FilesLens use-locate false

Nautilus settings

Disable automounting:

gsettings set org.gnome.desktop.media-handling automount false
gsettings set org.gnome.desktop.media-handling automount-open false
gsettings set org.gnome.desktop.media-handling autorun-never true

Disable recently used applications and files storage:

gsettings set org.gnome.desktop.privacy remember-app-usage false
gsettings set org.gnome.desktop.privacy remember-recent-files false

Unfortunately, not every application honors this setting, so in case you really want to avoid recently used file storage, it’s best to change the file into a directory, making it impossible to write to:

rm ~/.local/share/recently-used.xbel
mkdir ~/.local/share/recently-used.xbel
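The two commands above can be wrapped into a small idempotent helper; this is my own sketch (the function name and the data-directory parameter are mine, added so it can be re-run and tested safely):

```shell
# Replace recently-used.xbel with a directory so nothing can write to it.
# Safe to run repeatedly.
lock_recent() {
  target="$1/recently-used.xbel"
  [ -d "$target" ] && return 0   # already a directory: nothing to do
  rm -f "$target"                # drop the existing history file, if any
  mkdir "$target"                # a directory can't be opened as a file
}

# usage:
# lock_recent ~/.local/share
```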

Tuesday, 13 January 2015

Silent data loss exposed

fsfe | 20:06, Tuesday, 13 January 2015

I was moving a large number of image files around and decided to compare checksums after putting them in their new home.

Out of several thousand files, about 80GB of data, I found that seventeen of them had a checksum mismatch.

Running md5sum manually on each of those was showing a correct checksum, well, up until the sixth file and then I found this:

$ md5sum DSC_2624.NEF
94fc8d3cdea3b0f3479fa255f7634b5b  DSC_2624.NEF
$ md5sum DSC_2624.NEF
25cf4469f44ae5e5d6a13c8e2fb220bf  DSC_2624.NEF
$ md5sum DSC_2624.NEF
03a68230b2c6d29a9888d2358ed8e225  DSC_2624.NEF

Yes, each time I ran md5sum on the same file it gave a different result. Out of the seventeen files, I found one other displaying the same problem; the others gave correct checksums when I ran md5sum manually. Definitely not a healthy disk, or is it?
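A loop like the following makes it easy to re-test a batch of suspect files for unstable checksums. This is my own sketch, not from the post; the function name is mine:

```shell
# Re-read a file several times and report whether its md5sum is stable.
check_stability() {
  first=$(md5sum "$1" | cut -d' ' -f1)
  for i in 1 2 3 4 5; do
    sum=$(md5sum "$1" | cut -d' ' -f1)
    if [ "$sum" != "$first" ]; then
      echo "UNSTABLE: $1"
      return 1
    fi
  done
  echo "stable: $1"
}

# e.g. re-check every NEF file in the current directory:
# for f in *.NEF; do check_stability "$f"; done
```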

This is the reason why checksumming filesystems like Btrfs are so important.

There are no errors or warnings in the logs on the system with this disk. Silent data loss at its best.

Is the disk to blame though?

It may be tempting to think this is a disk fault, most people have seen faulty disks at some time or another. In the old days you could often hear them too. There is another possible explanation though: memory corruption. The data read from disk is normally cached in RAM and if the RAM is corrupt, the cache would return bad data.

I dropped the read cache:

# echo 3 > /proc/sys/vm/drop_caches 

and tried md5sum again and observed the file checksum is now correct.

It would appear the md5sum command had been operating on data in the page cache and the root cause of the problem is memory corruption. Time to run a memory test and then replace the RAM in the machine.
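To script the same diagnosis, one could wrap the two reads around the cache drop. The function below is my own sketch (name and parameters are mine); it takes the drop command as a parameter so the comparison logic can be exercised without root:

```shell
# Checksum a file, run a command that should invalidate the page cache
# (e.g. 'echo 3 > /proc/sys/vm/drop_caches' as root), then checksum again.
# Differing reads suggest the cached copy was corrupted in RAM.
compare_reads() {
  before=$(md5sum "$1" | cut -d' ' -f1)
  eval "$2"
  after=$(md5sum "$1" | cut -d' ' -f1)
  if [ "$before" = "$after" ]; then
    echo "reads agree"
  else
    echo "reads differ: suspect RAM"
  fi
}

# as root:
# compare_reads DSC_2624.NEF 'echo 3 > /proc/sys/vm/drop_caches'
```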

Monday, 12 January 2015

Lumicall: big steps forward, help needed

fsfe | 20:02, Monday, 12 January 2015

I've recently made more updates to Lumicall, the free, open source and secure alternative to Viber and Skype.

Here are some of the highlights:

  • The dialing popup is now optional, so if you want to call your friends with Lumicall / SIP but they don't want to see the popup when making calls themselves, you can disable the popup on their phone.
  • The dialer popup now shows results asynchronously so you can dial more quickly
  • SIP SIMPLE messaging is now supported; Lumicall is now taking on WhatsApp
  • Various bugs in the preferences/settings have been fixed and adding SIP accounts is now easier
  • Dialing with a TURN relay is now much more reliable
  • It is now possible to redial SIP calls in the call history without seeing the nasty Android bug / popup telling you that you don't have Internet calling configured

F-Droid not updated yet

F-Droid is not yet carrying the latest version. The F-Droid team may need assistance as they appear to be reviewing a lot of the third-party dependencies used by apps they distribute to make sure the full stack is 100% free software. If people want to continue having the option to get Lumicall and other free software through F-Droid instead of Google Play then helping the F-Droid community is the number one way you can help Lumicall.

Other ways you can help, even without coding

You don't have to be a developer to help with Lumicall.

Taking on Viber, Skype and now WhatsApp as well may not sound easy. It isn't. Your help could make the difference though.

Here are some of the things that need assistance:

  • Helping to get it on Wikipedia, they keep deleting the page while happily hosting pages about similar products like Sipdroid and CSipSimple
  • Helping get the latest dependencies and Lumicall version into F-Droid
  • UI design ideas
  • Web site assistance
  • Documentation and screenshots, e.g. for use with Asterisk and FreeSWITCH and various SIP proxies
  • Translation
  • Messaging: to be the default SMS app on an Android device, Lumicall would need a full messaging UI and the ability to replace all functions of the default SMS app. Can anybody identify any other free software that does this and is modular enough to share the relevant code with Lumicall?
  • ZRTP: can you help improve the ZRTP stack used by Lumicall?

Try SIP SIMPLE messaging

When composing a message to a Lumicall user, the SIP address is written sip:number. E.g. if the number is +442071234567, then the SIP address to use when calling or composing a message is sip:+442071234567.

Debian Developers should be able to interact with Lumicall users using SIP over WebSocket messaging.

Getting the latest Lumicall

It is available from:

Nemein has a new home

Henri Bergius | 00:00, Monday, 12 January 2015

When I flew to Tenerife to sail across the Atlantic in late November, there was excitement in the air. Nemein — the software company I started in 2001 with Henri Hovi and Johannes Hentunen, and left later to build an AI-driven web publishing tool — was about to be sold.

Today, I'm happy to tell that Nemein has been acquired by Anders Innovations, a fast-growing software company.

Nemein joins Anders Innovations

I had a videoconference this morning with Nemein's and Anders Inno's CEOs Lauri and Tomi, and it seems the Nemein team has indeed found a good home.

Lauri and Tomi

Technologically, the companies are a good fit. Both companies have a strong emphasis on building business systems on top of the Django framework. To this mix, Nemein will also bring its long background with Midgard CMS and mobile ecosystems like MeeGo and its successor, Sailfish.

I wish the whole team at Anders Innovations the best, and hope they will be able to continue functioning as a champion of the decoupled content management idea.

Nemein has also been a valuable contributor to the Flowhub ecosystem, which I hope will continue.

For those interested in the background of Nemein, I wrote a longish story of the company's first ten years back in 2011. I also promise to write about The Grid soon!

Sunday, 11 January 2015

Accepting Free Software in education is not about technology. It’s about human rights.

the_unconventional's blog » English | 21:30, Sunday, 11 January 2015

A couple of weeks ago, I read an article on the BBC site about the way gay people are being treated in Russia. It contained an interview with Vitaly Milonov; presumably one of Russia’s biggest homophobes.

Let me be very clear about this: I do not agree with his insane statements about homosexuality at all. But still, he seems to be more level-headed about the rights people have in their own homes than pretty much any Dutch school is.

Milonov is the kind of guy that got rid of his Apple smartphone after Tim Cook announced that he was gay. And while I certainly applaud anyone who stops using Apple’s products, the sexual orientation of its CEO is probably the dumbest reason for doing so.

While ranting on about how homosexuality is a sin and all gay people should die of AIDS, Milonov did, however, say that even gay people – whom he hates so much – should be allowed to live their lives as they wish in their own homes.

A Russian homophobe

Vitaly Milonov: He really hates gay people, yet acknowledges their private lives.

Let’s compare this – and by that I mean the acceptance of freedom in your own home – with the way Free Software users are being treated by Dutch schools.

When I applied for colleges back in the day, I wrote a letter to all of them asking if it would be possible for me to go there without being forced to personally accept contracts with software companies and without being forced to use devices in my home of which I could not research how they work.

My views on technology, for that matter, are very simple: if you are not allowed to analyze the inner workings of a device in your home environment, it provides an inherent risk to the safety of your family. After all, it’s in your house, and you don’t know what it’s doing.

Each and every university and college I applied to rejected me. All of them claimed that I had to buy computers with proprietary software, that I had to use them at home, that I had to accept contracts with foreign software companies, and that I had to agree with them storing my personal data on servers of which nobody really knew how secure they were. Not even the companies maintaining them.


If you do not accept the contract that forbids you to research the devices in your home, schools do not accept your presence.

I don’t think anyone would disagree with me saying that people should be free to choose the devices they want or do not want in their own homes.

I also don’t think anyone would disagree with me saying that it’s not normal to force somebody to sign a contract when that person repeatedly says that he or she doesn’t agree with the terms.
In fact, I doubt that any lawyer in the world would ever advise their clients to sign a contract when they don’t explicitly and completely agree with its contents.

Moreover, I doubt that anyone would disagree with me saying that it’s especially inappropriate when a government-funded, (semi-)public institution forces civilians to do these things against their will.

A list of quotes

Some of the many excuses universities came up with to reject my applications.

Still, this is what happens at Dutch schools on a daily basis. Computers have become – whether we like it or not – the basis of pretty much every aspect of society. The software we run on them is therefore the basis of nearly everything in life. If we want to claim that people in our country are “free”, they shouldn’t be bullied into changing the way they want to live or the values they believe in. Most certainly not at home.

By forcing Free Software users to buy computers with proprietary software even though they don’t want to; to accept software licenses even though they don’t agree with them; and to agree with usage terms even though they don’t accept them, schools are doing exactly that. You are expected to lead your personal life the way they tell you to, no matter how you feel about that.

The price you have to pay for education in the Netherlands is the sovereignty over your own home. You can’t go to school unless you let them dictate your personal computer use. Because if you don’t use the devices they tell you to, sign the contracts they want you to, and accept the privacy risks they expect you to, you’re not welcome anywhere.

Compare that harsh reality to the words of the “ringleader” of Russian homophobes:

“They can do whatever they want in their homes.”

The Kremlin

The Kremlin, of course, is totally not gay at all. But they do respect the homes of gay people -and- they allow Free Software users in their schools.

Now don’t get me wrong. Of course I’m not implying that the general quality of life for a Free Software user in the Netherlands is worse than that of a gay person in Russia. But I can’t help but notice that a gay-bashing homophobe like Milonov has a clearer grasp of the sanctity of a personal home environment than any school in the Netherlands does.
In fact, he seems to have a bigger sense of respect for the private lives of a group of people he hates than Dutch schools do for the people who pay them.

After all, he at least grants gay people the right to be who they are when they’re at home, while I have yet to find the first Dutch school that accepts me and other Free Software users like me doing the same.

A Wikipedia screenshot

Article 8 of the European Convention on Human Rights also states that "everyone has the right to respect for his private and family life, his home and his correspondence".

To this very day, I can’t understand why the Dutch educational system as a whole seems to find this situation acceptable, as no-one seems to be doing anything to improve things.
Working with devices that nobody is allowed to know anything about is a clear condition for getting any kind of education in this country, and anyone who doesn’t accept that is pretty much considered unworthy of securing their future by making something of themselves.

Essentially, the ban on everyone who doesn’t accept proprietary software licenses and doesn’t want to take unnecessary privacy risks is the textbook definition of oppression. A group of people is not accepted, and never will be accepted because they’re kept from ever reaching a position in life where they would be able to change things.

Ironically, if I lived in Russia, I probably would have graduated from a university by now, because Free Software use is accepted and considered “normal” there.
In the meantime, the Dutch (rightfully) criticize Russia for their position on gay rights, oblivious to the fact that “we” happily discriminate against and oppress another group of people just as easily. Pointing the finger towards others is easier after all.

Ubuntu laptops

While even accepted in the US, home of Microsoft and Apple, using GNU/Linux at Dutch schools can almost be considered illegal.

Ending the BBC article was a statement about how young Russian homosexuals are increasingly looking for advice to emigrate from their home country, because they believe that fighting for their rights is futile.
Albeit for other reasons, I personally feel the same. Of course Free Software users aren’t threatened by physical violence in the Netherlands, but their chances of securing a decent future are, in fact, threatened very much. And after many years of campaigning for a better future for the next generation, I still have no reason to believe there will be any improvements any time soon.

While other European countries like Germany, France, Portugal, Spain, and Iceland started to accept Free Software many years ago, and even “bad” countries like Russia and China are doing the same, it’s depressing to see the Netherlands endlessly lagging behind. All while acting morally superior towards the rest of the world, of course.

Free Software in Education News – December

Being Fellow #952 of FSFE » English | 20:52, Sunday, 11 January 2015

Here’s what we collected in December. If you come across anything that might be worth mentioning in this series, please drop me a note, dump it in this pad or drop it on the edu-eu mailing list!

FSFE Edu-Team activities

Distro news


Other news

Future events

  • Jan 21-24, 2015: BettShow, London
  • Jan 30, edu-team meeting, Brussels

Thanks to all contributors!

flattr this!

Saturday, 10 January 2015

Disabling a touchpad when the screen lid is closed

the_unconventional's blog » English | 10:00, Saturday, 10 January 2015

My mom’s laptop has a weird hardware issue. Whenever the lid is closed and the machine is carried around, the touchpad seems to register pressure from the LCD panel as finger presses, often launching applications without her knowing.

In order to stop that from happening, I wanted to come up with a way to disable the touchpad when the lid is closed and re-enable it once it’s opened. Although that might seem complicated to accomplish, it really wasn’t.

There’s not a lot needed: acpi-support and xinput will do. And another machine with SSH access will really come in handy as well.

This was tested on a machine running Ubuntu 14.04, but it will probably work for most distributions.


First things first; make sure the required packages are installed by running the following command:

sudo apt-get install acpi-support xinput

Then, find out your touchpad’s device number:

xinput --list

⎡ Virtual core pointer                   id=2  [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer         id=4  [slave  pointer  (2)]
⎜   ↳ FSPPS/2 Sentelic FingerSensingPad  id=11 [slave  pointer  (2)]
⎣ Virtual core keyboard                  id=3  [master keyboard (2)]

As you can see, it’s device 11. This may very well be different in your case though.
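Rather than hard-coding the number, the id can be parsed out of the listing. The helper below is my own sketch (the function name is mine; filter on whatever name your touchpad reports):

```shell
# Read an xinput listing on stdin and print the id of the first device
# whose name matches the given pattern (case-insensitive).
device_id() {
  grep -i "$1" | sed -n 's/.*id=\([0-9]*\).*/\1/p' | head -n 1
}

# usage:
# TOUCHPAD_ID=$(xinput --list | device_id Sentelic)
```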

Now we’ll have to find out if the lid switch works. (This is easier from another machine using SSH.)

With the lid opened, run:

cat /proc/acpi/button/lid/LID/state
state:      open

Close the lid, and repeat:

cat /proc/acpi/button/lid/LID/state
state:      closed

This argument has to be translated to an xinput parameter, which is simply 0 or 1 for enabling/disabling the touchpad. So we use grep:

grep -q closed /proc/acpi/button/lid/LID/state
echo $?

Close the lid, and repeat:

grep -q closed /proc/acpi/button/lid/LID/state
echo $?

So, with the lid opened, it generates 1, and with the lid closed, it generates 0. Exactly what we need to tell xinput to disable the touchpad.

In order to disable the touchpad, we have to run the following command (in case it’s device 11, of course):

xinput set-int-prop 11 "Device Enabled" 8 0

To re-enable it, we have to run this:

xinput set-int-prop 11 "Device Enabled" 8 1

This can easily be automated using the output from the ACPI lid state:

grep -q closed /proc/acpi/button/lid/LID/state
xinput set-int-prop 11 "Device Enabled" 8 $?

With a closed lid, it will echo 0 to disable the touchpad, and when the lid opens, it will echo 1 to enable the touchpad.


All that’s left to do now is creating an ACPI event to automatically run these commands upon opening and closing the lid.

First, run sudo nano /etc/acpi/events/lm_lid and paste this into it:


Then, create the actual script. Keep in mind that the ACPI daemon runs as root, so in order for it to access the user’s X session, we’ll have to use some sorcery regarding .Xauthority files, which I admittedly had to look up as well.

Run sudo nano /etc/acpi/ and paste this into it:

#!/bin/sh
export XAUTHORITY=`ls -1 /home/*/.Xauthority | head -n 1`
export DISPLAY=":`ls -1 /tmp/.X11-unix/ | sed -e s/^X//g | head -n 1`"

grep -q closed /proc/acpi/button/lid/LID/state
xinput set-int-prop 11 "Device Enabled" 8 $?

Finally, make the script executable by running sudo chmod +x /etc/acpi/ and restart the ACPI daemon:

sudo service acpid restart 


Now, using an SSH session, you can check whether the script works. Log in to the machine and try this:

export DISPLAY=:0.0

xinput --watch-props 11

When the lid is opened, you should see something like this:

Device 'FSPPS/2 Sentelic FingerSensingPad':
    Device Enabled (135):    1

When you close the lid, this should happen:

Property 'Device Enabled' changed.
    Device Enabled (135):    0

Friday, 09 January 2015

Visual languages: functional programming in the era of jab and smoosh

Jelle Hermsen » English | 18:22, Friday, 09 January 2015

Today I gave a talk on visual programming languages at NL-FP day 2015. It was the first FP-day I visited and it felt a bit like coming home for me, I already look forward to next year when it’s held on January 8th in Utrecht!

Here you find the text of my talk:

I will skip my introduction on why, in fifty years, our modern-day practice of
computer programming by typing may seem as old-fashioned as using keypunch
machines seems to us today.

More than 50 years ago, at MIT, Ivan Sutherland developed Sketchpad:
the first program with a graphical user interface. He used the experimental
transistor-based TX-2 computer, which had a nine inch CRT screen and a light
pen. Sutherland used this light pen to allow users to draw directly on
the display, something which had not been done before. He created the necessary
software to allow you to draw primitive objects that can later be recalled,
rotated, scaled and moved. These drawings could be saved on magnetic tape, so
you could edit them at a later time. Sketchpad was truly ground breaking
because it allowed you to directly interact with the system, without having to
type, and it also allowed non-experts to use a computer.

Sketchpad was used for computer aided design, but you could also use it to
create programs by drawing flow charts. You could draw boxes, containing the
statements, transferring the results along one way or another, allowing the
user to program the computer without first having to transcribe everything onto
punch cards or paper tape.

Ivan Sutherland’s work was very important for the future of GUIs, Computer
Aided Design and Visual Languages. In 1988 he received the Turing Award for
everything he did for computer science.

In the years after his thesis on Sketchpad the work on Visual Languages was
continued by many others. His older brother Bert, for example, wrote a
thesis on a new pictorial language. Influenced by the work on Sketchpad he
created a system on the TX-2 in which the user could draw procedures using
symbols that depict operations. It featured a system for debugging and in his
thesis he elaborately described the flow of data inside these procedures,
making his system one of the first graphical dataflow programming frameworks,
an approach using directed graphs which would be used often hereafter.

Bert Sutherland mentions in his thesis that the specification of graphical
procedures has been a neglected field, and most accomplishments have been in
the field of graphical data. More research was conducted the following years,
but the development in visual languages was hampered by the fact that there
wasn’t a widely used pointing device. This changed when the Macintosh brought
about the widespread adoption of the mouse in the mid eighties. This also
caused the first commercial VPLs like Prograph to appear, which did
not target computer scientists or educational purposes, but were meant to make
programming easier by supplying the user with high-level building blocks.

But still, visual languages were almost non-existent in the landscape of
programming. There was, however, a certain optimism that this would soon
change. In a 1990 paper in the Journal of Visual Languages and Computing
titled “Exploring the general purpose alternative”, the authors Glinert,
Kopache and McIntyre said the following: “The goal is nothing less than to
expand the programmer’s workspace to a multi-modal, animated, 3-D
environment. We predict that this objective will in fact be attained before the
turn of the century.”

Obviously this hasn’t happened. But what did happen in the years after this
paper? We got some great VPLs like Scratch and Alice. Truly
magnificent tools if you want to teach your child to program without premature
exposure to stuff like object orientation, pointers or monads. If you
want to create your own audio effects pipeline, or a funky 80′s revival style
synthesizer, or a MIDI step sequencer, you can save yourself a whole lot of
frustration and wire up all your needs in Pure Data or Max/MSP, which is (trust
me) way better than rolling your own in <fill in your favorite functional
programming language>, even if you pull in the best available libraries from
Hackage or the likes.

There are many, many visual domain specific languages. You want to
process your lab data? You want to control a robot? You
want to build a visual effect chain for your newest Youtube animation?
You want a language that keeps itself simple and stupid enough so even your
livelong <stuck in middle management, not having a clue> boss can use it?

These domain-specific needs can be met without a hitch, but when you look
at the popularity of general purpose VPLs, they are a far cry from even Visual
Basic. Sure, there are some broad-purpose VPLs. Microsoft has made one,
MIT has one. There’s an open source tool called “Programming without
“, and there are others, but none of these is considered to be a serious
programming environment that could be used by a professional. General purpose
VPLs seem to be stuck inside specific domains, research and education.

And there are a couple of good reasons for this. First of all, VPLs, albeit
being developed early, came late to the real party after the first waves of
personal computers hit the market. It took quite a while for computers to be
outfitted with a mouse. This left little room for the graphical alternatives.
But there were also serious issues with visual languages themselves.

Take for example scalability. A hello world program might look nice and dandy,
but when you want to make a complex program you will need to be able
to tidy up your act by putting everything neatly into separate parts. In
imperative and functional languages we have pretty much fixed the problem of
scalability, by putting our code in different modules, or classes, or using
namespaces and packages…etc. In visual languages this is more difficult to
achieve. The tendrils of your system are out in the open, and more in your face
than in a written language. If you don’t mitigate this then the cross-program
dependencies you have rear their ugly heads and turn your program into
spaghetti, which visualised looks pretty gruesome.

Then there is the problem of expressiveness. With programming languages there often
seems to be a trade-off between ease of use and expressiveness. The more
dummy-proof a language, the more pain and sweat it will take to get some serious
work done. Anybody who has done stuff in the Basic supplied with the Commodore 64, or
in Java before it supported anonymous classes and generics, should know what I
mean. When you look at the many available VPLs you will notice that most
of them have settled for ease of use, which is of course fine if you’re into
creating toy projects or sticking to one domain, but in this specific case of
wanting VPLs to take over the world and converting all programmers into pinching and
swiping gurus of the touch screen, this simply won’t suffice.

The last obvious problem has to do with culture. Programmer culture tends to
move slowly. It took Java 20 years to get lambda expressions: something
which has been a great idea ever since Alonzo Church introduced the
Lambda-calculus in the 1930’s and has proven to work extremely well in practice
since the implementation of the first Lisp in the 50’s. So advancements in
programming languages propagate slowly; we tend to stick to old languages for a
long time. Sometimes there’s good reason for this, when we prefer stability
above everything else. I guess that’s the reason why there are still poor sods
out there maintaining decades-old Fortran codebases.

But besides the languages there’s also a certain conservatism surrounding the
tools with which we write our code. I, for example, am an avid user of the
VIM editor, and when I’m working in my tiling window manager with terminals
plastering my screen, I almost never need to reach for the mouse. Geeks like me
need a very good reason to actually pick up that cabled clicky thingy that
lies next to the keyboard when we can instead keep our fingers on the keys.

But still. I think there’s great merit in visual languages. The cultural issues
I mentioned can be solved with time, the other issues by adopting the right
constructs from computer science research and functional programming.
Scalability can be solved, and has been solved, by choosing correct ways to
create modules. Expressiveness can be added by taking cues from homoiconic
languages like lisp that transform beautifully to the graphical space, and
adding higher order functions, purity, laziness.

We, humanoids, are visual animals. To make sense of how an algorithm
works I visualize it. When I try to make sense of a large code project I use
it’s file and directory structure, modules and packages as a visual frame of
reference. If we visualize a project correctly, abstracting away the details
when we don’t need them, and providing an easy way to dive into the nitty
gritty bits when we want, we can find ourselves in a place where it can be
easier to reason about our code and more importantly explain this reasoning to
others. So the scalability issue could in fact be turned upside down and
changed into a strength if we take the right interface designing path of
modular touchy swipey goodness. Something we might need actual interface
designers for. And like they do we would need to break out of the computer
screen and look at the smelly beast sipping red bull in front of it. We would
need to find out which different mental models programmers use, and how we can
transpose those to visual elements. And we will need to figure out the
cognitive dimensions of those visual elements so we can trace out a path
forward.
I can list many reasons for trying to create a new VPL that rocks the world,
but one pet peeve of mine is compile-time errors, and especially syntax
errors. Aren’t these the most stupid, time-wasting things ever? So I’m typing
all this code and after my IDE doesn’t show any of those curly red thingies I
can safely press Compile, only to be confronted with a load of messages about
all the obvious ways in which I suck, and my program is incorrect. And this is
a totally solved issue, I mean, compile time problems are mostly low hanging
fruit. In the case of Haskell with its nice type system, also to somewhat
higher branches, but still. Why can’t we eliminate these completely? “We can”,
you might say, “because I use Eclipse or IntelliJ and that IDE happens to be
very smart.” WRONG. They stink. Why is it even possible to write something that
is so blatantly wrong, low-hanging compiler or lint-checker fruit? It feels a
bit like driving in a car in which I will have to control the cooling system
manually and when I start the ignition make sure I don’t flood the engine by
quietly murmuring obscenities.

It seems that while the abstraction level of programming languages has
increased, the errors themselves are still stuck in the elevator.
Error-catching wastes more time than ever. Many programmers of dynamic
scripting languages see no problem in actually going through the mind
numbing process of first executing their program in order to check for errors,
and I’m not talking about highly parallel programs that can’t be easily
debugged in another way. Madness!

Anyway, I think that we can create a new visual programming language, by
combining more than fifty years of research with the lessons learned in
functional programming. I have foolishly made a start:

Here’s what we do: We start with a typical boxes and arrows, directed
graph, flowchart kind of language. We add the ability to zoom and hide details
when using a touch screen, and we support laziness, currying, higher order
functions and a module system. We make functions pure by default and add
necessary, but evil side effects by surrounding them with a visual code smell.
Something like monads, but minus the unnecessary but frequently occurring fears
of category theory.

Taking a cue from Lisp, it’s a great idea to have one basic data type and use
that for both code and data. One problem in the written world is that
homoiconic languages tend to look like too many parentheses in a love
shack, but they work pretty well when visualized as graphs.

We can switch between different list representations at will, or draw
new ones if we like. We can also create symbolic expressions by pointing lists
to functions with arrows. We can make a bigger function with nested functions
by drawing a box around them. We can drag lists around and create
immutable copies if this helps in our evil schemes. You can choose to direct a
partial list for further processing, which helps create  a nice head or
car function.

We can add labels to our functions, or larger encompassing structures.
But since we’re working visually, we could do even better.

I will show you how we can implement a map function.

We need a function and a list for our map function. And it’s nice if the system automatically adds color to them so we can keep them apart.

We can make immutable copies and change the list representation to get the granularity we need.

We’ll also make a copy of the function that you supply to map.

We use recursion to call map on the provided function and the tail of the list.

We apply the head of the list to the provided function.

We concatenate the results.

Now, what happens when we give map an empty list? We will need to check for this by adding an if statement.

The if statement returns an empty list when you try to apply map to an empty list; otherwise it returns the result of the concatenation.
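In ordinary textual code the same construction might look like this; a Python sketch of the steps above, not the visual notation itself:

```python
def my_map(fn, lst):
    # the if statement: an empty list simply maps to an empty list
    if not lst:
        return []
    # apply the provided function to the head, recurse on the tail,
    # and concatenate the results
    return [fn(lst[0])] + my_map(fn, lst[1:])

print(my_map(lambda x: x * 2, [1, 2, 3]))  # -> [2, 4, 6]
```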

What if we want to add a side-effect to our map function?

We launch the missiles as a side-effect, and this makes the entire function impure, so our red background propagates to the entire function.

There you have it: we have made a higher order recursive function that doesn’t look
like complete rubbish. You can travel along the recursive steps if you like by
zooming into the nested map function, enjoying the Droste effect.

Alright, so what now? I’m still figuring out the best way of adding types to
this visual notation. If I want to protect the programmer from silly errors
while he’s trying to make them, I will need a strong type system, and
preferably one that is light and stays out of your way, using type inference
where you want it to.

I will make an informal description of this language and follow it up by
describing everything formally in typed lambda calculus. After this I will make
a first browser based implementation, using a simple graph reduction approach.

I will post updates on the progress on this blog.

While I show you a possible implementation of quicksort, I would like to ask you
to give visual languages a chance in the future. When you find yourself needing
a DSL, or, like me, you’re toying around with language design, maybe you can pick
a visual language instead of a traditional one. I know it’s a bit harder,
because you will need to make do without lexer and parser generators, but I’m
confident that slowly but surely we are moving in this direction. At first we
might see smarter IDEs that add graphics to tackle the complexity of large
projects, but I think the touchscreen revolution isn’t stopping, and one day we
might find ourselves developing software by touching, swiping, jabbing and
smooshing.
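For reference, the quicksort shown in the talk could be written in a functional style roughly like this (a textual Python sketch of my own, not the actual slides):

```python
def quicksort(lst):
    # pure, recursive quicksort: partition the tail around the head element,
    # sort both partitions, and concatenate
    if not lst:
        return []
    pivot, rest = lst[0], lst[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # -> [1, 1, 2, 3, 4, 5, 6, 9]
```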

10 reasons to migrate to MariaDB (if still using MySQL)

Seravo | 14:00, Friday, 09 January 2015

The original MySQL was created by a Finnish/Swedish company, MySQL AB, founded by David Axmark, Allan Larsson and Michael “Monty” Widenius. The first version of MySQL appeared in 1995. It was initially created for personal usage but in a few years evolved into an enterprise-grade database, and it became the world’s most popular open source relational database software – and it still is. In January 2008, Sun Microsystems bought MySQL for $1 billion. Soon after, Oracle acquired all of Sun Microsystems, after getting approval from the European Commission in late 2009; the Commission initially stopped the transaction due to concerns that such a merger would harm the database markets, as MySQL was the main competitor of Oracle’s database product.

Out of distrust in Oracle’s stewardship of MySQL, the original developers of MySQL forked it and created MariaDB in 2009. As time passed, MariaDB replaced MySQL in many places, and everybody reading this article should consider it too.

At Seravo, we migrated all of our own databases from MySQL to MariaDB in late 2013, and during 2014 we also migrated our customers’ systems to use MariaDB.

We recommend everybody still using MySQL in 2015 to migrate to MariaDB for the following reasons:

1) MariaDB development is more open and vibrant

Unlike many other open source projects Oracle inherited from the Sun acquisition, Oracle does indeed still develop MySQL, and to our knowledge they have even hired new competent developers after most of the original developers resigned. The next major release, MySQL 5.7, will have significant improvements over MySQL 5.6. However, the commit log of 5.7 shows that all contributors are Oracle employees. Most commit messages reference issue numbers that are only in an internal tracker at Oracle and thus not open for public discussion. There are no new commits in the latest 3 months, because Oracle seems to update the public code repository only in big batches post-release. This does not strike us as a development effort that would benefit from the public feedback loop and Linus’s law of “given enough eyes, all bugs are shallow”.

MariaDB, on the other hand, is developed fully in the open: all development decisions can be reviewed and debated on a public mailing list or in the public bug tracker. Contributing to MariaDB with patches is easy, and patch flow is transparent in the fully public and up-to-date code repository. The GitHub statistics for MySQL 5.7 show 24 contributors, while the equivalent figure for MariaDB 10.1 is 44 contributors. But it is not just a question of code contributors – in our experience MariaDB seems more active also in documentation efforts, distribution packaging and other related things that are needed in day-to-day database administration.

Because of the big momentum MySQL has had, there is still a lot of community around it but there is a clear trend that most new activities in the open source world revolve around MariaDB.

As Linux distributions play a major role in software delivery, testing and quality assurance, the fact that both RHEL 7 and SLES 12 ship with MariaDB instead of MySQL increases the likelihood that MariaDB is going to be better maintained, both upstream and downstream, in the years to come.

2) Quicker and more transparent security releases

Oracle has a policy of making security releases (and related announcements) only every three months for all of their products. MySQL, however, has a new release every two months. Sometimes this leads to situations where security upgrades and security information are not in sync. Also, the MySQL release notes do not list all the CVE identifiers the releases fix. Many have complained that the actual security announcements are very vague and do not identify the actual issues or the commits that fixed them, which makes it impossible to do backporting and patch management for those administrators who cannot always simply upgrade to the latest Oracle MySQL release.

MariaDB however follows good industry standards by releasing security announcements and upgrades at the same time and handling the pre-secrecy and post-transparency in a proper way. MariaDB release notes also list the CVE identifiers pedantically and they even seem to update the release notes afterwards if new CVE identifiers are created about issues that MariaDB has already released fixes for.

3) More cutting edge features

MySQL 5.7 is looking promising and it has some cool new features, like GIS support. However, MariaDB has had many more new features in recent years, they are released earlier, and in most cases those features seem to go through a more extensive review before release. Therefore we at Seravo trust MariaDB to deliver us the best features and the fewest bugs.

For example, GIS features were introduced already in the 5.3 series of MariaDB, which makes storing coordinates and querying location data easy. Dynamic column support (MariaDB only) is interesting because it allows for NoSQL-type functionality, and thus one single database interface can provide both SQL and “not only SQL” for diverse software project needs.

4) More storage engines

MariaDB in particular excels in the number of storage engines and other plugins it ships with: Connect and Cassandra storage engines for NoSQL backends or rolling migrations from legacy databases, Spider for sharding, TokuDB with fractal indexes, and so on. These plugins are available for MySQL as well via 3rd parties, but in MariaDB they are part of the official release, which guarantees that the plugins are well integrated and easy to use.

5) Better performance

MariaDB claims it has a much-improved query optimizer and many other performance-related improvements. Certain benchmarks show that MariaDB is radically faster than MySQL. Benchmarks, however, don’t always directly translate to real-life situations. For example, when we at Seravo migrated from MySQL to MariaDB, we saw moderate 3–5% performance improvements in our real-life scenarios. Still, when it all adds up, 5% is relevant, in particular for web server backends where every millisecond counts. Faster is always better, even if it is just a bit faster.

6) Galera active-active master clustering

Galera is a new kind of clustering engine which, unlike traditional MySQL master-slave replication, provides master-master replication and thus enables a new kind of scalability architecture for MySQL/MariaDB. Although Galera development started back in 2007, it has never been a part of the official Oracle MySQL version, while both the Percona and MariaDB flavors have shipped a Galera-based cluster version for years.

Galera support will be even better in MariaDB 10.1, as it will be included in the main version (and no longer in a separate cluster version), and enabling Galera clustering will be just a matter of activating the correct configuration parameters in any MariaDB server installation.
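As an illustration, a Galera-enabled node is configured through a handful of wsrep_* parameters; the values below are placeholders only, so check the MariaDB Galera documentation for your own setup:

```ini
# my.cnf fragment for a Galera cluster node (example values)
[mysqld]
wsrep_on = ON
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name = example_cluster
wsrep_cluster_address = gcomm://10.0.0.1,10.0.0.2,10.0.0.3
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
```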

7) Oracle stewardship is uncertain

Many people have expressed distrust in Oracle’s true motivations and its interest in keeping MySQL alive. As explained in point 1, Oracle wasn’t initially allowed to acquire Sun Microsystems, which owned MySQL, due to EU competition legislation. MySQL was the biggest competitor to Oracle’s original database. The European Commission, however, approved the deal after Oracle published an official promise to keep MySQL alive and competitive. That document included an expiry date, December 14th 2014, which has now passed. One can only guess what Oracle’s upper management has in mind for the future of MySQL.

Some may argue that in recent years Oracle has already weakened MySQL in subtle ways. Maybe, but in Oracle’s defense it should be noted that MySQL activities have been much more successful than, for example, OpenOffice or Hudson, which were both very quickly forked into LibreOffice and Jenkins with such momentum that the original projects dried up in less than a year.

However, given the choice between Oracle and a true open source project, the decision should not be hard for anybody who understands the value of software freedom and the evolutive benefits that stem from global collaborative development.

8) MariaDB has leapt in popularity

In 2013 there was news about Wikipedia migrating its enormous wiki system from MySQL to MariaDB and about Google using MariaDB in their internal systems instead of MySQL. One of the MariaDB Foundation’s sponsors is Automattic, the company behind WordPress.com. Other notable users include Craigslist. Fedora and openSUSE have had MariaDB as the default SQL database option for years. With the releases of Red Hat Enterprise Linux 7 and SUSE Linux Enterprise 12, both these vendors ship MariaDB instead of MySQL and promise to support their MariaDB versions for the lifetime of the major distribution releases, that is, up to 13 years.

The last big distribution to get MariaDB was Debian (and, based on it, Ubuntu). The “intent to package” bug in Debian was already filed in 2010, but it wasn’t until December 2013 that the bug finally got closed. This was thanks to Seravo staff, who took care of packaging MariaDB 5.5 for Debian, from where it also got into Ubuntu 14.04. Later we have also packaged MariaDB 10.0, which will be included in the next Debian and Ubuntu releases in the first half of 2015.

9) Compatible and easy to migrate

MariaDB 5.5 is a complete drop-in replacement for MySQL 5.5. Migrating to MariaDB is as easy as running apt-get install mariadb-server or the equivalent command on your chosen Linux flavor (which, in 2015, is likely to include MariaDB in the official repositories).

Despite the migration being easy, we still recommend that database admins undertake their own testing and always back up their databases, just to be safe.

10) Migration might become difficult after 2015

In MariaDB 10.0 and MySQL 5.6 the forks have already started to diverge somewhat, but most likely users can still just upgrade from 5.6 to 10.0 without problems. The future compatibility between 5.7 and 10.1 is unknown, so the ideal time to migrate is now, while it is still hassle-free. If binary incompatibilities arise in the future, database admins can always still migrate their data by dumping it and importing it into the new database.

With the above in mind, MariaDB is clearly our preferred option.

One of our customers once expressed their interest in migrating from MySQL to MariaDB and wanted us to confirm whether MariaDB is bug-free. Tragically we had to disappoint them with a negative answer. However we did assure them that the most important things are done correctly in MariaDB making it certainly worth migrating to.

Thursday, 08 January 2015

FP-Day 2015

Jelle Hermsen » English | 12:18, Thursday, 08 January 2015

Tomorrow I will be giving a talk on visual programming languages at the University of Twente during FP-Day 2015. I will later post my slides and full text here, for your enjoyment.

Here’s the abstract for my talk:

Visual languages: functional programming in the era of jab and smoosh
Abstract: Why are we still typing our code? We have made the jump from keypunch machines to text terminals, but our programming hasn’t fully entered the bird slinging, candy crushing age of touchy swipy goodness. Are programmers the latest luddites or is it impossible to graphically capture the complexity of computing? In this talk we will look at visual programming and especially the way in which a pure functional approach might help create a general-purpose language that is both easy on the eyes and in touch with modern times.

Wednesday, 07 January 2015

Making the Best of a Bad Deal

Paul Boddie's Free Software-related blog » English | 22:33, Wednesday, 07 January 2015

I had the opportunity over the holidays to browse the January 2015 issue of “Which?” – the magazine of the Consumers’ Association in Britain – which, amongst other things, covered the topic of “technology ecosystems“. Which? has a somewhat patchy record when technology matters are taken into consideration: on the one hand, reviews consider the practical and often mundane aspects of gadgets such as battery life, screen brightness, and so on, continuing their tradition of giving all sorts of items a once over; on the other hand, issues such as platform choice and interoperability are typically neglected.

Which? is very much pitched at the “empowered consumer” – someone who is looking for a “good deal” and reassurances about an impending purchase – and so the overriding attitude is one that is often in evidence in consumer societies like Britain: what’s in it for me? In other words, what goodies will the sellers give me to persuade me to choose them over their competitors? (And aren’t I lucky that these nice companies are throwing offers at me, trying to win my custom?) A treatment of ecosystems should therefore be somewhat interesting reading because through the mere use of the term “ecosystem” it acknowledges that alongside the usual incentives and benefits that the readership is so keen to hear about, there are choices and commitments to be made, with potentially negative consequences if one settles in the wrong ecosystem. (Especially if others are hell-bent on destroying competing ecosystems in a “war” as former Nokia CEO Stephen Elop – now back at Microsoft, having engineered the sale of a large chunk of Nokia to, of course, Microsoft – famously threatened in another poor choice of imagery as part of what must be the one of the most insensitively-formulated corporate messages of recent years.)

Perhaps due to the formula behind such articles in Which? and similar arenas, some space is used to describe the benefits of committing to an ecosystem.  Above the “expert view” describing the hassles of switching from a Windows phone to an Android one, the title tells us that “convenience counts for a lot”. But the article does cover problems with the availability of applications and services depending on the platform chosen, and even the matter of having to repeatedly buy access to content comes up, albeit with a disappointing lack of indignation for a topic that surely challenges basic consumer rights. The conclusion is that consumers should try and keep their options open when choosing which services to use. Sensible and uncontroversial enough, really.

The Consequences of Apathy

But sadly, Which? is once again caught in a position of reacting to technology industry change and the resulting symptoms of a deeper malaise. When reviewing computers over the years, the magazine (and presumably its sister publications) always treated the matter of platform choice with a focus on “PCs and Macs” exclusively, with the latter apparently being “the alternative” (presumably in a feeble attempt to demonstrate a coverage of choice that happens to exist in only two flavours). The editors would most likely protest that they can only cover the options most widely available to buy in a big-name store and that any lack of availability of a particular solution – say, GNU/Linux or one of the freely available BSDs – is the consequence of a lack of consumer interest, and thus their readership would also be uninterested.

Such an unwillingness to entertain genuine alternatives, and to act in the interests of members of their audience who might be best served by those solutions, demonstrates that Which? is less of a leader in consumer matters than its writers might have us believe. Refusing to acknowledge that Which? can and does drive demand for alternatives, only to then whine about the bundled products of supposed consumer interest, demonstrates a form of self-imposed impotence when faced with the coercion of proprietary product upgrade schedules. Not everyone – even amongst the Which? readership – welcomes the impending vulnerability of their computing environment as another excuse to go shopping for shiny new toys, nor should they be thankful that Which? has done little or nothing to prevent the situation from occurring in the first place.

Thus, Which? has done as much as the rest of the mainstream technology press to unquestioningly sustain the monopolistic practices of the anticompetitive corporate repeat offender, Microsoft, with only a cursory acknowledgement of other platforms in recent years, qualified by remarks that Free Software alternatives such as GNU/Linux and LibreOffice are difficult to get started with or not what people are used to. After years of ignoring such products and keeping them marginalised, this would be the equivalent of denying someone the chance to work and then criticising them for not having a long list of previous employers to vouch for them on their CV.  Noting that 1.1-billion people supposedly use Microsoft Office (“one in seven people on the planet”) makes for a nice statistic in the sidebar of the print version of the article, but how many people have a choice in doing so or, for that matter, in using other Microsoft products bundled with computers (or foisted on office workers or students due to restrictive or corrupt workplace or institutional policies)? Which? has never been concerned with such topics, or indeed the central matter of anticompetitive software bundling, or its role in the continuation of such practices in the marketplace: strange indeed for a consumer advocacy publication.

At last, obliged to review a selection of fundamentally different ecosystem choices – as opposed to pretending that different vendor badges on anonymous laptops provide genuine choice – Which? has had to confront the practical problems brought about by an absence of interoperability: that consumers might end up stranded with a large, non-transferable investment in something they no longer wish to be a part of. Now, the involvement of a more diverse selection of powerful corporate interests has made such matters impossible to ignore. One gets the impression that for now at least, the publication cannot wish such things away and return to the lazy days of recommending that everyone line up and pay a single corporation their dues, refusing to believe that there could be anything else out there, let alone usable Free Software platforms.

Beyond Treating the Symptoms

Elsewhere in the January issue, the latest e-mail scam is revealed. Of course, a campaign to widen the adoption of digitally-signed mail would be a start, but that is probably too much to expect from Which?: just as space is dedicated to mobile security “apps” in this issue, countless assortments of antivirus programs have been peddled and reviewed in the past, but solving problems effectively requires a leader rather than a follower. Which? may do the tedious job of testing kettles, toasters, washing-up liquids, and much more to a level of thoroughness that would exhaust most people’s patience and interest. And to the publication’s credit, a certain degree of sensible advice is offered on topics such as online safety, albeit with the usual emphasis on proprietary software for the copy of Windows that members of its readership were all forced to accept. But in technology, Which? appears to be a mere follower, suggesting workarounds rather than working to build a fair market for safe, secure and interoperable products.

It is surely time for Which? to join the dots and to join other organisations in campaigning for fundamental change in the way technology is delivered by vendors and used throughout society. Then, by upholding a fair marketplace, interoperability, access to digital signature and encryption technologies, true ownership of devices and of purchased content, and all the things already familiar to Free Software and online rights advocates, they might be doing their readership a favour after all.

Python’s email Package and the “PGP/MIME” Question

Paul Boddie's Free Software-related blog » English | 22:33, Wednesday, 07 January 2015

I vaguely follow the development of Mailpile – the Free Software, Web-based e-mail client – and back in November 2014, there was a blog post discussing problems that the developers had experienced while working with PGP/MIME (or OpenPGP as RFC 3156 calls it). A discussion also took place on the Gnupg-users mailing list, leading to the following observation:

Yes, Mailpile is written in Python and I've had to bend over backwards
in order to validate and generate signatures. I am pretty sure I still
have bugs to work out there, trying to compensate for the Python
library's flaws without rewriting the whole thing is, long term, a
losing game. It is tempting to blame the Python libraries, but the fact
is that they do generate valid MIME - after swearing at Python for
months, it dawned on me that it's probably the PGP/MIME standard that is
just being too picky.

Later, Bjarni notes…

Similarly, when generating messages I had to fork the Python lib's
generator and disable various "helpful" hacks that were randomly
mutating the behavior of the generator if it detected it was generating
an encrypted message!

Coincidentally, while working on PGP/MIME messaging in another context, I also experienced some problems with the Python email package, mentioning them on the Mailman-developers mailing list because I had been reading that list and was aware that a Google Summer of Code project had previously been completed in the realm of message signing, thus offering a potential source of expertise amongst the list members. Although I don’t think I heard anything from the GSoC participant directly, I had the benefit of advice from the same correspondent as Bjarni, even though we have been using different mailing lists!

Here‘s what Bjarni learned about the “helpful” hacks:

This is supposed to be, which is
claimed to be resolved.

Unfortunately, the special-case handling of headers for “multipart/signed” parts is presumably of limited “help”, and other issues remain. As I originally noted:

So, where the email module in Python 2.5 likes to wrap headers using tab
character indents, the module in Python 2.7 prefers to use a space for
indentation instead. This means that the module reformats data upon being
asked to provide a string representation of it rather than reporting exactly
what it received.

Why the special-casing wasn’t working for me remains unclear, and so my eventual strategy was to bypass the convenience method in the email API in order to assert some form of control over the serialisation of e-mail messages. It is interesting to note that the “fix” to the Python standard library involved changing the documentation to note the unsatisfactory behaviour and to leave the problem essentially unsolved. This may not have been unreasonable given the design goals of the email package, but it may have been better to bring the code into compliance with user expectations and to remedy what could arguably be labelled a design flaw of the software, even if it was an unintended one.

Contrary to the expectations of Python’s core development community, I still develop using Python 2 and probably won’t see any fixes to the standard library even if they do get made. So, here’s my workaround for message serialisation from my concluding message to the Mailman-developers list:

from cStringIO import StringIO
from email.generator import Generator

# given a message object from Python 2's email package...
out = StringIO()
generator = Generator(out, False, 0) # disable reformatting measures
generator.flatten(message)
# out.getvalue() now provides the serialised message
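For what it’s worth, the same trick still works with the Python 3 email package; here is my own self-contained rendering of the idea, not taken from the original message:

```python
from io import StringIO
from email.generator import Generator
from email.mime.text import MIMEText

message = MIMEText("hello")
message["Subject"] = "A deliberately long subject header that header rewrapping would otherwise reflow"

out = StringIO()
# mangle_from_=False avoids "From " escaping; maxheaderlen=0 disables
# header rewrapping, so the message comes back out as it went in
generator = Generator(out, False, 0)
generator.flatten(message)
serialised = out.getvalue()
print("Subject:" in serialised)  # -> True
```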

It’s interesting to see such problems occur for different people a few months apart. Maybe I should have been following Mailpile development a bit more closely, but with it all happening at GitHub (with its supposedly amazing but, in my experience, rather sluggish and clumsy user interface), I suppose I haven’t been able to keep up.

Still, I hope that others experiencing similar difficulties become more aware of the issues by seeing this article. And I hope that Bjarni and the Mailpile developers haven’t completely given up on OpenPGP yet. We should all be working more closely together to get usable, Free, PGP-enabled, standards-compliant mail software to as many people as possible.

Marketing mechanics: first day at phpList

Sam Tuke » Free Software | 17:13, Wednesday, 07 January 2015

Today I start a new job writing Open Source marketing tools for phpList – the email newsletter sending app. Like MailChimp and SendGrid, phpList lets users design, write, send, and track emails to large numbers of people from a simple web interface. Specifically, I’ll be working on a new API to allow other apps to […]

The top-10 “open” tech and digital moments in 2014

Think. Innovation. » Blog | 15:21, Wednesday, 07 January 2015

What happened in a year of “free and open technology”? This post highlights the main “open” tech and digital moments in 2014 for you. This top-10 is the “open” alternative specifically to The Guardian’s “Top 10 tech and digital moments in 2014”, as well as to countless other 2014 top tech lists.

But before giving you the list, something that I noticed about The Guardian’s Tech Weekly top-10 podcast. For this podcast the tech editors and journalists of The Guardian got together to rate their top 10 moments of 2014 on a 1-to-5-star scale. On average they never rated any of those moments above 3 stars! They all ended up with 1, 2 or 3 stars on average, with pretty blasé remarks on the side. And this was their own list of the top moments! What does that say about the relevance of the typical tech stuff (fluff?) covered in popular media? In any case, it was an inspiration to make this alternative top-10.

So what were the top newsworthy moments in 2014 regarding open tech? This top-10 was top-of-mind:

1. UK Government adopts the Open Document Format (ODF)

Finally it happened: a country fully adopts open formats for all of its digital files. This is of course awesome news for LibreOffice, but also for many other projects providing open standards solutions; check out the LibreOffice interview at OSCON 2014. In the Netherlands, the government has been talking about it for years, but with no results so far. As we have a saying in the Netherlands: “When one sheep crosses the dam, more will follow.” So which country is next?

“Using an open standard will mean people won’t have costs imposed on them just to view or work with information from government.”

2. Kids learn how to code

The trend has been going on for several years, but now it is becoming more recognized, structured and widespread: children learning how to write software. Especially as a major international event has put this topic on the map in Europe. If you understand Dutch, check out this De Wereld Draait Door (DWDD) episode about the event, with Neelie Kroes and Jiami teaching herself how to code with ‘copy and paste’.

3. Mozilla is experiencing turbulence

Two major events for Mozilla: the public controversy around their change of CEO and their new deal with Yahoo instead of Google as the default search engine for Firefox. With Firefox not doing so well on mobile (yet), the existential dependency on a single business deal (90% of Mozilla’s $311m revenue came from Google in 2012, so now Yahoo) and Firefox OS not yet making a dent, I am curious to see how things will develop. Personally, I would love to see Firefox OS getting a chunk of market share, as a balance to Apple’s and Google’s enormous power in mobile OS.

4. Hardware running free and open source software ‘by design’ taking off

Of course, geeks have been able to run free and open source software on their computers and smartphones for a while. However, this has hardly been mainstream yet, nor the ‘default’ for products any person can buy. This seems to be changing with some interesting options, like the Librem laptop, a “Free/Libre Software Laptop That Respects Your Essential Freedoms”. And there is still time to chip in and get one! At $1649 it is perhaps a bit pricey for most of us. But if you can: buy it! This will stimulate more production and competition, lowering prices and increasing quality. Also, there is the GeeksPhone Revolution running Firefox OS that you can buy at 123 eur (ex VAT). And very promising: the first Ubuntu Phone will launch in February 2015.

5. Freedombox releases working software, for developers

The promise of the Freedombox project is to deliver a private and secure free and open source solution for running your own home server, providing a user-friendly experience. Release version 0.2 of the software became available, which should ‘actually work’ on the Dreamplug, Raspberry Pi or VirtualBox. Check out the hands-on review or give it a try. But remember, the release is still ‘for developers’, so don’t go depending on it for everyday use!

6. Indienet is taking us into the Stratosphere

Besides “open” alternatives for laptops, smartphones and home servers becoming available, Team Indie is taking on a different challenge. And an ambitious one! The goal is to replace the problematic “cloud” solutions of companies like Google and Facebook and create “Indienet” that provides privacy, security and user control by design. At the moment the team is crowdfunding for the development of their milestone projects Pulse, Heartbeat and Waystone. Simply put, a distributed system for file storage and sharing, messaging and social networking. And then there is the non-functional prototype for the phone as well!

7. Tesla opens its patents

What a remarkable and noble move by Elon Musk: electric car company Tesla has over 200 patents that are now free for everyone to use! The reason is simple and typical of a 'goal driven' (versus 'profit driven') company. Tesla set out to revolutionize the car industry and aspires to have every new car be 'zero emission'. The company expected that the big players would jump on board once Tesla had proven the demand for, and feasibility of, great mass-produced electric cars. But that was not the case at all: the big players have proven to be the old-fashioned, self-protecting dinosaurs that everyone already believed them to be. So Tesla embraced "open": inviting anyone to come along and develop those cars using its IP free of charge.

8. OSVehicle releases TABBY design and starts collecting pre-sales interest

Perhaps this one is even cooler than Tesla and its free patents: Italian company OSVehicle's open platform for electric cars. Despite the somewhat nerdy name "Open Source Vehicle" the initiative is just amazing. The company released the designs of the TABBY "open source hardware" chassis for anyone to build on and is polling pre-sales interest for purchasing the physical thing.

9. Outernet broadcasts “best of the internet” from space

[updated 21 January 2015]
Outernet could be the most ambitious endeavor in the list: launching a bunch of satellites to provide a "broadcast internet" for everyone on the planet, free to use, forever! Yes, similar in goals to Google's (bal)loons and Facebook's satellite-laser-drones, with one tiny difference: it already works today! You can buy a "Lantern" receiver for it, or better yet, build your own with a Raspberry Pi and an old satellite dish. Note 1: one part of the Outernet software is not "open" at the moment, namely the part that transforms the incoming signals into files. Outernet is looking for an "open" alternative, or will write a new library when resources allow. As I understand it, this is a pragmatic choice they had to make and not in any way an "openwashing" strategy. Note 2: my start-up entered ATG Europe's Fast Forward Program and ATG Europe is a shareholder in Outernet, so my inclusion of Outernet in the list is not independent and could be biased.

10. Which “open” moment is missing?

Oh my… I only made it to a top-9! Do you feel that a very important moment is missing from the list? Then please let me know! I am curious to hear what you find noticeable in the "world of open".

By the way: I also made a top-10 of free and open source projects in 2014, so check them out as well.

Many thanks to Nico Rikken, a fellow FSFE fellow (pun intended) for providing his top-5 top-of-mind “open” tech and digital moments with links, included in this post.

Monday, 05 January 2015

Third CI IRC meeting and more

Mario Fux | 21:00, Monday, 05 January 2015

There was another successful IRC meeting about KDE CI before xmas and you can read about it in the summary. For our third meeting we’d like to do another Doodle to get as many people from as many timezones as possible.

And in general we’re making quite some progress. Most of the work is currently going into a QStandardPaths upstream patch to get our apps running under Mac OS X and Windows, and there is even more to read about the amazing work our SoK student Scarlett Clark is doing.

Oh and don’t forget to help our new member GCompris to get some new looks.

