Planet Fellowship (en)

Sunday, 04 October 2015

Happy birthday to the Free Software Foundation

I LOVE IT HERE » English | 08:21, Sunday, 04 October 2015

A cake with the FSF30 birthday logo on it

On 4 October 1985 Harold Abelson, Robert J. Chassell, Richard M. Stallman, Gerald Jay Sussman, and Leonard H. Tower, Jr. incorporated the Free Software Foundation, Inc. The application also included the GNU Emacs General Public License, the GNU Manifesto, and a list of software which was already written (Bison, MIT Scheme, Hack, plus several Unix utility replacements). In the application they wrote:

We believe that a good citizen shares all generally useful information with his [sic; now Richard would write "her"] neighbors who need it. Our hope is to encourage members of the public to cooperate with each other by sharing software and other useful information.

One of the major influences currently discouraging such sharing is the practice where information is “owned” by someone who permits a member of the public to have the information himself only on condition of refusing to share it with anyone else.

Our free software will provide the public with an alternative to agreeing to such conditions. By refusing the terms of commercial software and using our software instead, people will remain free to be good neighbors.

In addition, the virtues of self-reliance and independent initiative will be furthered because users of our software will have the plans with which to repair or change it.

The documents at that time still focused on non-commercial software. Later it was clarified that Free Software can also be commercial software.

But otherwise the mission did not change much. What changed is that nowadays we have many more computers around us than people in 1985 could have imagined, and they are deeply involved in all aspects of our lives. It is even more important today than back then that this technology empowers rather than restricts us.

Free Software gives every person the rights to use, study, share and improve software. Over the years we have realised that these rights also help to support other fundamental rights like freedom of speech, freedom of the press and privacy.

Today computer owners are often no longer allowed to modify the hardware and software of their computers, and people often use other people’s computers for many daily tasks. It is now more important than ever that we have organisations like the FSFs, which work for computer users’ rights.

As the President of its European sister organisation I am happy to congratulate: Happy birthday dear Free Software Foundation!!! (Now we can sing that song again.)

And thanks to all of you out there who support the software freedom movement and thereby give us the strength we need for our future challenges!

Friday, 02 October 2015

Want to be selected for Google Summer of Code 2016?

fsfe | 16:41, Friday, 02 October 2015

I've mentored a number of students in 2013, 2014 and 2015 for Debian and Ganglia and most of the companies I've worked with have run internships and graduate programs from time to time. GSoC 2015 has just finished and with all the excitement, many students are already asking what they can do to prepare and be selected for Outreachy or GSoC in 2016.

My own observation is that the more time the organization has to get to know the student, the more confident they can be selecting that student. Furthermore, the more time that the student has spent getting to know the free software community, the more easily they can complete GSoC.

Here I present a list of things that students can do to maximize their chance of selection and career opportunities at the same time. These tips are useful for people applying for GSoC itself and related programs such as GNOME's Outreachy or graduate placements in companies.


  • There is no guarantee that Google will run the program again in 2016 or any future year until Google makes an official announcement.
  • There is no guarantee that any organization or mentor (including myself) will be involved until the official list of organizations is published by Google.
  • Do not follow the advice of web sites that invite you to send pizza or anything else of value to prospective mentors.

Following the steps on this page doesn't guarantee selection. That said, people who do follow these steps are much more likely to be considered and interviewed than somebody who hasn't done any of the things in this list.

Understand what free software really is

You may hear terms like free software and open source software used interchangeably.

They don't mean exactly the same thing and many people use the term free software for the wrong things. Not all projects declaring themselves to be "free" or "open source" meet the definition of free software. Those that don't, usually as a result of deficiencies in their licenses, are fundamentally incompatible with the majority of software that does use genuinely free licenses.

Google Summer of Code is about both writing and publishing your code and it is also about community. It is fundamental that you know the basics of licensing and how to choose a free license that empowers the community to collaborate on your code well after GSoC has finished.

Please review the definition of free software early on and come back to review it from time to time. The GNU Project / Free Software Foundation have excellent resources to help you understand what a free software license is and how it works to maximize community collaboration.

Don't look for shortcuts

There is no shortcut to GSoC selection and there is no shortcut to GSoC completion.

The student stipend (USD $5,500 in 2014) is not paid to students unless they complete a minimum amount of valid code. This means that even if a student did find some shortcut to selection, it is unlikely they would be paid without completing meaningful work.

If you are the right candidate for GSoC, you will not need a shortcut anyway. Are you the sort of person who can't leave a coding problem until you really feel it is fixed, even if you keep going all night? Have you ever woken up in the night with a dream about writing code still in your head? Do you become irritated by tedious or repetitive tasks and often think of ways to write code to eliminate such tasks? Does your family get cross with you because you take your laptop to Christmas dinner or some other significant occasion and start coding? If some of these statements summarize the way you think or feel you are probably a natural fit for GSoC.

An opportunity money can't buy

The GSoC stipend will not make you rich. It is intended to make sure you have enough money to survive through the summer and focus on your project. Professional developers make this much money in a week in leading business centers like New York, London and Singapore. When you get to that stage in 3-5 years, you will not even be thinking about exactly how much you made during internships.

GSoC gives you an edge over other internships because it involves publicly promoting your work. Many companies still try to hide the potential of their best recruits for fear they will be poached or that they will be able to demand higher salaries. Everything you complete in GSoC is intended to be published and you get full credit for it. Imagine a young musician getting the opportunity to perform on the main stage at a rock festival. This is how the free software community works. It is a meritocracy and there is nobody to hold you back.

Having a portfolio of free software that you have created or collaborated on and a wide network of professional contacts that you develop before, during and after GSoC will continue to pay you back for years to come. While other graduates are being screened through group interviews and testing days run by employers, people with a track record in a free software project often find they go straight to the final interview round.

Register your domain name and make a permanent email address

Free software is all about community and collaboration. Register your own domain name as this will become a focal point for your work and for people to get to know you as you become part of the community.

This is sound advice for anybody working in IT, not just programmers. It gives the impression that you are confident and have a long term interest in a technology career.

Choosing the provider: as a minimum, you want a provider that offers DNS management, static web site hosting, email forwarding and XMPP services all linked to your domain. You do not need to choose the provider that is linked to your internet connection at home and that is often not the best choice anyway. The XMPP foundation maintains a list of providers known to support XMPP.

Create an email address within your domain name. The most basic domain hosting providers will let you forward the email address to a webmail or university email account of your choice. Configure your webmail to send replies using your personalized email address in the From header.

Update your ~/.gitconfig file to use your personalized email address in your Git commits.
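
For example, a minimal sketch of doing this on the command line (the name and email address here are illustrative placeholders, not anything from the article):

    # record your identity for all future Git commits
    git config --global user.name "Jane Example"
    git config --global user.email "jane@example.org"
    # verify what Git will put in the Author field of your commits
    git config --global --list | grep user.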

Create a web site and blog

Start writing a blog. Host it using your domain name.

Some people blog every day, other people just blog once every two or three months.

Create links from your web site to your other profiles, such as a Github profile page. This helps reinforce the pages/profiles that are genuinely related to you and avoid confusion with the pages of other developers.

Many mentors are keen to see their students writing a weekly report on a blog during GSoC so starting a blog now gives you a head start. Mentors look at blogs during the selection process to try and gain insight into which topics a student is most suitable for.

Create a profile on Github

Github is one of the most widely used software development web sites. Github makes it quick and easy for you to publish your work and collaborate on the work of other people. Create an account today and get in the habit of forking other projects, improving them, committing your changes and pushing the work back into your Github account.
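
As a rough sketch of that habit, assuming you have a Github account and have already forked a project with the "Fork" button (the account and repository names below are placeholders):

    # clone your fork of the project to your own machine
    git clone git@github.com:yourname/someproject.git
    cd someproject
    # make an improvement, then record it with a meaningful message
    git commit -a -m "Fix typo in README"
    # push the change back to your fork on Github
    git push origin master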

Github will quickly build a profile of your commits and this allows mentors to see and understand your interests and your strengths.

In your Github profile, add a link to your web site/blog and make sure the email address you are using for Git commits (in the ~/.gitconfig file) is based on your personal domain.

Start using PGP

Pretty Good Privacy (PGP) is the industry standard in protecting your identity online. All serious free software projects use PGP to sign tags in Git, to sign official emails and to sign official release files.

The most common way to start using PGP is with the GnuPG (GNU Privacy Guard) utility. It is installed by the package manager on most Linux systems.

When you create your own PGP key, use the email address involving your domain name. This is the most permanent and stable solution.

Print your key fingerprint using the gpg-key2ps command; it is in the signing-party package on most Linux systems. Keep copies of the fingerprint slips with you.
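
A minimal sketch of those steps with GnuPG, assuming the gnupg and signing-party packages are installed (the email address is an illustrative placeholder):

    # generate a new key pair; use the email address at your own domain
    gpg --gen-key
    # show the fingerprint of your new key
    gpg --fingerprint jane@example.org
    # render fingerprint slips as PostScript, ready to print and cut up
    gpg-key2ps jane@example.org > fingerprint-slips.ps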

This is what my own PGP fingerprint slip looks like. You can also print the key fingerprint on a business card for a more professional look.

Using PGP, it is recommended that you sign any important messages you send, but you do not have to encrypt the messages you send, especially if some of the people you send messages to (like family and friends) do not yet have the PGP software to decrypt them.

If using the Thunderbird (Icedove) email client from Mozilla, you can easily send signed messages and validate the messages you receive using the Enigmail plugin.

Get your PGP key signed

Once you have a PGP key, you will need to find other developers to sign it. For people I mentor personally in GSoC, I'm keen to see that you try and find another Debian Developer in your area to sign your key as early as possible.

Free software events

Try and find all the free software events in your area in the months between now and the end of the next Google Summer of Code season. Aim to attend at least two of them before GSoC.

Look closely at the schedules and find out about the individual speakers, the companies and the free software projects that are participating. For events that span more than one day, find out about the dinners, pub nights and other social parts of the event.

Try and identify people who will attend the event who have been GSoC mentors or who intend to be. Contact them before the event, if you are keen to work on something in their domain they may be able to make time to discuss it with you in person.

Take your PGP fingerprint slips. Even if you don't participate in a formal key-signing party at the event, you will still find some developers to sign your PGP key individually. You must take a photo ID document (such as your passport) for the other developer to check the name on your fingerprint slip, but you do not need to give them a copy of the ID document.

Events come in all shapes and sizes. FOSDEM is an example of one of the bigger events in Europe; there is a similarly large event in Australia. There are many, many more local events such as the Debian UK mini-DebConf in Cambridge, November 2015. Many events are either free or free for students, but please check carefully whether there is a requirement to register before attending.

On your blog, discuss which events you are attending and which sessions interest you. Write a blog during or after the event too, including photos.

Quantcast generously hosted the Ganglia community meeting in San Francisco, October 2013. We had a wild time in their offices with mini-scooters, burgers, beers and the Ganglia book. That's me on the pink mini-scooter and Bernard Li, one of the other Ganglia GSoC 2014 admins is on the right.

Install Linux

GSoC is fundamentally about free software. Linux is to free software what a tree is to the forest. Using Linux every day on your personal computer dramatically increases your ability to interact with the free software community and increases the number of potential GSoC projects that you can participate in.

This is not to say that people using Mac OS or Windows are unwelcome. I have worked with some great developers who were not Linux users. Linux gives you an edge though and the best time to gain that edge is now, while you are a student and well before you apply for GSoC.

If you must run Windows for some applications used in your course, it will run just fine in a virtual machine using VirtualBox, a free software solution for desktop virtualization. Use Linux as the primary operating system.
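
For example, a rough sketch of creating such a Windows virtual machine with VirtualBox's VBoxManage command line tool (the name and sizes are illustrative; the graphical interface works just as well):

    # create and register a new virtual machine
    VBoxManage createvm --name "Windows" --ostype Windows7_64 --register
    # give it some memory and CPUs
    VBoxManage modifyvm "Windows" --memory 2048 --cpus 2
    # create a 40GB virtual disk for the Windows installation
    VBoxManage createhd --filename Windows.vdi --size 40960
    # attach the disk and the Windows installer ISO in the VirtualBox GUI, then start the VM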

Here are links to download ISO DVD (and CD) images for some of the main Linux distributions:

If you are nervous about getting started with Linux, install it on a spare PC or in a virtual machine before you install it on your main PC or laptop. Linux is much less demanding on the hardware than Windows so you can easily run it on a machine that is 5-10 years old. Having just 4GB of RAM and 20GB of hard disk is usually more than enough for a basic graphical desktop environment although having better hardware makes it faster.

Your experiences installing and running Linux, especially if it requires some special effort to make it work with some of your hardware, make interesting topics for your blog.

Decide which technologies you know best

Personally, I have mentored students working with C, C++, Java, Python and JavaScript/HTML5.

In a GSoC program, you will typically do most of your work in just one of these languages.

From the outset, decide which language you will focus on and do everything you can to improve your competence with that language. For example, if you have already used Java in most of your course, plan on using Java in GSoC and make sure you read Effective Java (2nd Edition) by Joshua Bloch.

Decide which themes appeal to you

Find a topic that has long-term appeal for you. Maybe the topic relates to your course or maybe you already know what type of company you would like to work in.

Here is a list of some topics and some of the relevant software projects:

  • System administration, servers and networking: consider projects involving monitoring, automation, packaging. Ganglia is a great community to get involved with and you will encounter the Ganglia software in many large companies and academic/research networks. Contributing to a Linux distribution like Debian or Fedora packaging is another great way to get into system administration.
  • Desktop and user interface: consider projects involving window managers and desktop tools or adding to the user interface of just about any other software.
  • Big data and data science: this can apply to just about any other theme. For example, data science techniques are frequently used now to improve system administration.
  • Business and accounting: consider accounting, CRM and ERP software.
  • Finance and trading: consider projects like R, market data software like OpenMAMA and connectivity software (Apache Camel)
  • Real-time communication (RTC), VoIP, webcam and chat: look at the JSCommunicator or the Jitsi project
  • Web (JavaScript, HTML5): look at the JSCommunicator

Before the GSoC application process begins, you should aim to learn as much as possible about the theme you prefer and also gain practical experience using the software relating to that theme. For example, if you are attracted to the business and accounting theme, install the PostBooks suite and get to know it. Maybe you know somebody who runs a small business: help them to upgrade to PostBooks and use it to prepare some reports.

Make something

Make some small project, less than two weeks' work, to demonstrate your skills. It is important to make something that somebody will use for a practical purpose; this will help you gain experience communicating with other users through Github.

For an example, see the servlet Juliana Louback created for fixing phone numbers in December 2013. It has since been used as part of the Lumicall web site and Juliana was selected for a GSoC 2014 project with Debian.

There is no better way to demonstrate to a prospective mentor that you are ready for GSoC than by completing and publishing some small project like this yourself. If you don't have any immediate project ideas, many developers will also be able to give you tips on small projects like this that you can attempt, just come and ask us on one of the mailing lists.

Ideally, the project will be something that you would use anyway even if you do not end up participating in GSoC. Such projects are the most motivating and rewarding and usually end up becoming an example of your best work. To continue the example of somebody with a preference for business and accounting software, a small project you might create is a plugin or extension for PostBooks.

Getting to know prospective mentors

Many web sites provide useful information about the developers who contribute to free software projects. Some of these developers may be willing to be a GSoC mentor.

For example, look through some of the following:

Getting on the mentor's shortlist

Once you have identified projects that are interesting to you and developers who work on those projects, it is important to get yourself on the developer's shortlist.

Basically, the shortlist is a list of all students who the developer believes can complete the project. If I feel that a student is unlikely to complete a project or if I don't have enough information to judge a student's probability of success, that student will not be on my shortlist.

If I don't have any student on my shortlist, then a project will not go ahead at all. If there are multiple students on the shortlist, then I will be looking more closely at each of them to try and work out who is the best match.

One way to get a developer's attention is to look at bug reports they have created. Github makes it easy to see complaints or bug reports they have made about their own projects or other projects they depend on. Another way to do this is to search through their code for strings like FIXME and TODO. Projects with standalone bug trackers like the Debian bug tracker also provide an easy way to search for bug reports that a specific person has created or commented on.
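
A quick sketch of that kind of search, assuming you have cloned the project's repository:

    # list all lines mentioning FIXME or TODO, with file names and line numbers
    grep -rn -E "FIXME|TODO" .
    # or, inside a Git checkout, let Git do the searching
    git grep -n -E "FIXME|TODO"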

Once you find some relevant bug reports, email the developer. Ask if anybody else is working on those issues. Try and start with an issue that is particularly easy and where the solution is interesting for you. This will help you learn to compile and test the program before you try to fix any more complicated bugs. It may even be something you can work on as part of your academic program.

Find successful projects from the previous year

Contact organizations and ask them which GSoC projects were most successful. In many organizations, you can find the past students' project plans and their final reports published on the web. Read through the plans submitted by the students who were chosen. Then read through the final reports by the same students and see how they compare to the original plans.

Start building your project proposal now

Don't wait for the application period to begin. Start writing a project proposal now.

When writing a proposal, it is important to include several things:

  • Think big: what is the goal at the end of the project? Does your work help the greater good in some way, such as increasing the market share of Linux on the desktop?
  • Details: what are specific challenges? What tools will you use?
  • Time management: what will you do each week? Are there weeks where you will not work on GSoC due to vacation or other events? These things are permitted but they must be in your plan if you know them in advance. If an accident or death in the family cut a week out of your GSoC project, which work would you skip and would your project still be useful without that? Having two weeks of flexible time in your plan makes it more resilient against interruptions.
  • Communication: are you on mailing lists, IRC and XMPP chat? Will you make a weekly report on your blog?
  • Users: who will benefit from your work?
  • Testing: who will test and validate your work throughout the project? Ideally, this should involve more than just the mentor.

If your project plan is good enough, could you put it on Kickstarter or another crowdfunding site? This is a good test of whether or not a project is going to be supported by a GSoC mentor.

Learn about packaging and distributing software

Packaging is a vital part of the free software lifecycle. It is very easy to upload a project to Github but it takes more effort to have it become an official package in systems like Debian, Fedora and Ubuntu.

Packaging and the communities around Linux distributions help you reach out to users of your software and get valuable feedback and new contributors. This boosts the impact of your work.

To start with, you may want to help the maintainer of an existing package. Debian packaging teams are existing communities that work together and welcome new contributors. The Debian Mentors initiative is another great starting place. In the Fedora world, the place to start may be in one of the Special Interest Groups (SIGs).

Think from the mentor's perspective

After the application deadline, mentors have just 2 or 3 weeks to choose the students. This is actually not a lot of time to be certain if a particular student is capable of completing a project. If the student has a published history of free software activity, the mentor feels a lot more confident about choosing the student.

Some mentors have more than one good student while other mentors receive no applications from capable students. In this situation, it is very common for mentors to send each other details of students who may be suitable. Once again, if a student has a good Github profile and a blog, it is much easier for mentors to try and match that student with another project.



Getting into the world of software engineering is much like joining any other profession or even joining a new hobby or sporting activity. If you run, you probably have various types of shoe and a running watch and you may even spend a couple of nights at the track each week. If you enjoy playing a musical instrument, you probably have a collection of sheet music, accessories for your instrument and you may even aspire to build a recording studio in your garage (or you probably know somebody else who already did that).

The things listed on this page will not just help you walk the walk and talk the talk of a software developer, they will put you on a track to being one of the leaders. If you look over the profiles of other software developers on the Internet, you will find they are doing most of the things on this page already. Even if you are not selected for GSoC at all or decide not to apply, working through the steps on this page will help you clarify your own ideas about your career and help you make new friends in the software engineering community.

Wednesday, 30 September 2015

My Wishion of KDE – Prelude

Mario Fux | 23:04, Wednesday, 30 September 2015

Now or never – that’s what I thought yesterday when I wrote down the notes for this blog post, although that might be a bit too dramatic. Yesterday evening frustrated me and even made me angry about things going on in KDE, and thus I thought it’s finally time to start my blog series. Over the next weeks I’d like to write a blog series with the title My Wishion of KDE. Over the last two years I collected many thoughts and notes, and with this blog post I’ll put some more pressure on myself to finally arrange these notes and ideas, finish the blog posts and publish them.

I also want to do this because most of the time I prefer to improve things, move them forward, fix them and do the work, rather than just rant and criticize destructively (constructive criticism is great and important). That is Free Software at its core as well: just do it (others might or will follow).

There are quite a few things going wrong inside KDE, but I heard there was a lot of positive energy at Akademy this year, and we just finished the longest Randa Meetings yet. Although these meetings were quite exhausting for me, they were another great success and almost half the size of this year's Akademy. Just compare the group pictures. And I again met some great people, new and old, young and old, with great ideas, a lot of energy and willingness to put in some effort, and with the incubator projects I sponsor there is even more enthusiasm coming to KDE.

So this is the introduction or first part of my series about my Wishion of KDE. With this coined word I’d like to capture both my wishes for KDE on the one side and a vision for KDE on the other.

The blog series starting with this prelude will have four parts, guided by the following questions:

  • Where are we now? What is KDE currently?
  • Where and how do I see KDE in the future and the coming years? And where do I think KDE should go?
  • What should we do and change to reach this future?
  • How could we do this and what is needed to achieve this wishion? What could I offer and how could you support me in this?

As I wrote, this is my Wishion and thus just MHO (my humble opinion). Nonetheless, with the work I have done in KDE, in several areas and for quite some time, I think I have a good overview and insight, and thus my Wishion might be of some interest to other people too.

If you like this, please spread the word, translate this blog series, ask, comment, tweet and Co (I’m not the “social media” guy, although I think that I’m quite social and have been using IT in a very social way for almost two decades now).

Read you soon. At least next week.


Monday, 28 September 2015

Organizing this year’s LibreOffice conference

agger's Free Software blog | 07:43, Monday, 28 September 2015

Conference opening

Leif Lodahl opening the conference, emphasizing among other things that LibreOffice is about free software - as in freedom.

At the instigation of my colleague Leif Lodahl, I had the honour of co-organizing this year’s LibreOffice conference here in Aarhus. It was an amazing experience to see the community behind one of the world’s most important free software projects, a system with many millions of users that is a cornerstone of any attempt to migrate existing organisations away from proprietary software.

As an organizer I didn’t really have time to speak with anyone, but I did meet a lot of good people – far too many to mention, but including The Document Foundation’s director Florian Effenberger, FSFE Hamburg coordinator Eike Rathke and former FSFE employee Sam Tuke. If I manage to go to FOSDEM next year, I hope to meet some of the people again and maybe this time have a chance to talk.

There were a lot of interesting talks, many of them about some quite advanced subjects, including some of the more esoteric aspects and uses of C++11. The event was kindly hosted by the municipality of Aarhus in its new, high-tech library building Dokk1.

All in all, the conference seemed to be a success, with three days packed with talks, hackfest and other social events in the evenings and about 150 registered attendees from at least 33 countries.

See also: Announcing the LibreOffice Conference for 2015 in Aarhus, Denmark

Relaxing at LiboCon

Relaxing with a beer after the first day of the conference.

Preparing hackfest and dinner party

Preparations for the hackfest and dinner party - the latter part kindly hosted by the University of Aarhus

Attendees listening to a talk.


Saturday, 26 September 2015

Random Questions about Fairphone Source Code Availability

Paul Boddie's Free Software-related blog » English | 12:40, Saturday, 26 September 2015

I was interested to read the recent announcement about source code availability for the first Fairphone device. I’ve written before about the threat to that device’s continued viability and Fairphone’s vague position on delivering a device that properly supports Free Software. It is nice to see that the initiative takes such matters seriously and does not seem to feel that letting its partners serve up what they have lying around is sufficient. However, a few questions arise, starting with the following quote from the announcement:

We can happily say that we have recently obtained a software license from all our major partners and license holders that allows us to modify the Fairphone 1 software and release new versions to our users. Getting that license also required us to obtain rights to use and distribute Mentor Graphics’s RTOS used on the phone. (We want to thank Mentor Graphics in making it possible for us to acquire the distribution license for their RTOS, as well as other partners for helping us achieve this.)

I noted before that various portions of the software are already subject to copyleft licensing, but if we ignore those (and trust that the sources were already being made available), it is interesting to consider the following questions:

  • What is “the Fairphone 1 software” exactly?
  • Fairphone may modify the software but what about its customers?
  • What role does the Mentor Graphics RTOS have? Can it be replaced by customers with something else?
  • Do the rights to use and distribute the RTOS extend to customers?
  • Do those rights extend to the source code of the RTOS, and do those rights uphold the four freedoms?

On further inspection, some contradictions emerge, perhaps most efficiently encapsulated by the following quote:

Now that Fairphone has control over the Fairphone 1 source code, what’s next? First of all, we can say that we have no plans to stop supporting the Fairphone hardware. We will continue to apply security fixes as long as it is feasible for the years to come. We will also keep exploring ways to increase the longevity of the Fairphone 1. Possibilities include upgrading to a more recent Android version, although we would like to manage expectations here as this is still very much a longshot dependent on cooperation from license holders and our own resources.

If Fairphone has control over the source code, why is upgrading to a more recent Android version dependent on cooperation with licence holders? If Fairphone “has control” then the licence holders should already have provided the necessary permissions for Fairphone to actually take control, just as one would experience with the four freedoms. One wonders which permissions have been withheld and whether these are being illegitimately withheld for software distributed under copyleft licences.

With a new device in the pipeline, I respect the persistence of Fairphone in improving the situation, but perhaps the following quote summarises the state of the industry and the struggle for sustainable licensing and licence compliance:

It is rather unusual for a small company like Fairphone to get such a license (usually ODMs get these and handle most of the work for their clients) and it is uncommon that a company attempts and manages to obtain such a license towards the end of the economic life cycle of the product.

Sadly, original design manufacturers (ODMs) have a poor reputation: often being known for throwing binaries over the wall whilst being unable or unwilling to supply the corresponding sources, with downstream manufacturers and retailers claiming that they have no leverage to rectify such licence violations. Although the injustices and hardships of those working to supply the raw materials for products like the Fairphone, along with those of the people working to construct the devices themselves, make other injustices seem slight – thinking especially of those experienced by software developers whose copyright is infringed by dubious industry practices – dealing with unethical and untidy practices wherever they may be found should be part of the initiative’s objectives.

From what I’ve seen and heard, Fairphone 2 should have a better story for chipset support and Free Software, but again, an inspection of the message raises some awkward questions. For example:

In the coming months we are going to launch several programs that address different aspects of creating fairer software. For now, one of the best tools for us to reach these goals is to embrace open source principles. With this in mind and without further ado, we’re excited to announce that we are going to release the complete build environment for Fairphone OS on Fairphone 2, which contains the full open source code, all the tools and the binary blobs that will allow users to build their own Fairphone OS.

To be fair, binary blobs are often difficult to avoid: desktop computers often use them for various devices, and even devices like the Neo900 that emphasise a completely Free Software stack will end up using them for certain functions (mitigating this by employing other technical measures). Making the build environment available is a good thing: frequently, this aspect is overlooked and anyone requesting the source code can be left guessing about build configuration details in an exercise that is effectively a matter of doing the vendor’s licence compliance work for them. But here, we are left wondering where the open source code ends, where binary blobs will be padding out the distribution, and what those blobs are actually for.

We need to keep asking difficult questions about such matters even if what Fairphone is doing is worthy in its own right. Not only does it safeguard the interests of the customers involved, but it also helps Fairphone to focus on doing the right thing. It does feel unkind to criticise what seems like a noble initiative for not doing more when they obviously try rather hard to do the right thing in so many respects. But by doing the right thing in terms of the software as well, Fairphone can uphold its own reputation and credibility: something that all businesses need to remember, as certain very large companies have very recently discovered.

Why we convinced a Dutch government agency to use an Open Document format

André on Free Software » English | 10:57, Saturday, 26 September 2015

When it comes to the use of Open Document formats in the public administration of the Netherlands, there is no law. There is the “apply or explain” rule, which among other things means that a public administration has to use Open Standards unless it specifically explains why it can’t. As this rule has no teeth, all you can do is politely ask a civil servant to use Open Standards.

Which we did. The Antenna Office, part of the Telecom Agency, regularly publishes a document with all legal antenna systems in the country. They did this in a non-free spreadsheet format. After a tip from Kevin Keijzer, I politely requested that they change this. I got a quick reaction stating that, after receiving several similar requests, they had now decided to change to .ods with their next publication.

More information on Open Document formats is on the Document Freedom Day website.

Friday, 25 September 2015

FSFE Info booth at Rotlintstraßenfest in Frankfurt

Being Fellow #952 of FSFE » English | 12:52, Friday, 25 September 2015

Last Saturday, on Software Freedom Day, we had our first outdoor booth at a street festival (Rotlintstraßenfest) in Frankfurt. We got the exact location of the booth just the day before. After work, I jumped on my bike and had a look at the scene.

proposed location of booth

My site visit a day earlier (I still have to work on my Gimp skills ;)

We arrived in the morning and took our time to set up the pavilion and our new banner.

The friendliness and willingness to help each other at the festival was remarkable. We got electrical power and an extension cord from the tenants of the house behind us. In return, we lent them our 30m cable drum so they could start selling their vegan waffles. The barber 20m down the road allowed us to place a Freifunk router in his shop, which then meshed with the outdoor router at our booth.

The booth was manned by three people, which was still sufficient to allow us to take turns grabbing something to eat and having a look at the other booths in the beginning. We started to get busy explaining Free Software to people even while we were still setting up the booth, but shortly after the event officially started, we didn't have a quiet minute anymore until a heavy shower washed all visitors from the street for about 30 minutes. In fact, that is why I forgot to finish setting up the booth.

We offered some mothers with their strollers shelter under our pavilion and helped others cover their material. Then it turned out that our pavilion was more a device designed to filter big single rain drops and turn them into a fine spray. We lost about 20 leaflets from the top of the stacks. They got soaking wet by the time we realized what was going on and could save the rest of our stock.

Umbrella underneath the pavilion


But as soon as the rain stopped, people returned just as quickly and the three of us continued informing them about Free Software and the FSFE. Sometimes people even lined up or squeezed their arms between the people we were talking with to grab a leaflet for themselves.

Besides the standard leaflets anyone can order free of charge (with the option to make a donation – nudge, nudge), we had a modified version of the handouts the Munich group created. Links to the PDF versions are on our wiki page; the source code is also available of course (under CC0).

The stream of interested people didn’t stop. It was after 20:00 when I decided to start dismantling the booth while Thomas was still talking to a couple with a bunch of questions. They kept talking until almost everything was packed and ready for transport. The couple helped pick up the last few items and helped us carry all the material to an area where cars were allowed again. Did I mention already that it was a remarkably friendly atmosphere?

Despite the heavy rain shower and a few drizzling moments, the weather was much better than predicted. Looking at pictures from previous years, the weather forecast probably kept a lot of people from coming. But as I said, we were quite busy serving those who did come!

We also learned a few practical lessons for future booths. As time allows, I’ll add them to the wiki page.

So, there is not much more to say than that this booth was an enormous success for our group. Apparently, we were visible enough to be noticed by a journalist who mentioned “Free Software” in his report about the festival in a local newspaper.

Thanks to Björn for suggesting the booth at this nice event in the first place, making the first contact with the organizers, organizing the internet connection and staying with us at the booth even though he was in pain. Also a big thank you to Thomas, who took part in a workshop with our friends at the CCC Frankfurt where he built the Freifunk outdoor router for our booth. This allowed us to present one of the many nice things one can do with Free Software. Most visitors had already heard about Freifunk in connection with the refugee shelters in the area. And thanks also to Sven, who would have loved to come but couldn’t attend the booth due to unforeseen circumstances.



Photos: boot with material for the booth · one of the first visitors · Freifunk available via an outdoor router · the localized handouts · FSFE booth on the right · people coming back after the rain · booth work until way after sunset



Wednesday, 23 September 2015

How to travel the intergalactic way - a Free Software trip to Kiel

Mäh? | 20:39, Wednesday, 23 September 2015

As every year, Software Freedom Day took place on the third Saturday of September. One of the places you could go on that day is the Kieler Linuxtage, which took place on the 18th and 19th of September this year.

Since 2010 I have been making a small trip from Hamburg to Kiel to represent the Free Software Foundation Europe there with a small booth. In 2014 there was also a talk about the FreeYourAndroid campaign; this year I extended this even further and, besides the talk, introduced F-Droid to the visitors in a small workshop.

As you can see, things are developing; even the Fellowship Meetings in Hamburg are growing (our last meeting on the 14th of September hosted 9 people!). In August we had a new guy around who had moved from Darmstadt to Hamburg, and he said he would join me for the trip to Kiel. So this year, for the first time, I wasn't travelling to Kiel alone. A big thanks to Pascal [2] for supporting me - and especially for supporting the FSFE and Free Software!

In fact, without Pascal I wouldn't have had that much to tell about the applications you can get in F-Droid, as he pointed out a lot of apps to find there and supported the workshop as well.

At such Free Software events you really learn what the term "Community" means: you even meet people from the local Freifunk in Kiel who offer you a place to stay for the night - well, in fact, this bed was reserved for Edward Snowden, but as he wasn't there that night… - you see…

The local hackers from Kiel, called Toppoint, are great as well: they can create intergalactic traveller's passports for you. As I already had one, I created an intergalactic diplomatic passport this year, originally in blue, and you can trust me - this is a real document with security features in it which you can only see under black light! Here are some pictures of it:

All in all, we can say this was a small but nice and successful free software event, and I am sure I will go there for the next five years (at least) as well.

By the way, you are welcome to join us at one of our FSFE Fellowship Meetings in Hamburg.

The only way to ensure the VW scandal never happens again

fsfe | 07:48, Wednesday, 23 September 2015

The automotive industry has been in the spotlight after a massive scandal at Volkswagen, using code hidden in the engine management software to cheat emissions tests.

What else is hidden in your new car's computer?

Every technology we use in our lives is becoming more computerized, even light bulbs and toilet seats.

In a large piece of equipment like a car, there are many opportunities for computerization. In most cases, consumers aren't even given a choice whether or not they want software in their car.

It has long been known that such software is spying on the habits of the driver and this data is extracted from the car when it is serviced and uploaded to the car company. Car companies are building vast databases about the lifestyles, habits and employment activities of their customers.

Computers aren't going away, so what can be done?

Most people realize that computers aren't going to go away any time soon. That doesn't mean that people have to put up with these deceptions and intrusions on our lives.

For years, many leading experts in the software engineering world have been promoting the benefits and principles of free software.

What we mean by free is that users, regulators and other independent experts should have the freedom to see and modify the source code in the equipment that we depend on as part of modern life. In fact, experts generally agree that there is no means other than software freedom to counter the might of corporations like Volkswagen and their potential to misuse that power, as demonstrated in the emissions testing scandal.

If Governments and regulators want to be taken seriously and protect society, isn't it time that they insisted that the car industry replaces all hidden code with free and open source software?

Tuesday, 22 September 2015

Why I won’t buy the Purism Librem

tobias_platen's blog | 18:33, Tuesday, 22 September 2015

Purism claims to respect your privacy, security, and freedom. Unfortunately those laptops use the latest Intel hardware, which requires non-free blobs in order to boot. This makes it impossible to run libreboot on these laptops. These computers come with the Intel Management Engine (ME), which is a serious threat to users’ freedom and privacy. They claim to be able to unlock the ME so that one can use those computers without that disloyal technology. Because the boot firmware is non-free, they won’t be able to respect the users’ freedom. PureOS 2.0 seems to be based on Debian GNU/Linux, which is a non-free distribution. Older versions of PureOS were based on Trisquel GNU/Linux, which I have used since 2013 on a ThinkPenguin laptop. Some ThinkPenguin products respect the users’ freedom, but their laptops do not, because their boot firmware is non-free. They also talk about “Free and Open Source Software”, but Open Source misses the point of free software. Because software freedom is important to me, I decided to buy the Libreboot X200 that Respects Your Freedom. That fully free laptop is also much cheaper than the Purism Librem.
Freedom status of Purism and Minifree

Monday, 21 September 2015

Skype is broken! GNUnet Conversation will restore user freedom and privacy

tobias_platen's blog | 19:43, Monday, 21 September 2015

Skype is a centralized service that requires a nonfree client program. Because of the centralisation it is easy to eavesdrop on calls. Because the software is nonfree, the developer has power over the users. Instead of using Skype, one should use a Free Software replacement such as GNUnet Conversation, which provides end-to-end encryption. Decentralized networks such as GNUnet are also more resilient against outages because every user can run their own personal node.

Skype outage? reSIProcate to the rescue!

fsfe | 17:19, Monday, 21 September 2015

On Friday, the reSIProcate community released the latest beta of reSIProcate 1.10.0. One of the key features of the 1.10.x release series is support for presence (buddy/status lists) over SIP, the very thing that is currently out of action in Skype. This is just more proof that free software developers are always anticipating users' needs in advance.

reSIProcate 1.10.x also includes other cool things like support for PostgreSQL databases and Perfect Forward Secrecy on TLS.

Real free software has real answers

Unlike Skype, reSIProcate is genuine free software. You are free to run it yourself, on your own domain or corporate network, using the same service levels and support strategies that are important for you. That is real freedom.

Not sure where to start?

If you have deployed web servers and mail servers but you are not quite sure where to start deploying your own real-time communications system, please check out the RTC Quick Start Guide. You can read it online or download the PDF e-book.

Is your community SIP and XMPP enabled?

The Debian community has a federated SIP service, supporting standard SIP and WebRTC for all Debian Developers. XMPP support was tested at DebConf15 and will be officially announced very soon now.

A similar service has been developed for the Fedora community and is currently under evaluation.

Would you like to extend this concept to other free software and non-profit communities that you are involved in? If so, please feel free to contact me personally for advice about how you can replicate these successful initiatives. If your community has a Drupal web site, then you can install everything using packages and the DruCall module.

Comment and discuss

Please join the Free-RTC mailing list to discuss or comment.

Round tables: “Open Source and Software Patent Non-Aggression, European Context”, Warsaw & Berlin, October 2015

Creative Destruction & Me » FLOSS | 08:51, Monday, 21 September 2015

Successful collaboration, Open Source license compliance and innovation management go hand-in-hand for large and small innovators. FSFE and Open Invention Network, with the participation of the Legal Network and the Asian Legal Network, are inviting you to round table events with presentations and panel discussions by industry and community speakers, titled

Open Source and Software Patent Non-Aggression, European Context.

The events will be held in Berlin on 21 October and in Warsaw on 22 October. Attendance is limited – please confirm your attendance before October 15 to Nicola Feltrin of FSFE.



Saturday, 19 September 2015

German Government wants authorities to advertise PDFreaders

Max's weblog » English | 17:27, Saturday, 19 September 2015

Should authorities be allowed to advertise for only one company and ignore all the others? Many people strongly disagree, among them myself, the Free Software Foundation Europe (FSFE) and also the CIO of the Federal Republic of Germany, the IT commissioner of the German Government.

The whole story began with something we have all read at some point, at least subconsciously, on a website providing PDF documents: „To open the PDF files please download Adobe Acrobat Reader.“ Such notices are unnecessary advertisement for a proprietary (non-free) product — there are dozens of software applications which can do the same or even more, many of them Free Software. Because of that, the FSFE started a campaign called „PDFreaders“ to make this deficiency public and to contact administrations and companies with thousands of letters and emails.

One big success of this campaign in Germany is that PDFreaders is mentioned in the current official Migration Guide of Germany’s Chief Information Officer. This document explains some critical points of IT in administrations and companies and evaluates different software. Under point 4.3.7 „PDF readers and authoring“ the guide compares different PDF applications and also takes Free Software readers like Evince into account:

Alternative OSS-Produkte zur Darstellung von PDF-Dokumenten gibt es einige, u.a. Sumatra PDF und Okular; die FSFE pflegt eine Liste mit freien PDF-Betrachtern [242].

There are a lot of alternative OSS products for displaying PDF documents, i.a. Sumatra PDF and Okular; the FSFE maintains a list of free PDF readers [242].

This „list of PDF readers“ is one of the cores of the PDFreaders campaign. Instead of just complaining about the unjust situation, the FSFE provides information on various applications which are all Free Software and which fit everybody’s needs, be it performance, size, the number of functions or the operating system used. And if authorities (or companies and individuals) want to tell their website’s visitors how to open PDF documents, the CIO has a strong suggestion:

Werden PDF-Dokumente öffentlich bereitgestellt, sollten Behörden fairerweise zu deren Betrachtung nicht mehr ausschließlich den Adobe Acrobat Reader empfehlen, sondern beispielsweise die von der FSFE bereitgestellten HTML-Bausteine [250] zum Download alternativer PDF-Betrachter in ihre Seiten aufnehmen.

If PDF documents are provided publicly authorities shall no longer only recommend Adobe Acrobat Reader for displaying them, but for example use the HTML templates provided by the FSFE [250] on their websites for downloading alternative PDF readers.

Besides mentioning the broad PDF capabilities of LibreOffice, the guide also evaluates the current situation with editing PDF documents instead of only reading them, a function which some authorities seem to need for their services. According to the CIO, the alternative Free Software solutions cannot provide the same functionalities as proprietary and expensive applications. Instead of just accepting the situation, the Migration Guide asks for more initiative from officials:

Hier wäre ein behördliches Engagement zur diesbezüglichen Weiterentwicklung vorhandener OSS-Alternativen sinnvoll, um nicht in ungewollter Abhängigkeit von einzelnen Anbietern proprietärer Produkte zu verharren.

In this case more administrative engagement to extend existing OSS alternatives would make sense in order to avoid staying in unwanted dependency from single vendors of proprietary products.

So yes please, German authorities, listen to your CIO: use and help improve Free Software to keep yourselves and your citizens independent, avoid vendor lock-in, save money and open a fair market for all competitors in the race for the best PDF readers.

Friday, 18 September 2015

Upholding Freedoms of Communication

Paul Boddie's Free Software-related blog » English | 17:27, Friday, 18 September 2015

Recently, I was alerted to a blog post by Bradley Kuhn of the Software Freedom Conservancy where he describes the way in which proprietary e-mail infrastructure not only withholds the freedoms end-users should expect from their software, but where the operators of such infrastructure also stifle the free exchange of information by refusing to deliver mail, apparently denying delivery according to seemingly arbitrary criteria (if we are to be charitable about why, for example, Microsoft might block the mails sent by an organisation that safeguards the rights of Free Software users and developers).

The article acknowledges that preventing spam and antisocial activities is difficult, but it rightfully raises the possibility that if things continue in the same way they have been going, one day the only way to use e-mail will be through subscribing to an opaque service that, for all we as customers would know, censors our messages, gathers and exploits personal information about us, and prevents people from contacting each other based on the whims of the operator and whatever agenda they might have.

Solved and Boring

Sadly, things like e-mail don’t seem to get the glory amongst software and solutions developers that other methods of online communication have experienced in recent years: apparently, it’s all been about real-time chat, instant messaging, and social networking. I had a conversation a few years ago on the topic of social networking with an agreeable fellow who was developing a solution, but I rather felt that when I mentioned e-mail as the original social networking service and suggested that it could be tamed for a decent “social” experience, he regarded me as being somewhat insane to even consider e-mail for anything serious any more.

But anyway, e-mail is one of those things that has been regarded as a “solved problem“, meaning that the bulk of the work needed to support it is regarded as having been done a long time ago, and the work needed to keep it usable and up-to-date with evolving standards is probably now regarded as a chore (if anyone really thinks about that work at all, because some clearly do not). Instead of being an exciting thing bringing us new capabilities, it is at best seen as a demanding legacy that takes time away from other, more rewarding challenges. Still, we tell ourselves, there are solid Free Software solutions available to provide e-mail infrastructure, and so the need is addressed, a box is ticked, and we have nothing to worry about.

Getting it Right

Now, mail infrastructure can be an intimidating thing. As people will undoubtedly tell you, you don’t want to be putting a mail server straight onto the Internet unless you know what you are doing. And so begins the exercise of discovering what you should be doing, which either entails reading up about the matter or just paying someone else to do the thinking on your behalf, which in the latter case either takes the form of getting some outside expertise to get you set up or instead just paying someone else to provide you with a “mail solution”. In this day and age, that mail solution is quite likely to be a service – not some software that you have to install somewhere – and with the convenience of not having to manage anything, you rely completely on your service provider to do the right thing.

So to keep the software under your own control, as Bradley points out, Free Software needs to be well-documented and easy to adopt in a foolproof way. One might regard “foolproof” as an unkind term, but nobody should need to be an expert in everything, and everybody should be able to start out on the path to understanding without being flamed for being ignorant of the technical details. Indeed, I would go further and say that Free Software should lend itself to secure-by-default deployment which should hold up when integrating different components, as opposed to finger-pointing and admonishments when people try and figure out the best practices themselves. It is not enough to point people at “how to” documents and tell them to not only master a particular domain but also to master the nuances of specific technologies to which they have only just been introduced.

Thankfully, some people understand. The FreedomBox initiative is ostensibly about letting people host their own network services at home, which one might think is mostly a matter of making a computer small and efficient enough to sit around switched on all the time, combined with finding ways to let people operate such services behind potentially restrictive ISP barriers. But the work required to achieve this in a defensible and sustainable way involves providing software that is easily and correctly configured and which works properly from the moment the system is deployed. It is no big surprise that such work is being done in close association with Debian.

Signs of the Times

With regard to software that end-users see, the situation could be a lot worse. KDE’s Kontact and KMail have kept up a reasonably good experience over the years, despite signs of neglect and some fairly incoherent aspects of the user interface (at least as I see it on Debian); I guess Evolution is still out there and still gets some development attention, as is presumably the case with numerous other, less well-known mail programs; Thunderbird is still around despite signs that Mozilla presumably thought that people should have been using webmail solutions instead.

Indeed, the position of Mozilla’s leadership on Thunderbird says a lot about the times we live in. Web and mobile things – particularly mobile things – are the new cool, and if people aren’t excited by e-mail and don’t see the value in it, then developers don’t see the value in developing solutions for it, either. But sometimes those trying to focus on current fashions fail to see the value in the unfashionable, and a backlash ensued: after all, what would people end up using at work and in “the enterprise” if Thunderbird were no longer properly supported? At the same time, those relying on Thunderbird’s viability, particularly those supplying it for use in organisations, were perhaps not quite as forthcoming with support for the project as they could have been.

Ultimately, Thunderbird has kept going, which is just as well given that the Free Software cross-platform alternatives are not always obvious or necessarily as well-maintained as they could be. Again, a lesson was given (if not necessarily learned) about how neglect of one kind of Free Software can endanger the viability of Free Software in an entire area of activity.

Webmail is perhaps a slightly happier story in some ways. Roundcube remains a viable and popular Web-hosted mail client, and the project is pursuing an initiative called Roundcube Next that seeks to refactor the code in order to better support new interfaces and changing user expectations. Mailpile, although not a traditional webmail client – being more a personal mail client that happens to be delivered through a Web interface – continues to be developed at a steady pace by some very committed developers. And long-established solutions like SquirrelMail and Horde IMP still keep doing good service in many places.

Attitude Adjustment

In his article, Bradley points out that as people forsake Free Software solutions for their e-mail needs, whether deciding to use an opaque and proprietary webmail service for personal mail, or whether deciding that their organisation’s mail can entirely be delegated to a service provider, it becomes more difficult to make the case for Free Software. It may be convenient to “just get a Gmail account” and if your budget (of time and/or money) doesn’t stretch to using a provider that may be friendlier towards things like freedom and interoperability, you cannot really be blamed for taking the easiest path. But otherwise, people should be wary of what they are feeding with their reliance on such services.

And those advocating such services for others should be aware that the damage they are causing extends far beyond the impact on competing solutions. Even when everybody was told that there is a systematic programme of spying on individuals, that industrial and competitive espionage is being performed to benefit the industries of certain nations, and that sensitive data could end up on a foreign server being mined by random governmental and corporate agencies, decision-makers will gladly exhibit symptoms of denial dressed up in a theatrical display of level-headedness: making a point of showing that they are not at all bothered by such stories, which they doubt are true anyway, and will with proud ignorance more or less say so. At risk are the details of other people’s lives.

Indicating that privacy, control and sustainability are crucial to any organisation will, in the face of such fact-denial, indeed invite notions that anyone bringing such topics up is one of the “random crazy people” for doing so. And by establishing such a climate of denial and marginalisation, the forces that would control our communications are able to control the debate about such matters, belittling concerns and appealing to the myth of the benign corporation that wants nothing in return for its “free” or “great value” products and who would never do anything to hurt its customers.

We cannot at a stroke solve such matters of ignorance, but we can make it easier for people to do the right thing, and to make it more obvious and blatant when people have chosen to do the wrong thing in the face of more conveniently and appropriately doing the right thing. We can evolve our own attitudes more easily, making Free Software easier to deploy and use, and in the case of e-mail not perpetuate the myth that nothing more needs to be done.

We will always have work to do to keep our communications free and unimpeded, but the investment we need to make is insignificant in comparison to the value to society of the result.

Please welcome Matthias Kirschner, FSFE’s new President

Karsten on Free Software | 07:00, Friday, 18 September 2015

On Thursday, FSFE’s General Assembly elected Matthias Kirschner as the organisation’s new president. Having worked closely with him for a decade, I’m hugely happy to see him step up to this role. Together with our new vice president Alessandro Rubini and executive director Jonas Öberg, we’ve assembled a team of exactly the right people to lead FSFE. I’m happy to hand over the reins of the organisation to them.

Matthias has filled a number of roles in FSFE. Starting as the organisation’s first intern, he was in charge of building the Fellowship program, has coordinated the German team, and has been responsible for FSFE’s policy work in Germany. As FSFE’s vice president for the past two years, he has made a huge contribution to setting the organisation on its present successful course.

The many people who make up FSFE put huge amounts of energy into driving software freedom forward. The challenges for Free Software have changed a lot over the past decade. We constantly need to think about how to maintain and defend our autonomy and agency in an age when governments and corporations are prying into every detail of our lives, every day.

In the face of these challenges, I’m proud of what we’ve achieved these past six years. FSFE has grown into a strong advocate for users’ rights, and an important voice in the debate around privacy and autonomy. FSFE is a rare beast in the NGO landscape, combining a large community of local activists and supporters with professional policy work at the EU and national levels. Getting these different lines of work to support each other is something that requires constant attention; but it’s also very rewarding. I know that making community involvement more effective is high on the agenda of Jonas and Matthias.

FSFE is a great organisation, and it’s in great shape. This is an excellent point to hand over the wheel to Matthias and Jonas, who are a supremely capable leadership team. I will give them my full support as a member of FSFE’s General Assembly, and look forward to seeing them help FSFE grow further.

While this means a change of roles for me, it’s not goodbye by any measure. My own next step is to join Siemens Corporate Technology as a consultant, starting September 21. My role will be to support the company in the use of Free Software, and help strengthen Siemens’ integration with the global Free Software community. Of course, I’ll also continue my volunteer engagement with FSFE as a member of the General Assembly.

I’ll see you around!

Wednesday, 16 September 2015

Randa Meetings 2015 are History – But …

Mario Fux | 19:30, Wednesday, 16 September 2015

I’m exhausted and tired, but it was great and a lot was achieved. As people are just starting to report about it and publish blog posts, we decided to prolong the fundraiser for another two weeks, so it will now officially end on 30 September 2015. The reason for this extension is the shaky internet connection we had in Randa during the last week. Most people will report on what they did and achieved over the next days.

And if you are interested you can still check out what was planned for the Meetings in the middle of the Swiss Alps. There are some notes about the achievements too. So don’t stop supporting this great way of bringing you the software and freedom you love.


Tuesday, 15 September 2015

Update, conference, hackfest, etc.

agger's Free Software blog | 18:50, Tuesday, 15 September 2015

I’ve been far too busy to write much here about my activities. What have I been working on?

Free software. And permaculture and other things, but mainly free software. The last weeks, we’ve been working on organizing this year’s LibreOffice conference in Aarhus. As part of that, we’re organizing Denmark’s first LibreOffice hackfest ever in Open Space Aarhus, our local hackerspace.

To quote what I wrote earlier today:

At the time of writing, there are 42 registered attendees.

And the good news is: Everyone is welcome!

The event will focus on “C++11 in LibreOffice” and on bug triaging and bibisecting. There’s going to be drinks, snacks and dinner available.

  • The event is taking place at OSAA on September 24, starting at 17:30 hours.
  • Just before the event starts, the Aarhus C++ User’s Group will have a brief meeting in the space.
  • There will be non-alcoholic drinks, beer and snacks.
  • Concurrently, there will be a party for the non-hacking LibreOffice community in the Nygaard building just across the street, in the University’s Department of Computer Science.
  • Dinner will happen at 19:00 hours in the Nygaard building, for hackers and non-hackers alike. People will be on hand to show you the way.
  • Please register for dinner so we can order the food for you!
  • After dinner, hacking and socializing will continue until around midnight.


  • 17:30 People arrive
  • 17:45 Welcome by a member of the hacker space board, introduction to the evening’s themes
  • 18:00 Hacking and socializing
  • 19:30 Dinner
  • 20:00-00:00 Hacking and socializing

Times are approximate.

Do seize the opportunity to work with the hackers from one of the world’s largest FOSS projects!

The hackfest is organized as part of this year’s LibreOffice Conference - and I’m happy that so many conference participants will be coming to meet our vibrant local hackerspace community!

If you’re anywhere near that day: Do come!

Monday, 14 September 2015

Fellowship Interview with Nico Rikken

FSFE Fellowship Interviews | 15:27, Monday, 14 September 2015

Nico Rikken

Nico Rikken is a Fellow of the FSFE from The Netherlands with a background in electrical engineering and interests in open hardware, fabrication, digital preservation, photography and education policy, amongst other things.

Paul Boddie: It seems that we both read each other’s blogs and write about similar topics, particularly things related to open hardware, and it looks like we both follow some of the same projects. For others interested in open hardware and, more generally, hardware that supports Free Software, which projects would you suggest they keep an eye on? Which ones are perhaps the most interesting or exciting ones?

Nico Rikken: There is a strong synergy between free hardware designs and free software, as free software allows modifications corresponding to changes in free hardware designs, and free hardware designs provide the desired transparency for free software to be modified to run. And above all, the freer the stack of hardware and software, the better your freedoms are respected, as the ‘respects your freedom’ certification by the FSF recognizes. The amount of free hardware designs available is actually immense, covering many different applications. For my personal interests I’ve seen energy monitors (OpenEnergyMonitor), attempts for solar power inverters (Open Source Solar Inverter, Open Source Ecology), 3D Printers (Aleph Objects and RepRap), a video camera (Apertus), VJ tools (M-Labs), and an OpenPGP token with true random number generator (NeuG). But these projects work on task-specific hardware and software, and can remain in operation unchanged for many years.

The next frontier in free hardware development seems to me to be twofold: to develop free processor designs like lowRISC, and a modular free hardware design for generic computing like EOMA-68. In recent years there have been noteworthy projects like Novena and the OLinuXino, which provide a free hardware design solution but fail to provide free firmware or a modular approach to hardware. In that regard these projects, including the recent Librem laptop, are just wasted effort. These projects certainly provide much needed additional freedoms but lack an outlook towards the future for further improving performance and freedom. As microchips and processors in particular are only available for a limited duration before the next model comes into production, hardware designs and the corresponding firmware will have to be updated continuously. Free processor designs will allow control over the pinout and feature set of the processors, avoiding unnecessary design revisions at the lowest level. A modular hardware structure will avoid having to modify and produce all components each iteration and allows higher production counts, making production more viable. So taking this into account, I’ve only observed two projects which are important for the long-term development of free hardware designs of generic computing platforms: EOMA-68 and lowRISC. Of course I’m very interested in finding out about other efforts, as in the distributed community it is hard to know everything that is going on.

Paul Boddie: Your background appears to be much more hardware-oriented than mine, meaning that your perspective on hardware is perhaps quite different from mine, too. You have written that engineering students need to be taught Free Software. Did you establish your interest in Free Software as a consequence of your electrical engineering education, or did it happen over time through a general exposure to technology and the benefits of Free Software?

Nico Rikken: There has been quite some synergy between my formal education and my own hacker attitude. As long as I can remember I’ve been creative with technology, spanning hardware (wood, paper, fabric), electronics, and software. Probably because my dad is a power systems engineer and there was plenty of hardware and tools around in my youth. Part of the creative attitude is figuring out how to achieve a goal, figuring out how stuff works, and using readily available products and methods to speed up the process. When you are creative with digital technology, free software and free hardware designs are like oxygen. Quite notable is the fact that we had a presentation on the Creative Commons licenses in primary school by some expert, although I only recognized the importance of that moment many years later, after I had become aware of free software.

My technical development accelerated when I started my high school education. It offered theoretical and practical education, including the labs and the people. During my years in high school a friend and I worked daily alongside the school’s technical assistants to help other students with their physics experiments and do our own in the process. On the software side I did get an informatics education covering the workings of computers, the MS Office suite, SQL and basic web development, but I was never taught about free software. I had a friend whose dad was an electronics engineer and they used GNU/Linux at home. He showed it to me briefly but I only considered the look of the desktop, even though he tried to communicate the difference in the underlying technology. All this time I was an MS Windows user, using any software as long as it satisfied my feature requirements and was free of cost.

It wasn’t until I was at university for my electrical engineering education that I became aware of GNU/Linux as something relevant. It was used in the embedded systems department and was more visible, and some students were experimenting with it. When I started investigating what Linux actually was, I was struck by the technical superiority and the overall better user interface. I started dual-booting GNU/Linux Mint and was pleased with it. Switching between GNU/Linux Mint and MS Windows daily did introduce some issues, so I was in need of a solution. A friend at the time, who was quite involved in the Dutch hacking community, was using Ubuntu as his daily driver. He convinced me to switch to Ubuntu and ditch MS Windows and was a helping hand in getting me past all the tiny problems. From that moment on I’ve only used a Windows VM to do some magazine design work in Adobe InDesign, as porting the template to Scribus wasn’t worth the effort.

But more importantly, that friend, being a hacker, briefly introduced me to the concept of free software and why it was relevant. It didn’t take long before I found Stallman speeches and became aware of the vastness of the free software community. That was the moment I realized how much I had been restricted in the past, and how my own creative work was taken away from me in proprietary data formats. I had falsely assumed that freeware was the equivalent of sharing hardware plans, because that followed from how little consideration I had given to accepting software licenses or considering alternatives because of the license. Having become aware of free software changed my world view, reinforcing itself with every issue that arose. I unwillingly accepted the fact that I needed proprietary software to finish my studies, and sticking to free software certainly brought inconveniences. I have two illustrative examples from this struggle. I failed an exam partly because I had missed out on about half the formulas during the course revision, as LibreOffice wasn’t able to parse the PowerPoint file correctly. Also I wasn’t allowed to use an alternative to Matlab like Scilab as a numerical computation suite, as the examiners during the test weren’t instructed about other software tools. In retrospect I believe my education would have been better if I had been introduced to free software and the community more explicitly.

Paul Boddie: Those of us with a software background sometimes look at electrical and hardware engineers and feel that we can perhaps get away with faults in our work that people making physical infrastructure cannot. At the same time, efforts to make software engineering a proper engineering discipline have been decades-long struggles, and now we see some of the consequences in terms of reliability, privacy and security. What do you think software and hardware practitioners can learn from each other? Has any progress in the software domain, or perhaps the broader adoption of technology in society, brought new lessons for those in the hardware domain?

Nico Rikken: Software, especially the software running on a general purpose processor, can be changed more easily. This especially holds true regarding scale. I might as easily modify the hardware of my computer as I might switch my software, but hardware changes don’t really scale. Although my view is limited, I believe hardware design can learn from software by having a more rapid and distributed development cycle, relying on common building blocks (like libraries) as much as possible, and achieving automated tests based on specifications. From a development standpoint this requires diff-based hardware development with automated testing simulations. From a production standpoint this requires small batches to be produced cost-effectively for test runs, and generic testing connectivity on the boards itself. This stimulates the use of common components to avoid forced redesign or high component sourcing costs. Or to put the latter statement differently: I believe hardware development can learn from software development that a certain microchip can be good enough and it is worthwhile to have fewer models covering a similar feature set, more like the UNIX Philosophy. The 741 operational amplifier is a great example of such a default building block.

I don’t see what software can learn from electronics development that much. I however do see points of improvement based on industrial design principles. This has got to do with the way in which a product is meant to target a large audience as a single design is produced numerous times. I personally view the principles for good design by Dieter Rams to represent the pinnacle of industrial design. It recognizes the way in which a product is meant to target a wide audience, and improve their lives. I consider it to be analogous to the UNIX Philosophy, but I especially believe that user interfaces should be developed with these principles in mind. Too often interfaces seem to be an afterthought, or the levels of abstraction aren’t equivalent throughout the program. I recognise there are projects highlighting the importance of usability like GNOME, elementary OS, and LibreOffice. However too often I encounter user interfaces I consider overly complex and badly structured.

Paul Boddie: In your article about smart electrical grids you talk about fifty year timescales and planning for the longer term. And yet at the same time, with many aspects of our lives changing so fast, we now have the risk that our own devices and data might become ephemeral: that we might be throwing devices away and losing data to obsolescence. How do you think anyone can sensibly make the case for more sustainable evolution in technology when it might mean not being able to sell a more exciting product to people who just want newer and better things? And can we learn things from other industries about looking after our data and the ways in which we access it?

Nico Rikken: When considering the power distribution infrastructure, it is highly stable with hardly any moving parts, and a minimal level of wear. The systems are generally over-dimensioned, but this initial investment proves beneficial in the long run. This is very different to a computer which is nearly irrelevant within five years as a result of an evolving need. Regarding the sustainability of our technology, I’d again look at industrial design. Mark Adams, the CEO of the company Vitsoe, based around designs by Dieter Rams, has given me great insights in this regard. He considers recycling a defeat, because that means a product wasn’t suitable for reuse. This originates from the original ethos of the company, requiring a mutual commitment between company and user to allow the company to sell fewer products to more people. Taking this coherent point of view, we have to make hardware modular and easy to repair or repurpose. I think we are heading in the wrong direction as a result of miniaturization, especially if we consider the downward trend in repairability scores by iFixit.

I guess that the other way of going about this is the way 3D printing and IKEA are taking on the issue of sustainability. 3D desktop printing allows a filling factor to be defined, to reduce the amount of material used. Of course this reduces the physical strength, but this allows for material usage optimization. This is why 3D printed cars can be strong, light, and low on resources. And a plain 3D print can easily be recycled by shredding and melting, closing the material loop and only requiring tools and energy. IKEA offers modular furniture enabling reuse, but from experience I can say that it certainly shows if you’ve moved the furniture a couple of times. But the counterargument is that the production process is continuously being optimized to be low on resources. IKEA’s BESTÅ seems to be the latest and greatest on this issue, being highly modular and being made of hybrid materials of particleboard, fiberboard, honeycomb structured recycled paper filling, a foil wrap and tiny plastic shelf supports. It is optimized for recycling at the cost of reusability, but I guess that better suits the way in which the majority buys and deals with furniture.

Taking this argument of sustainability towards electronics, being able to freely replace software is a prerequisite for making electronics long-lasting. This has bugged the Fairphone, despite best intentions. We will have to protest anti-features as consumers, demanding formal legislation to protect our rights and the well-being of our society. Ideally we would go so far as to declare all patents and copyright regarding interfaces unlawful, to enable use and (re)implementation of such interfaces even if it wasn’t part of a formal standardization effort. Also the Phonebloks concept is great in that it allows products of separate lifetimes to be combined, and components to be exchanged when requirements change, rather than having to change the complete device.

Considering the specific question around data, or information in general, I have come to find my digital notes to be far less fleeting than my paper-based notes, because I can keep them at hand all the time and because I can query them. Keeping your own archives available requires the use of common open standards, as I’ve come to find. Some of my earlier creative work is still locked in proprietary formats I have no way of opening. Some of my work in the Office suite I can only open with some loss of detail, although this gets better as projects like LibreOffice are improving the compatibility with proprietary formats. Thanks to libwpd, currently part of the Document Liberation Project, I was able to settle a dispute as secretary of a student climbing association, as the details of the agreement were only available in the WordPerfect format. In that regard I understand why printed documents are preferred for archival, and why most of the communication in the energy metering industry is still ASCII-based.

I do recognise the shallowness of the store of the digital commons, especially regarding websites. As a result of the vastness of the digital media we all consume, I guess it is hard to store all data, other than in a shared resource like the Wayback Machine, which fortunately offers a service for organizations. Also I recently discovered the MHTML format for storing a website in a single open format file. I would think the digital dark age is somewhat exaggerated, given that most information produced throughout history was discarded anyway. However for the information which is actually subject to archival, retrieving it from obsolete media or proprietary formats is a challenge which increases in complexity over time.

Paul Boddie: Another one of your hardware interests that appears to overlap with one of mine is that of photography, and you describe the industry standard of Micro Four Thirds for interchangeable lens cameras. Have you been able to take a closer look at Olympus’ Open Platform Camera initiative and the “OPC Hack & Make Project” or is it as unscrutinisable for you as it is for me?

Nico Rikken: Coming from an advanced compact camera, it took me quite a while to select the system camera I desired, because I was very aware I was going to buy into a lock-in. The amount of technical differences related to the various lens mounts was quite interesting and I came to the conclusion I wanted to have as many technical solutions available as possible when using manual lenses. In a way the best option for compatibility would have been the way the Ricoh GXR did it, by making the interface between body and lens purely electronic. In this way the optical requirements are separated and all components can be updated in case the interfacing standard changes.

Ultimately I believe the optical circuit will be kept to a minimum, because the digital information can more easily be manipulated, even after the fact. I realized this regarding the focusing, as now contrast-based focusing can be faster than phase-based focusing, with the benefit of various focus-assisting technologies, which can then be displayed on the rear display or via the viewfinder. A DSLR cannot offer the focus-assisting technologies via the viewfinder, and the speed of the contrast-based focusing as required in live-view mode is significantly slower, if only due to the different lens drive. More on the innovative side, the Lytro is about more than correcting focusing afterwards: it opens up new ways for creative expression by changing perspective in a frozen shot. It is another innovative way of doing cinematography, like putting cameras on cranes, on drones, or the famous ‘bullet time’.

So regarding the Open Platform Camera initiative, based around the Olympus Air I believe it is a step forward regarding digital interoperability. Having an API available rather than image files opens up new capabilities, but I would think a physical connector with the option of a power adapter would have been better as it allows more direct control and can prevent having to recharge the batteries all the time. In that regard I believe enabling the API on current cameras would be more beneficial because I don’t believe the form-factor is actually holding people back from adopting it in their projects, considering the creations from the OPC Hack & Make Project Party in March. I assume the main drivers for the open approach are media attention, image building, testing potential niche markets, and probably selling more lenses. According to Wikipedia 11 companies have formally committed to Micro Four Thirds (MFT). Considering the available lenses even more companies offer products for the system. In that regard it seems to be the most universal lens mount standard available.

If I understand correctly Olympus is one of the major patent holders regarding digital photography, so I’m curious in what regard they exercise their patents by licensing. Regarding MFT as a standard, in terms of standardization it is said to be an extension of the original Four Thirds specification, which is said to be highly mobile, 100% digital, and an open standard, but apparently they have a different standard of openness as the same page mentions: “Details of the Four Thirds System standard are available to camera equipment manufacturers and industry organizations on an NDA basis. Full specifications cannot be provided to individuals or other educational/research entities.” Whether or not this includes license agreements regarding the standard we don’t know, but either way you’d have to start or join an imaging company to find out. Maybe the AXIOM Gamma camera will provide the needed information in the MFT module, although I doubt that will happen as a result of the NDA. Considering the number of companies working with MFT, I guess the standard is effectively open, other than for individuals or educational or research entities. Luckily work has been done to reverse engineer the electronic protocol by Lasse Beyer and Marcus Wolschon.

Paul Boddie: Do you think established manufacturers can be encouraged to produce truly open products and initiatives or do you think they are genuinely prevented from doing so by legitimate concerns about current or potential competitors and entities like patent trolls?

Nico Rikken: I hardly think so. They have a vested interest in keeping a strong grip on the market for targeting consumers, and losing the NDA means losing that grip. The Open Platform Camera Initiative by Olympus seems to be a step in the right direction; now let’s hope they see the benefit of truly opening up the standard. That would benefit niche applications like astrophotography, book scanning, photographing old negatives, lomography or IR photography. All these types of photography have specific requirements for filters, sensors, focusing or software, and opening up the specification would lower the barrier for adopting these features.

Paul Boddie: Could you imagine Micro Four Thirds becoming a genuine open standard?

Nico Rikken: Creating a motive for opening up the standard can be done using both a carrot and a stick. The carrot approach would be to complete the reverse engineering of the protocol and show what applications could benefit from an open standard. The stick approach would be to introduce an open pseudo-standard for mechanical and electronic connectivity. Ideally such a standard would sit between a mirrorless interchangeable-lens camera (MILC) and larger lenses, to allow multiple lenses to be connected with multiple bodies. As adapters start popping up for such a standard, MFT’s reputation as a universal lens mount is threatened. I haven’t looked into the serial protocols of the various lens standards, so I’m not aware how easy it would be to pull off a universal lens mount. To me a sensor-based stabilized telescope would be a great test case for reverse engineering the standard and enhancing the camera body for the benefit of the user.

Paul Boddie: You have written about privacy and education a few times, occasionally combining the two topics. I was interested to see that you covered the Microsoft Outlook app credentials-leakage fiasco that also affected users at my former (university) workplace, and you also mentioned how people are being compelled to either use proprietary software and services or be excluded from activities as important as their own education. How do you see individuals being able to maintain their own privacy and choice in such adverse circumstances? As organisations seek to push their users into “the cloud” (sometimes in contravention of applicable laws), what strategies can you envisage for Free Software to pursue to counter such threats?

Nico Rikken: I assume these solutions are introduced with the best intentions, but they bring negative side-effects regarding user freedom. Accepting licences from organizations other than the educational organization should be considered unacceptable, even implicitly via a school policy. Likewise third parties having access to personal information, including communication, should be unacceptable. Luckily some universities are deploying their own solutions; for example universities in Nordrhein-Westfalen and the University of Saskatchewan deploy solutions based on ownCloud, which is one of the ways external dependencies can be avoided. Schools should offer suitable tools with open interfacing standards for collaboration, preventing teams from adopting non-free solutions under social pressure. Using open standards and defaulting to free software is obvious. To avoid unnecessary usage information being generated, all information resources should be available for download, ideally exposing them via an API or a web standard like RSS for inclusion in common client applications.

But this is wishful thinking, as I’m aware that current policies are weak, and even those policies aren’t adhered to. Simply put, if you want to take a formal education you have to accept that your freedoms are violated. The impact can be minimized by continuously protesting the use of non-free software and Service as a Software Substitute (SaaSS). I’ve come to find that most of the time teachers don’t care as much about the software used; they just know the common proprietary solution. Having some friends to pass along information or convert documents can further reduce observability. Things get particularly difficult if no alternatives exist, or if non-free formats or templates are required.

An alternative way of getting educated is by taking part in Massive Open Online Courses (MOOCs). It seems to be the most promising way out, as content is offered according to open standards. The content availability and reusability is limited depending on the licenses, but the same holds for most educational institutions. Then there is the amount of monitoring involved, but most MOOCs allow pseudonymity unless you desire an official certificate. Assuming you use a VPN service or Tor even, this offers an unprecedented level of anonymity. Just compare this to the non-free software dominated IT systems of educational organizations, combined with the vast number of registered personal details and campus cameras. Whether or not MOOCs can replace a formal education in the coming years I don’t know, neither do I know how corporate organizations will judge MOOC-taught students.

Many thanks to Nico for answering our questions and for his continuing involvement in the Fellowship of the FSFE.

Friday, 11 September 2015

Payment for shadow work - fsfe | 07:27, Friday, 11 September 2015

A significant court ruling yesterday means that companies in the EU have to pay employees for time spent traveling to and from work.

If something seems too good to be true, it usually is. Once you get past the headline, you find that it only applies to workers who are going to appointments outside their workplace. Some companies have been careless about scheduling such appointments, sometimes leaving an employee more than 2 hours from his home at the end of the day and this new ruling may just force them to either try a little harder to schedule appointments with consideration for the employee or pay for the extra time if excessive travel sometimes occurs.

The ruling may be good news for IT workers, road congestion and the environment. With companies now becoming more efficient in managing travel time, they may well need to improve the software they use for scheduling the employee's time. With certain types of employee potentially having a shorter journey after the last appointment, this means less road congestion and less pollution.

Professionals who have to fly may also be taking note: if you work in Paris but have to fly to London one day for meetings, then your working time starts when you leave your home at 05:30 and finishes at 22:00 when you get home.

Shadow work comes in many forms

It is good to see society taking note of the hidden cost of shadow work.

What about all those other nasty and inefficient practices that take time but you don't get paid for them?

Have you ever wondered why some companies send bills and statements to you by email or post, but others expect you to waste 5 minutes of your time logging in to their web site to download the bill or statement? Should people be paid for this time too and would that force these companies to stop being so lazy?

Some companies have been trying to avoid doing the right thing by arguing that email is not secure, but this just doesn't add up: the original PGP MIME specs for email encryption were published through the IETF process in 1996. Barclaycard recently began offering to send credit card statements using encrypted PDF, proving that many of the excuses given by other companies are just that: excuses.

For developers

For people who are interested in improving the way their organization sends communications to customers, a popular choice is the Apache Camel integration framework. I've deployed Camel for customer trade notification and other purposes at several large and well known firms in financial services. It has a rich set of features for transforming messages, including a PGP data format for encryption and it is able to send messages out using a vast number of connectivity options, including many of the basic ones like SMTP, SMPP (for SMS), XMPP/Jabber and message queues.
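As a rough sketch of the pattern such a Camel route implements — transform or encrypt the message, then deliver it over SMTP — the same flow can be expressed in a few lines of Python using python-gnupg and smtplib. This is only an illustration, not Camel itself and not full PGP/MIME; the addresses and file names are placeholders, and it assumes the recipient's public key is already in the local GnuPG keyring and a mail server is listening on localhost.

#!/usr/bin/env python3
# Sketch only: encrypt a statement with PGP, then hand it to SMTP.
# Addresses, file names and keyring contents are assumptions.
import smtplib
from email.message import EmailMessage
import gnupg

gpg = gnupg.GPG()
with open("statement.pdf", "rb") as f:
    encrypted = gpg.encrypt_file(f, recipients=["customer@example.org"], armor=True)
if not encrypted.ok:
    raise RuntimeError("encryption failed: " + encrypted.status)

msg = EmailMessage()
msg["From"] = "billing@example.com"
msg["To"] = "customer@example.org"
msg["Subject"] = "Your monthly statement"
msg.set_content("Your PGP-encrypted statement is attached.")
msg.add_attachment(str(encrypted).encode("ascii"),
                   maintype="application", subtype="octet-stream",
                   filename="statement.pdf.asc")

with smtplib.SMTP("localhost") as smtp:
    smtp.send_message(msg)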

Wednesday, 09 September 2015

Birthday party at Endocode in Berlin: 30 years Free Software Foundation

Creative Destruction & Me » FLOSS | 09:42, Wednesday, 09 September 2015

On 3 October 2015 Free Software Foundation Europe invites you to the 30th birthday party of the Free Software Foundation. While the main event will take place in Boston/USA, there will be several satellite birthday parties around the world to celebrate 30 years of empowering people to control technology, and one of them will be at Endocode in Berlin.

FSF 30 year birthday graphic

The Free Software Foundation was founded in 1985 and since then promotes computer users’ rights to use, study, copy, modify, and redistribute computer programs. It also helps to spread awareness of the ethical and political issues of freedom in the use of software.

(See the original invitation here…)

The birthday party in Berlin, organised by FSFE, will take place from 15:00 to 18:00 on 3 October 2015 at: Endocode AG, Brückenstraße 5A, 10179 Berlin.

To make sure that Endocode can provide enough birthday cake and coffee, please register for the event before 15 September 2015 by sending us an e-mail with the subject “FSF30”.

Join us on 3 October, celebrating 30 years of working for software freedom!


Cyberwar never ceases

Seravo | 06:18, Wednesday, 09 September 2015

A great deal of our work as Linux system administrators is related to security. Each server we maintain is bombarded on a daily basis in a never-ending cyberwar. Some of our customers (e.g. the website of the former Finnish Minister of Foreign Affairs, government websites, high-profile political organisations and media sites) are obvious targets, but statistics show that really, any device with a public IP address gets targeted by a steady stream of attacks.

Cyber war map

Our global political system seems quite toothless on this matter; much due to the technical fact that it is next to impossible to track down the real source of a well executed cyber attack. In a case where the attacker’s IP address seems to originate from China, all we can really conclude is that China is probably the last place the attack came from.

Cyber attacks are also very cheap, and you most often can’t tell if your harasser is a government funded organisation or just a 13-year-old boy with a laptop in his parents’ basement. This is asymmetric warfare, because proper defence for cyber attacks is certainly not cheap or easy to establish.

Case example: attack on September 3rd, 2015

Most of the time, cyber attacks are just a cost of doing business we need to withstand; but unfortunately sometimes effective attacks arise from the constant bombardment. Last week our own WordPress-optimised cluster got some extra heat from somebody trying to DDOS it.

A regular denial of service (DOS) attack is easy to tackle by automatically blocking the IP source of the attack, but a distributed DOS is much harder as the requests come from many different IP addresses, so our automatic rate limiters can’t kick in. This attacker was also smart enough to have bypassed our downstream caches by making unique requests every time with requests like these:

[03/Sep/2015:10:01:37 +0300] "GET /superdry4.php?tmpl/detail.tmpl?designer=Superdry%2Fshop.tmpl%3Fmysize%3DL%2CXL;designer=Superdry%2Ftmpl%2Fshop.tmpl%3Fdesigner%3DSuperdry%2Ftmpl%2Fshop.tmpl%3Fdesigner%3DBazlen%20accessoires;art_id=531863;pg=2044 HTTP/1.1" 404
[03/Sep/2015:10:01:37 +0300] "GET /superdry4.php?tmpl/detail.tmpl?designer=Superdry%2Fshop.tmpl%3Fmysize%3DL%2CXL;designer=Superdry%2Ftmpl%2Fshop.tmpl%3Fdesigner%3DSuperdry%2Ftmpl%2Fshop.tmpl%3Fdesigner%3DSisterhood;art_id=106485;pg=81 HTTP/1.1" 404
[03/Sep/2015:10:01:37 +0300] "GET /superdry4.php?tmpl/detail.tmpl?designer=Superdry%2Fshop.tmpl%3Fmysize%3DL%2CXL;designer=Superdry%2Ftmpl%2Fshop.tmpl%3Fdesigner%3DSuperdry%2Ftmpl%2Fshop.tmpl%3Fdesigner%3DSpeedo;art_id=506899;pg=1719 HTTP/1.1" 404
[03/Sep/2015:10:01:37 +0300] "GET /superdry4.php?tmpl/detail.tmpl?designer=Superdry%2Fshop.tmpl%3Fmysize%3DL%2CXL;designer=Superdry%2Ftmpl%2Fshop.tmpl%3Fdesigner%3DSuperdry%2Ftmpl%2Fshop.tmpl%3Fdesigner%3DNikita;art_id=480217;pg=82 HTTP/1.1" 404
[03/Sep/2015:10:01:37 +0300] "GET /superdry4.php?tmpl/detail.tmpl?designer=Superdry%2Fshop.tmpl%3Fmysize%3DL%2CXL;designer=Superdry%2Ftmpl%2Fshop.tmpl%3Fdesigner%3DSuperdry%2Ftmpl%2Fshop.tmpl%3Fdesigner%3DPlomo%20o%20Plata;art_id=554941;pg=107 HTTP/1.1" 404

Using this method, the attacker managed to trigger fairly heavy PHP processing to generate the WordPress 404 pages on our client’s website, thus maximising the impact of the attack.

Our first defensive action was to quickly take the targeted site offline, and then re-launch it on a separate, more powerful host that could better withstand the massively increased load. We also took other defensive steps, like trying to block some of the attacker’s IP ranges during the attack, which proved not to be as effective in fighting the DDOS.

Remedy in this case

When the dust settled and we had time to analyse the logs, we discovered the details of this attack. Using these discoveries, we made the decision to apply more of a white-listing approach to the HTTP requests our customers’ WordPress applications receive.

The most important step was to only pass through requests for PHP files that actually exist, and then deny all other *.php requests, returning code 404 error pages instead of bothering the WordPress instances at all. This change is very conservative and has already been applied to our development version, and will be deployed to all sites during the next round of system upgrades.
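To make the rule concrete, the decision can be sketched like this in Python. This is an illustration only: in production the rule is expressed in the web server configuration rather than in application code, and the docroot path below is a placeholder.

# Illustration of the whitelisting rule described above.
import os.path

DOCROOT = "/var/www/example"   # placeholder path

def pass_to_php(request_path):
    """Only hand *.php requests to PHP if the file really exists on disk;
    everything else can be answered with a cheap 404 and never reaches
    WordPress."""
    if not request_path.endswith(".php"):
        return True   # static assets and pretty permalinks handled elsewhere
    return os.path.isfile(os.path.join(DOCROOT, request_path.lstrip("/")))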

What’s next

In the longer run we’ll start to apply even more white-listing principles, which are possible thanks to the fact that we host only WordPress sites, and we can safely assume that the sites have certain structures.

We also plan to start caching 404 responses for up to 1 minute to prevent this WordPress 404 page generation attack, even though caching 404 pages is generally discouraged. We’ve also made it easy for our client to purge the downstream caches if the 1 minute cache is just too large a window for their operation. Related to this, we’re also developing automated scanning of our clients’ access logs that could detect anomalies like large spikes in traffic so that we can notice attacks before they show any visible effects.
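As a rough illustration of that log-scanning idea (not our production tooling), a script along these lines counts requests per minute in an access log of the format shown above and flags minutes that are far above the median; the spike factor of 5 is an arbitrary example.

#!/usr/bin/env python3
# Sketch: flag per-minute request spikes in an access log with timestamps
# like [03/Sep/2015:10:01:37 +0300]. Thresholds are made-up examples.
import re
import sys
from collections import Counter
from statistics import median

TIMESTAMP = re.compile(r'\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}):\d{2} ')

def per_minute(path):
    counts = Counter()
    with open(path, errors="replace") as log:
        for line in log:
            match = TIMESTAMP.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

def spikes(counts, factor=5):
    baseline = median(counts.values())
    return {minute: n for minute, n in counts.items() if n > factor * baseline}

if __name__ == "__main__":
    counts = per_minute(sys.argv[1])
    for minute, n in sorted(spikes(counts).items()):
        print("%s: %d requests" % (minute, n))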

We will be sharing all of our upcoming cyber-fighting techniques in future blog posts. Stay tuned!

Thursday, 03 September 2015

Randa and the Importance of Code Sprints for Open Source Hobbyists

Mario Fux | 23:05, Thursday, 03 September 2015

Guest blog by Holger Kaelberer (GCompris):

This year I will participate at the Randa Meetings for the second time. The last year was a great experience and I am really grateful that there was this opportunity to get in touch with the KDE community as a new developer of the recently incubated QtQuick port of the GCompris project.

As Randa is mostly financed by donations, it is obvious that this opens the door for students and hackers who don’t have the financial means to join such an event. Working full time, I can afford to pay my travel costs myself, and personally I see the benefit of code sprint events first of all in the time they give you for your project. Before talking a bit about what I plan to work on this year at Randa, let me say some words on the importance that such code sprinting events have for open source hobbyists like me.

The Neglected Feature Branches

As probably many people involved in open source software development I work full time as a software developer and hack on open source software in my free time, because I dreamed the dream of making my hobby and my passion my job.

But — ay, there’s the rub!

When you come home after 8, 9, 10 hours of concentrated work on source code, maybe project-controlled and sometimes under time pressure, you can imagine that there is not much passion left for more hours of the same activity. Of course, there are the weekends, which leave you more time for your own projects, unless you spend them with your friends or your family and your children, whom you don’t see a lot during the week. So this dream sometimes turns into frustration about not having enough time for what you really want to do. The concrete victims of the lack of time for your hobby are a bunch of uncompleted feature branches that were started driven by a great idea, but slowly forgotten in the highs and lows of everyday life.

Now you can imagine that a whole week of time available exclusively for these feature branches brings a big smile to my face :-)

Now to the concrete feature branches I plan to work on this year in Randa:

Balancebox and Box2D in GCompris

The first one, balancebox, is about a new activity in GCompris I started last winter, which introduces a 2D physics engine in GCompris. The idea of the activity itself is simple and it should probably be placed in the “Fun” section of GCompris. The user is supposed to navigate a ball through a labyrinth of walls populated with holes and numbered buttons to a door by tilting his device. The numbered buttons have to be hit in the correct order to unlock the door. This obviously mainly targets mobile devices that provide sensor information about device rotation (on desktop platforms tilting is simulated by using keypresses) and addresses fine motor skills as well as basic numeric counting capacities of the child.

After having experimented a bit with self-written code for collision detection, needed for collision dynamics between walls and the navigated ball, which becomes more difficult with complex, non-rectangular objects, I evaluated different libraries doing this work for me. I ended up with the QML bindings of the well-known 2D physics engine Box2D by Erin Catto. As all activities in GCompris are developed only in QML and Javascript, those QML bindings integrate perfectly well with only a few wrapper elements. A bit of work had to be done to scale down the optimal dimensions of Box2D world objects (which are tuned to real world dimensions of 0.1 to 10 meters) to the smaller dimensions of my balancebox by calculating an appropriate scale factor. But once done, the engine does a good job.
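To illustrate what such a scale factor does, the conversion is just a pixels-per-metre ratio. GCompris itself does this in QML/JavaScript; the numbers below are made up for the example.

# Illustration only: mapping scene pixels to Box2D metres so that bodies
# land in the 0.1-10 m range the engine is tuned for.
BOARD_WIDTH_PX = 400.0      # width of the balancebox board on screen
BOARD_WIDTH_M = 2.0         # width we give it in the Box2D world
PIXELS_PER_METRE = BOARD_WIDTH_PX / BOARD_WIDTH_M   # 200 px per metre

def to_world(pixels):
    """Convert a length in scene pixels to Box2D metres."""
    return pixels / PIXELS_PER_METRE

def to_scene(metres):
    """Convert a length in Box2D metres back to scene pixels."""
    return metres * PIXELS_PER_METRE

print(to_world(20))    # a 20 px ball radius becomes 0.1 m, inside the sweet spot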

Once integrated, a 2D physics engine opens the door for a variety of other activities that cope with real world physics. As a next step I plan to use Box2D also for porting the Land safe activity from the Gtk+ version, where the player has to land a rocket smoothly on planet surfaces with different gravitational forces.

I am looking forward to discuss the possibility to use Step (or more precisely stepcore), KDE’s physics simulator, as an alternative physics engine with other members of the KDE Edu team in Randa.

Desktop-to-Mobile Notifications in KDE Connect

Besides working on GCompris, I’d like to benefit from my week at Randa by getting a bit closer to the KDE Connect code-base, which is still pretty new to me. Since I started using KDE’s Plasma on the desktop, I have discovered KDE Connect as a really useful tool in everyday work, and I use it mainly for file transfer and notification synchronization.

A feature I have missed in everyday use so far is the synchronization of notifications in the other direction: from desktop to mobile. That way you can get notified e.g. of incoming messages from your jabber/IRC client when you are away from the keyboard, or of any other event that is not available on the mobile side. First I hacked around this by implementing a small wrapper that proxied all Notify calls on my desktop’s DBus org.freedesktop.Notifications interface to my mobile device using a kdeconnect ping message.
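A very rough sketch of that kind of wrapper could look like the following. This is not the actual code: it assumes dbus-python and GLib, the kdeconnect-cli options used here should be checked against your own installation, and the device id is a placeholder. Newer D-Bus daemons may also require the monitoring interface instead of the eavesdrop match rule used below.

#!/usr/bin/env python3
# Sketch: eavesdrop on Notify() calls on the session bus and forward them
# to a phone as a KDE Connect ping. Device id and CLI options are assumptions.
import subprocess
import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

DEVICE_ID = "YOUR_DEVICE_ID"   # see `kdeconnect-cli --list-devices`

def forward(bus, message):
    if message.get_member() != "Notify":
        return
    args = message.get_args_list()
    if len(args) >= 5:
        summary, body = str(args[3]), str(args[4])
        subprocess.call(["kdeconnect-cli", "-d", DEVICE_ID,
                         "--ping-msg", "%s: %s" % (summary, body)])

DBusGMainLoop(set_as_default=True)
session = dbus.SessionBus()
# Watch Notify() calls addressed to the desktop notification daemon.
session.add_match_string("interface='org.freedesktop.Notifications',"
                         "member='Notify',eavesdrop='true'")
session.add_message_filter(forward)
GLib.MainLoop().run()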

That wrapper was the beginning of another pair of feature branches, which integrate this feature directly into kdeconnect-kde core and kdeconnect-android, respectively. The code is mostly working already, although there are some issues with specific Android versions. As KDE Connect is one of the major topics this year in Randa, it will be the right place to resolve these missing bits and to discuss some more questions regarding configuration of the notifications module directly with the KDE Connect developers.

The Randa Meetings will start next week, so there is still enough time for you to help make them happen by donating to the still running fundraiser campaign:

A big “Thank you!” to all donors and the organizer(s) of this event!


Wednesday, 02 September 2015

Birthday party in Berlin: 30 years Free Software Foundation

I LOVE IT HERE » English | 11:52, Wednesday, 02 September 2015

On 3 October 2015 Free Software Foundation Europe invites you to the 30th birthday party of the Free Software Foundation. While the main event will take place in Boston/USA, there will be several satellite birthday parties around the world to celebrate 30 years of empowering people to control technology, and one of them will be in Berlin.

FSF 30 year birthday graphic

The Free Software Foundation was founded in 1985 and since then promotes computer users’ rights to use, study, copy, modify, and redistribute computer programs. It also helps to spread awareness of the ethical and political issues of freedom in the use of software.

The birthday party in Berlin, organised by FSFE, will take place from 15:00 to 18:00 on 3 October 2015 at: Endocode AG, Brueckenstraße 5A, 10179 Berlin.

To make sure that FSFE’s donor Endocode can provide enough birthday cake and coffee, please register for the event before 15 September 2015 by sending us an e-mail with the subject “FSF30”.

Join us on 3 October, celebrating 30 years of working for software freedom!

Tuesday, 01 September 2015

Dissolving our association

I LOVE IT HERE » English | 12:28, Tuesday, 01 September 2015

When a group of dedicated hackers founded Free Software Foundation Europe there was no usable legal basis for establishing a Europe-wide legal entity, and it is still difficult to do so. The founders came up with the following approach: create a European “Hub” organisation as an e.V. in Germany as the central legal body, the core of the Free Software Foundation Europe, whose members would be representatives of local FSFE Chapters registered in the different European countries. They started to implement this, first the “Hub” and then a German Chapter.

Two men and a child cleaning a carpet

Chapters were meant to be “modular, local legal bodies of the FSFE, formed by the members of the FSFE from that country and sometimes guest members from other countries. Their main function would be to receive deductible donations, where possible.” They should be “integrated throughout the FSFE with their statutes, giving the national teams of FSFE the freedom and autonomy to address the local issues in the way appropriate for the cultural and social identity in those countries.”

In the years that followed it turned out that this structure is a problem for a lot of countries. There were no benefits from a local association, and often you did not need one to act as a country team for FSFE. A local association also means additional bureaucracy you have to take care of: filing reports to different authorities, and certain laws regulating how you can work together which might not fit the group’s needs.

By the end of 2014, the only other association besides “FSFE e.V.” was “FSFE Chapter Germany e.V.” The members of FSFE e.V. decided on 9 November 2014 to dissolve the chapter to get rid of bureaucratic tasks and concentrate on our mission.

But dissolving an association is not as easy as you might think it would be. It involved the following steps:

  • discussions with the bank about how to transfer the bank account to FSFE e.V.
  • informing the register court that we are dissolving. They replied that we had to send that information signed by a notary. We did that. Then they told us we had to clarify one part, namely that “one liquidator decides alone, several liquidators decide together”. Yes, I think that is obvious, but they wanted it in written form and signed by a notary. The notary also could not believe it, but again we did it.
  • sending a notice to an official announcement paper to inform the public that we will dissolve ourselves (yes, you have to pay a fee for that). The liquidation of Chapter Germany was announced on 5 May 2015 in the “Amtlicher Anzeiger” (PDF, page 16) with the date of 13 April 2015.

Now we have to wait until April next year to see whether anyone thinks we still owe them money. After that time we can again go to a notary, and then finally close down FSFE Chapter Germany e.V.

Monday, 31 August 2015

Software-defined radio on GNU/Linux

the_unconventional's blog » English | 07:00, Monday, 31 August 2015

For a long time, I’ve been using my DVB tuner to watch TV with VLC on my computers. However, many of these DVB tuners are actually capable of doing much more than just TV. If you’re lucky, and you have a decent tuner, you can turn your little dongle into a full software-defined radio receiver.

By far the best chipset on the market is the Realtek RTL2832 with the R820T2 tuner. It works with a fully free GPL’d kernel module and it does not require any firmware to operate. (So yes, it works on Trisquel and Parabola as well.) There are quite a lot of dongles around with this chipset and this tuner, but such technical details are hardly ever advertised. Finding the “right” one will take some effort.

If you’re lucky enough to have an RTL2832, you’ll be able to use the RTL-SDR suite (GPLv2) and a bunch of other FOSS tools to do all kinds of cool and useful stuff.

There’s a lot of information about the abilities of these tuners on the RTL-SDR web site, although I do often find it to be very MS Windows-centric, and therefore useless to me. It’s actually quite unfortunate that a free software project like RTL-SDR so actively advertises and recommends proprietary software and operating systems in nearly all of their tutorials. And even if they do run something on GNU/Linux, there’s hardly ever a distinction being made between “free as in freedom” and “free as in coffee”.

Therefore, I wanted to create a list of things you can do with RTL-SDR on GNU/Linux, using only free and open source software.


Getting started

First off, you’ll have to unload the DVB kernel module, because it interferes with SDR capabilities:

sudo rmmod dvb_usb_rtl28xxu

If you’re running a very recent kernel, you may also have to load the new SDR module:

sudo modprobe rtl2832_sdr

So far, I haven’t yet found a way to easily re-enable DVB capabilities without manual intervention. Reloading the module right away will make it work again, but as soon as you’ve used SDR functionality, the chip wants to be power-cycled before it starts decoding DVB streams again (even a computer reboot won’t help).

So until I find a better way, unplug and re-plug the dongle if you want to watch TV again. And of course, udev will then also reload the dvb_usb_rtl28xxu module for you.

On some distributions, you may have to set up a udev rule as well. In that case, you’ll receive warnings when you start using SDR functions, such as:

Using device #0 Generic RTL2832U OEM
usb_open error -3
Please fix the device permissions, e.g. by installing the udev rules
file rtl-sdr.rules

In order to fix it, run sudo nano /etc/udev/rules.d/20-rtl-sdr.rules and add this line to it:

SUBSYSTEM=="usb", ATTRS{idVendor}=="0bda", ATTRS{idProduct}=="2838", GROUP="adm", MODE="0666", SYMLINK+="rtl_sdr"

And restart udev by running sudo systemctl restart udev or sudo service udev restart.
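
If those service commands are not available on your system, reloading the rules directly with udevadm (and then re-plugging the dongle) should have the same effect:

sudo udevadm control --reload-rules
sudo udevadm trigger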


Software installation

For DVB reception, you probably already had a bunch of DVB tools installed. If not, here’s what I would recommend for that:

sudo apt-get install dvb-apps vlc vlc-plugin-zvbi w-scan

Please note that DVB-T (television reception) is outside the scope of this post.

For SDR, you’re going to need some other tools:

sudo apt-get install rtl-sdr sox

This will install the basis for your RTL-SDR device, but it’s really just a collection of backend, command-line tools. In order to really get going, we’re going to need some front-end software as well.
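
Before moving on, it’s worth a quick sanity check that the dongle is actually detected; the rtl_test utility from the rtl-sdr package should report the device and its tuner:

rtl_test -t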



Gqrx SDR (GPLv3+) is a relatively easy front-end for the extremely powerful but also horribly complicated GNU Radio (GPLv3+). It’s available in most distributions’ repositories:

sudo apt-get install gqrx-sdr

A very helpful beginner guide for Gqrx can be found on its website. Therefore, I won’t go into full detail about how to use it.

Just to get you started: you can change the frequency by scrolling on the digits in the top-left part of the screen. Then you set the filter to Normal and the mode to Wide FM (stereo). Once you’ve done that, you click the Start/Stop button and hopefully you’ll be able to make out a radio broadcast.

After reading the guide, I already understood most functionality and I was able to tune in to a local FM station after disabling Hardware AGC and adjusting the antenna gain under Input controls. For me, 30dB was about right, but it will depend on many conditions. Just tune in to a known working frequency and adjust the gain until you receive a decent signal.

On Ubuntu 14.04 at least, Gqrx does not ship with a .desktop file, so there will be no easy way to start it from the GUI. Therefore, I made one myself. Create a file called ~/.local/share/applications/gqrx.desktop and add this to it:

[Desktop Entry]
Type=Application
Name=Gqrx
Comment=Software defined radio receiver implemented using GNU Radio and the Qt GUI toolkit
Exec=gqrx
# Icon assumes the downloaded icon file is saved as gqrx.png (or .svg)
Icon=gqrx
Categories=AudioVideo;HamRadio;

Then download this icon and move it to ~/.local/share/icons/.



Whereas Gqrx can be used to receive FM radio, it cannot decode digital radio, such as DAB(+). There’s a lot of talk about DAB+ and about countries shutting down FM broadcasts in the (near) future, so DAB+ reception would be useful as well.

For that, we can use the SDR-J DAB Receiver (GPLv2+). Although it’s GPL licensed and written in C++, using Qt as its toolkit, the author only supplies binaries for proprietary operating systems. For GNU/Linux, it’s up to us to compile the source.

Information on the build system can be found here, but you can also download my binaries for amd64 for your convenience. (I recommend using the DAB Receiver and not the DAB Mini Receiver.)

I haven’t come around to making Debian packages yet (and I might never), so you’ll have to manually install some dependencies:

sudo apt-get install libqwt6 librtlsdr0 librtlsdr-dev

Now just click on the binary and run it. Depending on your country, you’ll have to find out which multiplex contains the most interesting channels (for the Netherlands, that would be 11C and 12C) and click the START button to scan for channels.

If all goes well, you’ll start seeing DAB+ channels and you can tune in by clicking on them once.

Once more, you may have to manually play with the antenna gain to get a working signal. SDR-J‘s gain control appears to be opposite to Gqrx‘s: a lower number means a stronger signal and vice versa. However, do not overmodulate the signal. Even though you may get a higher SNR by increasing your gain, it will most likely result in choppy audio and an excess of scanning. For me, gain levels between 10 and 12 work best, but it’s a big YMMV.


High quality FM

If DAB+ reception is poor, FM is likely the best way to use RTL-SDR for music playback. Of course, you can use Gqrx to listen to FM radio, but as you might have noticed, its sound quality isn’t all that great. After all, it wasn’t built with music playback in mind.

On GitHub, Joris van Rantwijk claims to have built a superior FM demodulator called SoftFM (GPLv2+). After testing, I have to say that this indeed seems to be the case.

You’ll have to compile it from source (it doesn’t require anything specific: just build-essential and cmake will do) or you can use my binary for amd64.

Once you’ve got it, cd to its directory and run ./softfm -f 103.8M, where 103.8M is just an example, of course.

By default, SoftFM uses automatic LNA gain levels from the tuner, which I’m not particularly fond of. Especially weak channels are often noisy that way, which can be avoided by forcing a higher gain level than the tuner would choose on its own.

So for your convenience, I made a simple bash script that asks you for the channel you want to listen to and which LNA gain level you want to use. In case you enter an unsupported value, a list of supported values will be echoed. In case you haven’t got a clue, you can also use auto.


echo "Which channel (in MHz) do you want to listen to?"
read freq

echo "Which LNA gain level (in dB) do you want to use?"
read gain

./softfm -g `echo $gain` -f `echo $freq`M



RDS has been around for a long time. It’s primarily used for radio station and traffic information in FM reception. It would be nice if we could display this information on a terminal. For that, we can use redsea (ISC License) by Oona Räisänen.

You’ll have to compile it from source (it doesn’t require anything specific: just build-essential and perl will do) or you can use my binary for amd64.

Once you’ve got it, cd to its directory and run the main perl script with a frequency as its argument, e.g. 103.2M (which is just an example, of course).

I did have to make one modification to the perl script though. The default one didn’t work for me: it would just create a file called fm which only kept on growing, while nothing was displayed on the terminal. My initial thought was that the output pipe was wrong, but it seems that rtl_fm doesn’t like the -F and -M options, and it doesn’t seem to need them for normal operation either.

In order to fix the script, I had to change one block of code (lines 220-223) from

  open $bitpipe, '-|', sprintf($rtl_fm_exe.' -f %.1f -M fm -l 0 '.
                   '-A std '.$ppm.' -F 9 -s %.1f | '.$rtl_redsea_exe,
                   $freq, FS) or die($!);

to

  open $bitpipe, '-|', sprintf($rtl_fm_exe.' -f %.1f -l 0 '.
                   '-A std '.$ppm.' -s %.1f | '.$rtl_redsea_exe,
                   $freq, FS) or die($!);


Tracking airplanes

You can also use RTL-SDR to scan for airplanes with ADS-B using Dump1090 (2-clause BSD License). You’ll have to compile it from source (it requires build-essential, pkg-config and libusb-1.0-0-dev) or you can use my binary for amd64.

Once you’ve got it, cd to its directory and run ./dump1090 --interactive. You’ll start seeing some raw data.

This, of course, is really only interesting for people who know a lot about air traffic. I am not one of those people. Luckily, Dump1090 is also able to project this data on a map. You can do that by running ./dump1090 --interactive --net. Then you fire up a browser and go to http://localhost:8080.



Other than tracking planes with Dump1090, we can also read messages transmitted between airplanes and ground stations over the Aircraft Communications Addressing and Reporting System (ACARS).

In Europe, ACARS messages are sent on the 131.725MHz frequency, using amplitude modulation. If you fire up Gqrx and tune to that frequency, you’ll notice random beeps. In order to find out what they mean, you’ll have to get your hands on a decoder. For that, I’m using rtl_acars_ng (GPLv2+).

As with almost all the other tools I’ve described, you’ll have to compile it from source or you can use my binary for amd64.

Once you have it, cd to its directory and run ./rtl_acars_ng -f 131.725M.


Weather stations

A lot of your neighbours probably have those quirky little weather stations with wireless outdoor sensors. This data is usually broadcast at 433MHz and is usable by anyone who is able to listen in on that frequency.

You’d tune in to 433MHz with Gqrx, set the mode to AM and the filter to Narrow, and you’d be able to hear some weird beeps every time the transmitter pings its data.

Of course, those beeps are rather useless unless they’re decoded into useable data. For that, we can use rtl_433 (GPLv2). As expected, you can either compile it from source yourself, or you can use my binary for amd64.
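
Once you have a compiled rtl_433 binary, running it is straightforward: with no options it should tune to the common 433.92MHz frequency by itself, or you can pass the frequency explicitly in Hz:

./rtl_433
./rtl_433 -f 433920000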


Alarm pagers: P2000

In the Netherlands, we use P2000 pagers for alarm messages used by fire fighters, ambulance personnel and policemen. Those messages are sent using Motorola’s FLEX protocol. For a long time, this protocol couldn’t be decoded on GNU/Linux, but recently support has been added to multimon-ng (GPLv2+). It hasn’t been upstreamed yet, so in order to decode P2000, you’ll have to use Craig Shelley’s fork (GPLv2+).

It’s getting boring now, but you can either compile it yourself or you can use my binary for amd64. Once you have it, open a terminal in that directory, and run the following command:

rtl_fm -f 169.65M -s 22050 -l 250 | ./multimon-ng -a FLEX -a SCOPE -t raw /dev/stdin

Thursday, 27 August 2015

Hardware Experiments with Fritzing

Paul Boddie's Free Software-related blog » English | 23:42, Thursday, 27 August 2015

One of my other interests, if you can even regard it as truly separate to my interests in Free Software and open hardware, involves the microcomputer systems of the 1980s that first introduced me to computing and probably launched me in the direction of my current career. There are many aspects of such systems that invite re-evaluation of their capabilities and limitations, leading to the consideration of improvements that could have been made at the time, as well as more radical enhancements that unashamedly employ technology that has only become available or affordable in recent years. Such “what if?” thought experiments and their hypothetical consequences are useful if we are to learn from the strategic mistakes once made by systems vendors, to have an informed perspective on current initiatives, and to properly appreciate computing history.

At the same time, people still enjoy actually using such systems today, writing new software and providing hardware that makes such continuing usage practical and sustainable. These computers and their peripherals are certainly “getting on”, and acquiring or rediscovering such old systems does not necessarily mean that you can plug them in and they still work as if they were new. Indeed, the lifetime of magnetic media and the devices that can read it, together with issues of physical decay in some components, mean that alternative mechanisms for loading and storing software have become attractive for some users, having been developed to complement or replace the cassette tape and floppy disk methods that those of us old enough to remember would have used “back in the day”.

My microcomputer of choice in the 1980s was the Acorn Electron – a cut-down, less expensive version of the BBC Microcomputer hardware platform – which supported only cassette storage in its unexpanded form. However, some expansion units added the disk interfaces present on the BBC Micro, while others added the ability to use ROM-based software. On the BBC Micro, one would plug ROM chips directly into sockets, and some expansion units for the Electron supported this method, too. The official Plus 1 expansion chose instead to support the more friendly expansion cartridge approach familiar to users of other computing and console systems, with ROM cartridges being the delivery method for games, applications and utilities in this form, providing nothing much more than a ROM chip and some logic inside a convenient-to-use cartridge.

The Motivation

A while ago, my brother, David, became interested in delivering software on cartridge for the Electron, and a certain amount of discussion led him to investigate various flash memory integrated circuits (ICs, chips), notably the AMD Am29F010 series. As technological progress continues, such devices provide a lot of storage in comparison to the ROM chips originally used with the Electron: the latter having only 16 kilobytes of capacity, whereas the Am29F010 variant chosen here has a capacity of 128 kilobytes. Meanwhile, others chose to look at EEPROM chips, notably the AT28C256 from Atmel.

Despite the manufacturing differences, both device types behave in a very similar way: a good idea for the manufacturers who could then sell products that would be compatible straight away with existing products and the mechanisms they use. In short, some kind of de-facto standard seems to apply to programming these devices, and so it should be possible to get something working with one and then switch to the other, especially if one kind becomes too difficult to obtain.

Now, some people realised that they could plug such devices into their microcomputers and program them “in place” using a clever hack where writes to the addresses that correspond to the memory provided by the EEPROM (or, indeed, flash memory device) in the computer’s normal memory map can be trivially translated into addresses that have significance to the EEPROM itself. But not routinely using such microcomputers myself, and wanting more flexibility in the programming of such devices, not to mention also avoiding the issue of getting software onto such computers so that it can be written to such non-volatile memory, it seemed like a natural course of action to try to do the programming with the help of some more modern conveniences.

And so I considered the idea of getting a microcontroller solution like the Arduino to do the programming work. Since an Arduino can be accessed over USB, a ROM image could be conveniently transferred from a modern computer and, with a suitable circuit wired up, programmed into the memory chip. ROM images can thus be obtained in the usual modern way – say, from the Internet – and then written straight to the memory chip via the Arduino, rather than having to be written first to some other medium and transferred through a more convoluted sequence of steps.


Being somewhat familiar with Arduino experimentation, the first exercise was to make the circuit that can be used to program the memory device. Here, the first challenge presented itself: the chip employs 17 address lines, 8 data lines, and 3 control lines. Meanwhile, the Arduino Duemilanove only provides 14 digital pins and 6 analogue pins, with 2 of the digital pins (0 and 1) being unusable if the Arduino is communicating with a host, and another (13) being connected to the LED and being seemingly untrustworthy. Even with the analogue pins in service as digital output pins, only 17 pins would be available for interfacing.

The pin requirements

  Arduino Duemilanove       Am29F010
  11 digital pins (2-12)    17 address pins (A0-A16)
  6 analogue pins (0-6)     8 data pins (DQ0-DQ7)
                            3 control pins (CE#, OE#, WE#)
  17 total                  28 total

So, a way of multiplexing the Arduino pins was required, where at one point in time the Arduino would be issuing signals for one purpose, these signals would then be “stored” somewhere, and then at another point in time the Arduino would be issuing signals for another purpose. Ultimately, these signals would be combined and presented to the memory device in a hopefully coherent fashion. We cannot really do this kind of multiplexing with the control signals because they typically need to be coordinated to act in a timing-sensitive fashion, so we would be concentrating on the other signals instead.

So which signals would be stored and issued later? Well, with as many address lines needing signals as there are available pins on the Arduino, it would make sense to “break up” this block of signals into two. So, when issuing an address to the memory device, we would ideally be issuing 17 bits of information all at the same time, but instead we take approximately half of them (8 bits) and issue the necessary signals for storage somewhere. Then, we would issue the other half or so (8 bits) for storage. At this point, we need only a maximum of 8 signal lines to communicate information through this mechanism. (Don’t worry, I haven’t forgotten the other address bit! More on that in a moment!)

How would we store these signals? Fortunately, I had considered such matters before and had ordered some 74-series logic chips for general interfacing, including 74HC273 flip-flop ICs. These can be given 8 bits of information and will then, upon command, hold that information while other signals may be present on its input pins. If we take two of these chips and attach their input pins to those 8 Arduino pins we wish to use for communication, we can “program” each 74HC273 in turn – one with 8 bits of an address, the other with another 8 bits – and then the output pins will be presenting 16 bits of the address to the memory chip. At this point, those 8 Arduino pins could even be doing something else because the 74HC273 chips will be holding the signal values from an earlier point in time and won’t be affected by signals presented to their input pins.

Of all the non-control signals, with 16 signals out of the way, that leaves only 8 signals for the memory chip’s data lines and that other address signal to deal with. But since the Arduino pins used to send address signals are free once the addresses are sent, we can re-use those 8 pins for the data signals. So, with our signal storage mechanism, we get away with only using 8 Arduino pins to send 24 pieces of information! We can live with allocating that remaining address signal to a spare Arduino pin.

Address and data pins

  Arduino Duemilanove    74HC273          Am29F010
  8 input/output pins    8 output pins    8 address pins (A0-A7)
                         8 output pins    8 address pins (A8-A15)
                                          8 data pins (DQ0-DQ7)
  1 output pin                            1 address pin (A16)
  9 total                                 25 total

That now leaves us with the task of managing the 3 control signals for the memory chip – to make it “listen” to the things we are sending to it – but at the same time, we also need to consider the control lines for those flip-flop ICs. Since it turns out that we need 1 control signal for each of the 74HC273 chips, we therefore need to allocate 5 additional interfacing pins on the Arduino for sending control signals to the different chips.

The final sums

  Arduino Duemilanove    74HC273                            Am29F010
  8 input/output pins    8 output pins                      8 address pins (A0-A7)
                         8 output pins                      8 address pins (A8-A15)
                                                            8 data pins (DQ0-DQ7)
  1 output pin                                              1 address pin (A16)
  3 output pins                                             3 control pins (CE#, OE#, WE#)
  2 output pins          2 control pins (CP for both ICs)
  14 total                                                  28 total

In the end, we don’t even need all the available pins on the Arduino, but the three going spare wouldn’t be enough to save us from having to use the flip-flop ICs.

With this many pins in use, and the need to connect them together, there are going to be a lot of wires in use:

The breadboard circuit with the Arduino and ICs

The result is somewhat overwhelming! Presented in a more transparent fashion, and with some jumper wires replaced with breadboard wires, it is slightly easier to follow:

An overview of the breadboard circuit

The orange wires between the two chips on the right-hand breadboard indicate how the 8 Arduino pins are connected beyond the two flip-flop chips and directly to the flash memory chip, which would sit on the left-hand breadboard between the headers inserted into that breadboard (which weren’t used in the previous arrangement).

Making a Circuit Board

It should be pretty clear that while breadboarding can help a lot with prototyping, things can get messy very quickly with even moderately complicated circuits. And while I was prototyping this, I was running out of jumper wires that I needed for other things! Although this circuit is useful, I don’t want to have to commit my collection of components to keeping it available “just in case”, but at the same time I don’t want to have to wire it up when I do need it. The solution to this dilemma was obvious: I should make a “proper” printed circuit board (PCB) and free up all my jumper wires!

It is easy to be quickly overwhelmed when thinking about making circuit boards. Various people recommend various different tools for designing them, ranging from proprietary software that might be free-of-charge in certain forms but which imposes arbitrary limitations on designs (as well as curtailing your software freedoms) through to Free Software that people struggle to recommend because they have experienced stability or functionality deficiencies with it. And beyond the activity of designing boards, the act of getting them made is confused by the range of services in various different places with differing levels of service and quality, not to mention those people who advocate making boards at home using chemicals that are, shall we say, not always kind to the skin.

Fortunately, I had heard of an initiative called Fritzing some time ago, initially in connection with various interesting products being sold in an online store, but whose store then appeared to be offering a service – Fritzing Fab – to fabricate individual circuit boards. What isn’t clear, or wasn’t really clear to me straight away, was that Fritzing is also some Free Software that can be used to design circuit boards. Conveniently, it is also available as a Debian package.

The Fritzing software aims to make certain tasks easy that would perhaps otherwise require a degree of familiarity with the practice of making circuit boards. For instance, having decided that I wanted to interface my circuit to an Arduino as a shield which sits on top and connects directly to the connectors on the Arduino board, I can choose an Arduino shield PCB template in the Fritzing software and be sure that if I then choose to get the board made, the dimensions and placement of the various connections will all be correct. So for my purposes and with my level of experience, Fritzing seems like a reasonable choice for a first board design.

Replicating the Circuit

Fritzing probably gets a certain degree of disdain from experienced practitioners of electronic design because it seems to emphasise the breadboard paradigm, rather than insisting that a proper circuit diagram (or schematic) acts as the starting point. Here is what my circuit looks like in Fritzing:

The breadboard view of my circuit in Fritzing

You will undoubtedly observe that it isn’t much tidier than my real-life breadboard layout! Having dragged a component like the Arduino Uno (mostly compatible with the Duemilanove) onto the canvas along with various breadboards, and then having dragged various other components onto those breadboards, all that remains is that we wire them up like we managed to do in reality. Here, Fritzing helps out by highlighting connections between things, so that breadboard columns appear green as wires are connected to them, indicating that an electrical connection is made and applies to all points in that column on that half of the breadboard (the upper or lower half as seen in the above image). It even highlights things that are connected together according to the properties of the device, so that any attempt to modify a connection that leads to one of the ground pins on the Arduino also highlights the other ground pins as the modification is being done.

I can certainly understand criticism of this visual paradigm. Before wiring up the real-life circuit, I typically write down which things will be connected to each other in a simple table like this:

Example connections

  Arduino    74HC273 #1    74HC273 #2    Am29F010
  A5                                     CE#
  A4                                     OE#
  A3                                     WE#
  2          CP
  3                        CP
  4          D3            D3            DQ3

If I were not concerned with prototyping with breadboards, I would aim to use such information directly and not try and figure out which size breadboard I might need (or how many!) and how to arrange the wires so that signals get where they need to be. When one runs out of points in a breadboard column and has to introduce “staging” breadboards (as shown above by the breadboard hosting only incoming and outgoing wires), it distracts from the essential simplicity of a circuit.

Anyway, once the circuit is defined, and here it really does help that upon clicking on a terminal/pin, the connected terminals or pins are highlighted, we can move on to the schematic view and try and produce something that makes a degree of sense. Here is what that should look like in Fritzing:

The schematic for the circuit in Fritzing

Now, the observant amongst you will notice that this doesn’t look very tidy at all. First of all, there are wires going directly between terminals without any respect for tidiness whatsoever. The more observant will notice that some of the wires end in the middle of nowhere, although on closer inspection they appear to be aimed at a pin of an IC but are shifted to the right on the diagram. I don’t know what causes this phenomenon, but it would seem that as far as the software is concerned, they are connected to the component. (I will come back to how components are defined and the pitfalls involved later on.)

Anyway, one might be tempted to skip over this view and try and start designing a PCB layout directly, but I found that it helped to try and tidy this up a bit. First of all, the effects of the breadboard paradigm tend to manifest themselves with connections that do not really reflect the logical relationships between components, so that an Arduino pin that feeds an input pin on both flip-flop ICs as well as a data pin on the flash memory IC may have its connectors represented by a wire first going from the Arduino to one of the flip-flop ICs, then to the other flip-flop IC, and finally to the flash memory IC in some kind of sequential wiring. Although electrically this is not incorrect, with a thought to the later track routing on a PCB, it may not be the best representation to help us think about such subsequent problems.

So, for my own sanity, I rearranged the connections to “fan out” from the Arduino as much as possible. This was at times a frustrating exercise, as those of you with experience with drawing applications might recognise: trying to persuade the software that you really did select a particular thing and not something else, and so on. Again, selecting the end of a connection causes some highlighting to occur, and the desired result is that selecting a terminal highlights the appropriate terminals on the various components and not the unrelated ones.

Sometimes that highlighting behaviour provides surprising and counter-intuitive results. Checking the breadboard layout tends to be useful because Fritzing occasionally thinks that a new connection between certain pins has been established, and it helpfully creates a “rats nest” connection on the breadboard layout without apparently saying anything. Such “rats nest” connections are logical connections that have not been “made real” by the use of a wire, and they feature heavily in the PCB view.

PCB Layout

For those of us with no experience of PCB layout who just admire the PCBs in everybody else’s products, the task of laying out the tracks so that they make electrical sense is a daunting one. Fritzing will provide a canvas containing a board and the chosen components, but it is up to you to combine them in a sensible way. Here, the circuit board actually corresponds to the Arduino in the breadboard and schematic views.

But slightly confusing as the depiction of the Arduino is in the breadboard view, the pertinent aspects of it are merely the connectors on that device, not the functionality of the device itself which we obviously aren’t intending to replicate. So, instead of the details of an actual Arduino or its functional equivalent, we instead merely see the connection points required by the Arduino. And by choosing a board template for an Arduino shield, those connection points should appear in the appropriate places, as well as the board itself having the appropriate size and shape to be an Arduino shield.

Here’s how the completed board looks:

The upper surface of the PCB design in Fritzing

Of course, I have spared you a lot of work by just showing the image above. In practice, the components whose outlines and connectors feature above need to be positioned in sensible places. Then, tracks need to be defined connecting the different connection points, with dotted “rats nest” lines directly joining logically-connected points needing to be replaced with physical wiring in the form of those tracks. And of course, tracks do not enjoy the same luxury as the wires in the other views, of being able to cross over each other indiscriminately: they must be explicitly routed to the other side of the board, either using the existing connectors or by employing vias.

The lower surface of the PCB design in Fritzing

Hopefully, you will get to the point where there are no more dotted lines and where, upon selecting a connection point, all the appropriate points light up, just as we saw when probing the details of the other layouts. To reassure myself that I probably had connected everything up correctly, I went through my table and inspected the pin-outs of the components and did a kind of virtual electrical test, just to make sure that I wasn’t completely fooling myself.

With all this done, there isn’t much more to do before building up enough courage to actually get a board made, but one important step that remains is to run the “design checks” via the menu to see if there is anything that would prevent the board from working correctly or from otherwise being made. It can be the case that tracks do cross – the maze of yellow and orange can be distracting – or that they are too close and might cause signals to go astray. Fortunately, the hours of planning paid off here and only minor adjustments needed to be done.

It should be noted that the exercise of routing the tracks is certainly not to be underestimated when there are as many connections as there are above. Although an auto-routing function is provided, it failed to suggest tracks for most of the required connections and produced some bizarre routing as well. But clinging onto the memory of a working circuit in real three-dimensional space, along with the hope that two sides of a circuit board are enough and that there is enough space on the board, can keep the dream of a working design alive!

The Components

I skipped over the matter of components earlier on, and I don’t really want to dwell on the matter too much now, either. But one challenge that surprised me given the selection of fancy components that can be dragged onto the canvas was the lack of a simple template for a 32-pin DIP (dual in-line package) socket for the Am29F010 chip. There were socket definitions of different sizes, but it wasn’t possible to adjust the number of pins.

Now, there is a parts editor in Fritzing, but I tend to run away from graphical interfaces where I suspect that the matter could be resolved in more efficient ways, and it seems like other people feel the same way. Alongside the logical definition of the component’s connectors, one also has to consider the physical characteristics such as where the connectors are and what special markings will be reproduced on the PCB’s silk-screen for the component.

After copying an existing component, ransacking the Fritzing settings files, editing various files including those telling Fritzing about my new parts, I achieved my modest goals. But I would regard this as perhaps the weakest part of the software. I didn’t resort to doing things the behind-the-scenes way immediately, but the copy-and-edit paradigm was incredibly frustrating and doesn’t seem to be readily documented in a way I could usefully follow. There is a Sparkfun tutorial which describes things at length, but one cannot help feeling that a lot of this should be easier, especially for very simple component changes like the one I needed.

The Result

With some confidence and only modest expectations of success, I elected to place an order with the Fritzing Fab service and to see what the result would end up like. This was straightforward for the most part: upload the file created by Fritzing, fill out some details (albeit not via a secure connection), and then proceed to payment. Unfortunately, the easy payment method involves PayPal, and unfortunately PayPal wants random people like myself to create an account with them before they will consider letting me make a credit card payment, which is something that didn’t happen before. Fortunately, the Fritzing people are most accommodating and do support wire transfers as an alternative payment method, and they were very responsive to my queries, so I managed to get an order submitted even more quickly than I thought might happen (considering that fabrication happens only once a week).

Just over a week after placing my order, the board was shipped from Germany, arriving a couple of days later here in Norway. Here is what it looked like:

The finished PCB from Fritzing

Now, all I had to do was to populate the board and to test the circuit again with the Arduino. First, I tested the connections using the Arduino’s 5V and GND pins with an LED in series with a resistor in an “old school” approach to the problem, and everything seemed to be as I had defined it in the Fritzing software.

Given that I don’t really like soldering things, the act of populating the board went about as well as expected, even though I could still clean up the residue from the solder a bit (which would lead me onto a story about buying the recommended chemicals that I won’t bother you with). Here is the result of that activity:

The populated board together with the Arduino

And, all that remained was the task of getting my software running and testing the circuit in its new form. Originally, I was only using 16 address pins, holding the seventeenth low, and had to change the software to handle these extended addresses. In addition, the issuing of commands to the flash memory device probably needed a bit of refinement as well. Consequently, this testing went on for a bit longer than I would have wished, but eventually I managed to successfully replicate the programming of a ROM image that had been done some time ago with the breadboard circuit.

The outcome did rely on a certain degree of good fortune: the template for the Arduino Uno is not quite compatible with the Duemilanove, but this was rectified by clipping two superfluous pins from one of the headers I soldered onto the board; two of the connections belonging to the socket holding the flash memory chip touch the outside of the plastic “power jack” socket, but not enough to cause a real problem. But I would like to think that a lot of preparation dealt with problems that otherwise might have occurred.

Apart from liberating my breadboards and wires, this exercise has provided useful experience with PCB design. And of course, you can find the sources for all of this in my repository, as well as a project page for the board on the Fritzing projects site. I hope that this account of my experiences will encourage others to consider trying it out, too. It isn’t as scary as it would first appear, after all, although I won’t deny that it was quite a bit of work!

Connecting to a server’s web interface over SSH

the_unconventional's blog » English | 13:00, Thursday, 27 August 2015

Sometimes, I have to remotely administer servers. And sometimes, those servers run a daemon that has to be configured using a web interface – e.g. CUPS and ejabberd.

In order to connect to such a web interface, you need a browser, an IP address and a port. In essence, the web interface has to be publicly accessible for that – which is not something you’d usually want. (Firewall configuration, access control, security risks, possibly needing port forwarding, and so on.)

Now there are many ways to remotely administer servers using a GUI. All of which have one thing in common: they suck. Either they’re free software and they suck (VNC, X forwarding, …) or they’re proprietary and they suck even more (RDP, NX, TeamViewer, …)
And then there’s the whole issue of how it’s outright ridiculous to install a GUI on a server in the first place.


Using SSH to connect to web interfaces

Fortunately, one can easily bind a server’s local port to a client’s local port using nothing but SSH. This means you only need to use port 22, and you can use SSH pubkey authentication and encryption for everything you do.

All it takes is this command:

ssh [username]@[hostname] -T -L [random-local-port]:localhost:[desired-server-port]

For example, what if I wanted to access the CUPS admin page as user raspberry-pi on my Raspberry Pi, with hostname lithium and IP 192.168.0.2, without having to allow TCP traffic on port 631?

  • The username would be raspberry-pi
  • The hostname would be lithium.local (or 192.168.0.2)
  • The random local port can be anything, but I used 63789
  • The server port would be 631

That would mean:

ssh raspberry-pi@lithium.local -T -L 63789:localhost:631

I had already set up public key access years ago, so all I had to do was press Enter, open localhost:63789 on the client, and enjoy.

Killing a tunnel is also easy: just hit Ctrl + C and close the terminal.


Using SSH to connect to web interfaces on other servers in your server’s LAN

So what if you want to connect to the web interface of a daemon running on another server in the same LAN as the server you have SSH access to, while the server in question is not remotely accessible? Even that is possible!

Say, for example, your server is in a LAN and has 192.168.1.2 as its internal IP address. Port 22 will be forwarded to a random port of the external IP address (let’s say port 4444 on 203.0.113.1). You have a local user account called cindy. Another server in the LAN has 192.168.1.3 as its internal IP address, has chatserver as its hostname, and let’s say it runs the ejabberd web interface on the default port 5280.

You can once again use the familiar command:

ssh [username]@[external-ip] -p [external-port] -T -L [random-local-port]:[desired-server-in-the-lan]:[desired-server-port]

  • The username would be cindy
  • The external IP would be 203.0.113.1
  • The external port would be 4444
  • The random local port can be anything, but I used 5599
  • The other server in the LAN would be chatserver.local (or 192.168.1.3)
  • The other server’s port would be 5280

That would mean:

ssh cindy@203.0.113.1 -p 4444 -T -L 5599:192.168.1.3:5280


Using SSH as a SOCKS host

Sometimes, you can’t get by with mapping only a single port. (Think about SSL redirects, for instance.) In that case, you can use SSH to set up a SOCKS host and tell your browser to connect through that as a proxy.

Let’s say that you want to access multiple things in the LAN of a remote machine you have access to.

You can set up a tunnel by using this command:

ssh -N -D [random-local-port] [username]@[external-ip] -p [external-port]

Let’s use the same variables as before:

  • The username would be cindy
  • The external IP would be 203.0.113.1
  • The external port would be 4444
  • The random local port can be anything, but I used 5599

That would mean:

ssh -N -D 5599 cindy@203.0.113.1 -p 4444

You can then tell your browser to use the SOCKS host on localhost:5599.

With Firefox, you go to Preferences > Advanced > Network > Settings > Manual proxy configuration.

With Chromium, the system proxy settings are inherited. So you’ll have to set up Network Manager and apply the proxy system-wide.
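
If you only need the proxy for a one-off session, Chromium can also (assuming a reasonably recent version) be started with an explicit proxy switch instead of touching the system settings:

chromium-browser --proxy-server="socks5://localhost:5599"   # plain "chromium" on some distributions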

Don’t forget to undo the settings once you’re done!


The local admin is mean! They refuse to forward a port!

Sometimes, the local admin is mean and refuses to forward a port. In that case, you won’t be able to connect to any SSH server behind NAT. But fear not! SSH can still help you if the local admin cannot.

However, you’re going to need some infrastructure on your side. At least a publicly accessible server running SSH and preferably nothing else. Sure, if you have a static IP or something like DynDNS you could use your own computer at home, but I really wouldn’t recommend it.

In the best case scenario, you rent a very basic VPS somewhere, and do a minimal GNU/Linux server install on it. Install openssh-server and perhaps a firewall that blocks all traffic except port 22. Create a user account on that VPS with as few rights as possible. (Just its own home directory, and keep it out of the sudoers file.)
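
As a rough sketch (package names assume a Debian-based VPS and ufw as the firewall; the user name is simply the one used in the example below):

sudo apt-get install openssh-server ufw
sudo ufw default deny incoming
sudo ufw allow 22/tcp
sudo ufw enable
sudo adduser adminsaremean   # low-privilege account, not in the sudoers file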

In this example, let’s say that I have a VPS somewhere with the IP address 203.0.113.10, and that I bought a domain name for it: proxy.example.org. I made a user account called adminsaremean.

Anyone can connect to this server if they guess the password (so choose a strong one). But there’s literally nothing interesting on the server, and the user account can’t do anything interesting either. However, it can be very valuable as a reverse SSH proxy server.

So how does one use a reverse SSH proxy server? Well, from the client side (the machine that wasn’t allowed to have its port forwarded), you connect to the publicly accessible proxy server and bind a random port on it back to the client’s own SSH port. Let’s say the client’s user account is called hippopotamus. Then, you connect to the proxy server from your computer, and from that shell you connect to localhost on the random port you chose.

First things first: connecting to your proxy. You’ll need to do this:

ssh -N -R [random-port]:localhost:22 [username]@[proxy-server]

  • The random port can be anything, but I chose 1234
  • The username will be adminsaremean
  • The proxy server will be proxy.example.org

That would mean:

ssh -N -R 1234:localhost:22 adminsaremean@proxy.example.org

Running this command will show nothing interesting. That’s exactly what we want.

Now, on to the server. The client, with user account hippopotamus, will be connected to port 1234, and you want to open a secure shell from your server to that client. This can be done with the following command (from the proxy server):

ssh [username]@localhost -p [random-port]

  • The username will be hippopotamus
  • The random port will be 1234

That would mean:

ssh hippopotamus@localhost -p 1234

Now you’ll be logged in to the remote client, even though it’s behind NAT and maybe even a firewall, no matter how mean the local admin is.


So how do I connect to a web interface now? This is really hard!

Connecting to a web interface this way is really hard. You’ll need to create two tunnels: one from the client to the server, and one from your computer to the server.

So, we’re still connected from the client to port 1234 on the server. And we want, for instance, access to the CUPS admin page on the client.

In order to do so, you’ll first need a tunnel from port 631 on the client to a random port on the proxy server. Let’s use 5678 in this example:

ssh hippopotamus@localhost -p 1234 -T -L 5678:localhost:631

If you’re hardcore, you can now use Lynx or w3m to connect to localhost:5678 on your proxy server. But that’s not ideal, of course.

So from your computer, you connect to the proxy server and bind port 5678 (which is forwarded to 631 on the client) to another random local port. Let’s use 6070 in this example:

ssh adminsaremean@proxy.example.org -T -L 6070:localhost:5678

This would mean that port 6070 on your computer is forwarded to port 5678 on the proxy server, which in turn is forwarded to port 631 on localhost:1234 on the proxy server, which is actually port 22 on the remote client, meaning that the localhost:631 on the proxy server actually wasn’t the local host! Think about this right before you go to bed!


I have the remote client set up to only accept the public key on my laptop. Will this still work securely?

Because you’re using reverse SSH to connect to the proxy server, this connection will be outgoing rather than incoming from the remote client’s end. However, the client will treat any connection coming back through the random port on the proxy server as an incoming SSH connection. So you will have to create an SSH key pair on the proxy server with a very strong passphrase, and add its public key to ~/.ssh/authorized_keys on the remote client.
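
In practice (a minimal sketch using standard OpenSSH tools; the file name is just an example), that could be done while the reverse tunnel from the previous section is up:

# On the proxy server: create a key pair protected by a strong passphrase
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_reverse

# Copy the public key to the remote client through the reverse tunnel
ssh-copy-id -i ~/.ssh/id_rsa_reverse.pub -p 1234 hippopotamus@localhost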


I don’t want my proxy server to allow password authentication. Will that work?

Sure. But you’ll need to create SSH keys for every remote client you ever want to manage and every computer you ever want to use to remotely manage those clients, and add those pubkeys to ~/.ssh/authorized_keys on your proxy server.


But if I use pubkey authentication, any clever user on the client side could log in to any other client, which may not even be theirs!

First off, there is no such thing as a clever user. ;)
Jokes aside: they’d still need your pubkey password, and you should of course never cache that on the proxy server.

But if this really bothers you, you can also create a separate user account on the proxy server for each client, each with its own SSH keys and authorized_keys file. They won’t have access to each other’s home directories, but you will have a lot of work storing all those passwords and adding all those keys.

Wednesday, 26 August 2015

MAGIX: Rescue Your Videotapes!

PB's blog » en | 22:48, Wednesday, 26 August 2015

Recently I’ve been getting an increasing number of requests asking whether the package offered by MAGIX for digitizing and archiving old video tapes is any good.

As a technician at the national video archive, I probably have certain demands regarding the digitization quality and archival suitability of video material which might seem like “overkill” for most end-users (e.g. lossless codecs).

When it comes to producing digital copies of analog videos, this product can hardly be beaten on price.
Yet I strongly question the long-term archival properties (and quality) of the output formats of the MAGIX suite.

I’ve done some research on this MAGIX product, and I’ve encountered several things that one should know and consider before buying it.

Summary / Overview

  • Questionable quality of the analog/digital (A/D) converter
  • Exclusively lossy output formats
  • WMV, as well as optical carriers (DVD, Blu-ray, etc.), are absolutely inadequate as archive formats
  • Unclear number of generation losses during record/editing/export

Possible alternative hybrid solution:
Use the MAGIX A/D converter stick with VirtualDub (see below) and FFV1 or DV as the video codec, and uncompressed PCM for audio.
Store the original capture files on hard disks, and use DVD/Blu-ray only for access copies.
This enables you to re-create DVDs/Blu-rays later on in case they decay, or to convert your videos in the future to “then-common” video formats for viewing – without additional generation loss.

Supported audio/video formats:
Under “Technical Data > Data Formats”, a list of supported video/audio/image formats is given.

The formats there are only listed by their file suffixes (e.g. AVI, MOV, MP3, OGG). This may seem simpler at first sight, but it lacks concrete information about the codecs that are actually supported.

A video file always consists (at least) of 3 components:

  1. container
  2. video-codec
  3. audio-codec

Despite the fact that the list of video formats given by MAGIX is a mixture of container formats (AVI/MOV) and codecs (DV, MPEG-1, MPEG-2, WMV), the only format for file export is “WMV(HD)”.
Additionally, no information is given about how the audio is stored.

The list of audio formats only lists lossy (!) codecs like MP3/WMA/Vorbis for import.

Analog-digital (A/D) converter:
The analog video signal is converted using a small, cute USB stick with video inputs.
I was not able to find any publicly accessible information about the technical details of this converter.

Open questions:

  • Does the A/D converter provide the uncompressed digital signal – or only the already lossy compressed version?
  • Same question for audio…
  • Does it preserve video fields accurately?
  • Does it preserve the color information, or does it reduce the chroma subsampling (e.g. from 4:2:2 to 4:2:0)?

Although the A/D converter stick seems to be usable by other video applications (e.g. VirtualDub), it’s unclear whether recording to another codec already involves a generation loss.
This would be very relevant for any later post-editing (e.g. cropping, color corrections, audio corrections, etc.), since one would have to accept at least 3 generation losses:

  • Loss #1: Lossy compression in A/D converter
  • Loss #2: Image-/audio-recording in lossy codec (WMV/WMA?)
  • Loss #3: Export to a lossy format (DVD, Blu-Ray, etc)

UPDATE (26.Aug.2015):
I was told by a user that the program provided an export to MPEG-2 in recent versions.
Unfortunately, this “program bug” was “fixed” by MAGIX.

Quote MAGIX support (Translated from German to English):

“If MPEG-2 was listed as export option, that is a bug which was corrected automatically with the next program start.
So there is no possibility to export the video as MPEG-2. Additionally, this function can also not be activated.”

MAGIX support also gave us the tip that the MPEG stream generated by the A/D converter stick would temporarily be stored in the “My Record” folder.
With this option you would at least have had only one generation loss.

Unfortunately, this “program bug” was also “fixed”:
The original MPEG-2 is not accessible any more – the video is now transcoded directly to a 16:9 WMV/WMA format.

So in the current version, this adds another 2 quality losses:

  • Loss #4: Interpolation by upscaling from SD to HD (720×576 to 1920×1080)
  • Loss #5: If there are no black borders added left/right, then cropping occurs

I hope that at least the audio is stored lossless (e.g. uncompressed PCM) before export.
Yet, this is uncertain.

For those who would (still) like to copy and preserve their videos in a safe(r) way, I’ve written down some options and background information here:

Formats (more) suitable for archiving:
The best option, of course, is if you can store audio/video uncompressed or in a mathematically lossless codec (e.g. FFV1).
Currently, this might still be an issue for end-users, due to the rather huge data size (compared to lossy).
For example, FFV1/PCM in AVI requires about 90 GB for 4h VHS (~370 MB/min).

Of course it’s tempting to have smaller files – but that has its price.

If one does end up using lossy compression, any codec other than Windows Media is to be preferred. Due to its Microsoft origin, WMV/WMA is strongly bound to Windows, and due to this format’s licensing and patent obstacles it is unclear whether (and under which conditions) one will be able to open these formats in the future.
For non-Windows environments, the license costs for creators of applications/devices that can play (or convert) WMV are currently 300,000 USD per year.
See the “Windows Media Components Product Agreement”, page 12.

The best compromise would probably be “DV” (=lossy, but a widespread open standard) as the video codec and PCM (=uncompressed, essentially “WAV”) for audio, in an AVI container. That would be approximately 55 GB for 4h of VHS (~230 MB/min).
Within a reasonable value-for-money range, an A/D converter like the ADVC55 might make sense.

As the recording program, “VirtualDub” could be used.
Audio should be recorded uncompressed (PCM) – and also stored in that format in the video container file.
Presets for recording DV in the most exact way can be downloaded here.
These settings are part of DVA-Profession, and are used at the Austrian Mediathek (the National Audio/Video Archive).

A general rule of thumb for the long-term preservation of media formats is that an implementation of an open format/standard under a Free Software license (e.g. GPL) has the highest chance of being “virtually immortal”.

For example, if a media format is supported by the tool “FFmpeg”, your chances are very good :)
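
As a rough illustration (assuming a reasonably recent FFmpeg and a capture file called capture.avi), re-encoding a capture to FFV1 video with uncompressed PCM audio could look like this:

# Mathematically lossless FFV1 (version 3) video, uncompressed 16-bit PCM audio,
# stored in a Matroska container for archiving.
ffmpeg -i capture.avi -c:v ffv1 -level 3 -c:a pcm_s16le archive.mkv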

DVD/Blu-ray as physical carrier:
Here is a short quote from the product page (translated from German to English):

“Digital is better: Advantages of DVDs & Blu-ray Discs In addition to the large disk space, long service life, and small size, they do not have any sensitive mechanical components, making them ideal for archiving!”

Of course, the part about the mechanical components is correct, but is it really “ideal for archiving”?
Theoretically “yes” – practically “no”.

The times when archives stored everything on optical carriers are long gone, mainly because it quickly became apparent that self-burned optical carriers are far more fragile and short-lived than analog material, hard disks or magnetic tapes.
Burned discs that are no longer readable without errors after 2 years are not the exception. And the higher the density of the carrier, the more fragile the data stored on it is, of course…

Furthermore, one should distinguish between a “data disc” and a “video disc” – this applies to CD as well as DVD and Blu-ray.
If one stores videos on a video disc (e.g. Video-DVD), the audio/video format – including resolution and aspect ratio – is mostly fixed.
Currently, these are exclusively lossy video codecs:

  • CD: MPEG-1
  • DVD: MPEG-2
  • Blu-Ray: MPEG-4 (H.264)

At the moment, there is no perfect carrier – especially not for digital data.
For the time being, I would suggest storing the originally captured files on hard disks – and keeping a copy on DVD/Blu-ray only as an access copy.
This allows one to choose the archiving format separately from the access format, increasing one’s chances of converting the videos for viewing more easily and without additional generation loss.

Aspect ratio:
Analog video is stored in “standard definition” (SD) resolution, and was always recorded with an aspect ratio of 4:3 – and that is also how the image is stored on the tape.

In screenshots on the MAGIX website, however, the video is exclusively displayed in 16:9:

Even if you have black borders at top/bottom (=“letterbox”), the information on the tape originally isn’t widescreen at all.

In Europe, PAL is the TV/video standard.
If you digitize PAL SD video, that usually results in a pixel resolution of 720×576, which – assuming a square pixel aspect ratio (PAR) – corresponds to a 5:4 storage aspect ratio (SAR).

Even if one originally recorded 16:9 on e.g. DV (=“Digital Video”) or Digital Betacam, it is stored anamorphically with 720×576 pixels (=5:4) – also not widescreen.

If 4:3 is stored as 16:9 full screen, information is always lost.
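A quick back-of-the-envelope calculation, using the commonly quoted simplified figures, shows why:

    from fractions import Fraction

    sar = Fraction(720, 576)        # storage aspect ratio of a PAL SD frame: 5/4
    dar_4_3 = Fraction(4, 3)        # display aspect ratio of the analog 4:3 picture
    par = dar_4_3 / sar             # pixel aspect ratio needed to show 720x576 as 4:3
    print(sar, par)                 # 5/4 and 16/15 -> PAL 4:3 pixels are not square

    # Filling a 16:9 frame (e.g. 1920x1080) with a 4:3 picture by matching the
    # width means cropping: only 3/4 of the original image height survives.
    dar_16_9 = Fraction(16, 9)
    print(dar_4_3 / dar_16_9)       # 3/4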

How does MAGIX’s “Rescue Your Videotapes” handle that?
Does it automatically crop, or could one have black borders left/right (=”pillarbox“) instead?
That would at least preserve the video image losslessly for archiving, in case one wants to convert to a 16:9 aspect ratio for viewing (e.g. Blu-ray/HD).

I don’t even dare to ask how fields (half-images of a frame) and/or deinterlacing are handled…

Just a short remark regarding the “MXV” format for video:
In my work at the national video archive, we encounter a wide variety of video formats as sources. Until today, MXV was unknown to me.

During my research, I wasn’t able to find any technical details about it, except for these:

  • It’s a MAGIX-internal format, probably a container or project format
  • There are probably no tools (except MAGIX’ own) that can open/convert it
  • It seems to store video in lossy formats only (MPEG-2)
  • Which audio format it uses is completely unclear. PCM? MP3? WMA? MXA?

If you have stored your videos in this format, I suggest exporting/converting them to an open format as soon as possible. It is completely unclear whether (and with what) one will be able to open MXV at all in the future.
Unfortunately, it is not impossible that one loses quality during that conversion (due to additional, lossy compression during export).

Please don’t send complaints to me, but to MAGIX ;)

A Long Voyage into Groupware

Paul Boddie's Free Software-related blog » English | 15:25, Wednesday, 26 August 2015

A while ago, I noted that I had moved on from attempting to contribute to Kolab and had started to explore other ways of providing groupware through existing infrastructure options. Once upon a time, I had hoped that I could contribute to Kolab on the basis of things I mostly knew about, whilst being able to rely on the solution itself (and those who made it) to take care of the things I didn’t really know very much about.

But that just didn’t work out: I ultimately had to confront issues of reliably configuring Postfix, Cyrus, 389 Directory Server, and a variety of other things. Of course, I would have preferred it if they had just worked so that I could have got on with doing more interesting things.

Now, I understand that in order to pitch a total solution for someone’s groupware needs, one has to integrate various things, and to simplify that integration and to lower the accompanying support burden, it can help to make various choices on behalf of potential users. After all, if they don’t have a strong opinion about what kind of mail storage solution they should be using, or how their user database should be managed, it can save them from having to think about such things.

One certainly doesn’t want to tell potential users or customers that they first have to go off, read some “how to” documents, get some things working, and then come back and try and figure out how to integrate everything. If they were comfortable with all that, maybe they would have done it all already.

And one can also argue about whether Kolab augments and re-uses or merely replaces existing infrastructure. If the recommendation is that upon adopting Kolab, one replaces an existing Postfix installation with one that Kolab provides in one form or another, then maybe it is possible to re-use the infrastructure that is already there.

It is harder to make that case if one is already using something else like Exim, however, because Kolab doesn’t support Exim. Then, there is the matter of how those components are used in a comprehensive groupware solution. Despite people’s existing experiences with those components, it can quickly become a case of replacing the known with the unknown: configuring them to identify users of the system in a different way, or to store messages in a different fashion, and so on.

Incremental Investments

I don’t have such prior infrastructure investments, of course. And setting up an environment to experiment with such things didn’t involve replacing anything. But it is still worthwhile considering what kind of incremental changes are required to provide groupware capabilities to an existing e-mail infrastructure. After all, many of the concerns involved are orthogonal:

  • Where the mail gets stored has little to do with how recipients are identified
  • How recipients are identified has little to do with how the mail is sent and received
  • How recipients actually view their messages and calendars has little to do with any of the above

Where components must talk to one another, we have the benefit of standards and standardised protocols and interfaces. And we also have a choice amongst these things as well.

So, what if someone has a mail server delivering messages to local users with traditional Unix mailboxes? Does it make sense for them to schedule events and appointments via e-mail? Must they migrate to another mail storage solution? Do they have to start using LDAP to identify each other?

Ideally, such potential users should be able to retain most of their configuration investments, adding the minimum necessary to gain the new functionality, which in this case would merely be the ability to communicate and coordinate event information. Never mind the idea that potential users would be “better off” adopting LDAP to do so, or whichever other peripheral technology seems attractive for some kinds of users, because it is “good practice” or “good experience for the enterprise world” and that they “might as well do it now”.

The difference between an easily-approachable solution and one where people give you a long list of chores to do first (involving things that are supposedly good for you) is more or less equivalent to the difference between you trying out a groupware solution or just not bothering with groupware features at all. So, it really does make sense as someone providing a solution to make things as easy as possible for people to get started, instead of effectively turning them away at the door.

Some Distractions

But first, let us address some of the distractions that usually enter the picture. A while ago, I had the displeasure of being confronted with the notion of “integrated e-mail and calendar” solutions, and it turned out that such terminology is coined as a form of euphemism for adopting proprietary, vendor-controlled products that offer some kind of lifestyle validation for people with relatively limited imagination or experience: another aspirational possession to acquire, and with it the gradual corruption of organisations with products that shun interoperability and ultimately limit flexibility and choice.

When standards-based calendaring has always involved e-mail, such talk of “integrated calendars” can most charitably be regarded as a clumsy way of asking for something else, namely an e-mail client that also shows calendars, and in the above case, the one that various people already happened to be using that they wanted to impose on everyone else as well. But the historical reality of the integration of calendars and e-mail has always involved e-mails inviting people to events, and those people receiving and responding to those invitation e-mails. That is all there is to it!

But in recent times, the way in which people’s calendars are managed and the way in which notifications about events are produced has come to involve “a server”. Now, some people believe that using a calendar must involve “a server” and that organising events must also involve doing so “on the server”, and that if one is going to start inviting people to things then they must also be present “on the same server”, but it is clear from the standards documents that “a server” was never a prerequisite for anything: they define mechanisms for scheduling based on peer-to-peer interactions through some unspecified medium, with e-mail as a specific medium defined in a standards document of its own.

Having “a server” is, of course, a convenient way for the big proprietary software vendors to sell their “big server” software, particularly if it encourages the customer to corrupt various other organisations with which they do business, but let us ignore that particular area of misbehaviour and consider the technical and organisational justifications for “the server”. And here, “server” does not mean a mail server, with all the asynchronous exchanges of information that the mail system brings with it: it is the Web server, at least in the standards-adhering realm, that is usually the kind of server being proposed.

Computer components

The Superfluous Server

Given that you can send and receive messages containing events and other calendar-related things, and given that you can organise attendance of events over e-mail, what would the benefit of another kind of server be, exactly? Well, given that you might store your messages on a server supporting POP or IMAP (in the standards-employing realm, that is), one argument is that you might need somewhere to store your calendar in a similar way.

But aside from the need for messages to be queued somewhere while they await delivery, there is no requirement for messages to stay on the server. Indeed, POP server usage typically involves downloading messages rather than leaving them on the server. Similarly, one could store and access calendar information locally rather than having to go and ask a server about event details all the time. Various programs have supported such things for a long time.

Another argument for a server involves it having the job of maintaining a central repository of calendar and event details, where the “global knowledge” it has about everybody’s schedules can be used for maximum effect. So, if someone is planning a meeting and wants to know when the potential participants are available, they can instantly look such availability information up and decide on a time that is likely to be acceptable to everyone.

Now, in principle, this latter idea of being able to get everybody’s availability information quickly is rather compelling. But although a central repository of calendars could provide such information, it does not necessarily mean that a centralised calendar server is a prerequisite for convenient access to availability data. Indeed, the exchange of such data – referred to as “free/busy” data in the various standards – was defined for e-mail (and in general) at the end of the last century, although e-mail clients that can handle calendar data typically neglect to support such data exchanges, perhaps because it can be argued that e-mail might not obtain availability information quickly enough for someone impatiently scheduling a meeting.

But then again, the routine sharing of such information over e-mail, plus the caching of such data once received, would eliminate most legitimate concerns about being able to get at it quickly enough. And at least this mechanism would facilitate the sharing of such data between organisations, whereas people relying on different servers for such services might not be able to ask each other’s servers about such things (unless they have first implemented exotic and demanding mechanisms to do so). Even if a quick-to-answer service provided by, say, a Web server is desirable, there is nothing to stop e-mail programs from publishing availability details directly to the server over the Web and downloading them over the Web. Indeed, this has been done in certain calendar-capable e-mail clients for some time, too, and we will return to this later.

And so, this brings us to perhaps the real reason why some people regard a server as attractive: to have all the data residing in one place where it can potentially be inspected by people in an organisation who feel that they need to know what everyone is doing. Of course, there might be other benefits: backing up the data would involve accessing one location in the system architecture instead of potentially many different places, and thus it might avoid the need for a more thorough backup regime (that organisations might actually have, anyway). But the temptation to look and even change people’s schedules directly – invite them all to a mandatory meeting without asking, for example – is too great for some kinds of leadership.

With few truly-compelling reasons for a centralised server approach, it is interesting to see that many Free Software calendar solutions do actually use the server-centric CalDAV standard. Maybe it is just the way of the world that Web-centric solutions proliferate, requiring additional standardisation to cover situations already standardised in general and for e-mail specifically. There are also solutions, Free Software and otherwise, that may or may not provide CalDAV support but which depend on calendar details being stored in IMAP-accessible mail stores: Kolab does this, but also provides a CalDAV front-end, largely for the benefit of mobile and third-party clients.

Decentralisation on Demand

Ignoring, then, the supposed requirement of a central all-knowing server, and just going along with the e-mail infrastructure we already have, we do actually have the basis for a usable calendar environment already, more or less:

  • People can share events with each other (using iCalendar)
  • People can schedule events amongst themselves (using iTIP, specifically iMIP)
  • People can find out each other’s availability to make the scheduling more efficient (preferably using iTIP but also in other ways)

Doing it this way also allows users to opt out of any centralised mechanisms – even if only provided for convenience – that are coordinating calendar-related activities in any given organisation. If someone wants to manage their own calendar locally and not have anything in the infrastructure trying to help them, they should be able to take that route. Of course, this requires them to have capable-enough software to handle calendar data, which can be something of an issue.
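To make the wire format concrete, here is a rough sketch of such an invitation built in Python – addresses, UID and times are made up, and any standards-compliant client or library could of course produce the same thing:

    # Sketch of an iMIP invitation (iCalendar per RFC 5545, carried over e-mail
    # per RFC 6047): an ordinary message whose text/calendar part contains a
    # METHOD:REQUEST event.
    from email.message import EmailMessage
    import smtplib

    ics = "\r\n".join([                 # iCalendar uses CRLF line endings
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//groupware-sketch//EN",
        "METHOD:REQUEST",
        "BEGIN:VEVENT",
        "UID:20150826T120000Z-1234@example.org",
        "DTSTAMP:20150826T120000Z",
        "DTSTART:20150901T090000Z",
        "DTEND:20150901T100000Z",
        "SUMMARY:Project meeting",
        "ORGANIZER:mailto:alice@example.org",
        "ATTENDEE;PARTSTAT=NEEDS-ACTION;RSVP=TRUE:mailto:bob@example.org",
        "END:VEVENT",
        "END:VCALENDAR",
    ]) + "\r\n"

    msg = EmailMessage()
    msg["From"] = "alice@example.org"
    msg["To"] = "bob@example.org"
    msg["Subject"] = "Invitation: Project meeting"
    msg.set_content("You are invited to the project meeting.")
    # The text/calendar part; method=REQUEST marks this as an invitation.
    msg.add_attachment(ics.encode("utf-8"), maintype="text", subtype="calendar",
                       params={"method": "REQUEST"}, filename="invite.ics")

    # smtplib.SMTP("localhost").send_message(msg)   # hand it to the local MTA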

That Availability Problem Mentioned Earlier

For instance, finding an e-mail program that knows how to send requests for free/busy information is a challenge, even though there are programs (possibly augmented with add-ons) that can send and understand events and other kinds of objects. In such circumstances, workarounds are required: one that I have implemented for the Lightning add-on for Thunderbird (or the Iceowl add-on for Icedove, if you use Debian) fetches free/busy details from a Web site, and it is also able to look up the necessary location of those details using LDAP. So, the resulting workflow looks like this:

  1. Create or open an event.
  2. Add someone to the list of people invited to that event.
  3. View that person’s availability.
  4. Lightning now uses the LDAP database to discover the free/busy URL.
  5. It then visits the free/busy URL and obtains the data.
  6. Finally, it presents the data in the availability panel.

Without LDAP, the free/busy URL could be obtained from a vCard property instead. In case you’re wondering, all of this is actually standardised or at least formalised to the level of a standard (for LDAP and for vCard).
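A rough sketch of that lookup, assuming the ldap3 Python module, a directory at ldap.example.org carrying the calFBURL attribute from RFC 2739, and made-up names throughout:

    # Look up a person's free/busy URL in LDAP (calFBURL, RFC 2739) and fetch
    # the published data over the Web. Host, base DN and address are made up.
    from ldap3 import Server, Connection, ALL
    from urllib.request import urlopen

    def freebusy_for(mail_address):
        server = Server("ldap.example.org", get_info=ALL)
        with Connection(server, auto_bind=True) as conn:
            conn.search("ou=people,dc=example,dc=org",
                        "(mail=%s)" % mail_address,
                        attributes=["calFBURL"])
            if not conn.entries:
                return None
            fb_url = str(conn.entries[0].calFBURL)
        return urlopen(fb_url).read().decode("utf-8")   # the VFREEBUSY data

    print(freebusy_for("bob@example.org"))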

If only I had the patience, I would add support for free/busy message exchange to Thunderbird, just as RFC 6047 would have had us do all along, and then the workflow would look like this:

  1. Create or open an event.
  2. Add someone to the list of people invited to an event.
  3. View that person’s availability.
  4. Lightning now uses the cached free/busy data already received via e-mail for the person, or it could send an e-mail to request it.
  5. It now presents any cached data. If it had to send a request, maybe a response is returned while the dialogue is still open.

Some people might argue that this is simply inadequate for “real-world needs”, but they forget that various calendar clients are likely to ask for availability data from some nominated server in an asynchronous fashion, anyway. That’s how a lot of this software is designed these days – Thunderbird uses callbacks everywhere – and there is no guarantee that a response will be instant.
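For completeness, here is a sketch of what such an e-mail free/busy request would carry: a VFREEBUSY component with METHOD:REQUEST, travelling as a text/calendar part just like the invitation sketched earlier; names and times are again made up:

    # Sketch of the free/busy request described by RFC 6047 and friends.
    vfreebusy_request = "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//groupware-sketch//EN",
        "METHOD:REQUEST",
        "BEGIN:VFREEBUSY",
        "UID:20150826T120500Z-5678@example.org",
        "DTSTAMP:20150826T120500Z",
        "ORGANIZER:mailto:alice@example.org",
        "ATTENDEE:mailto:bob@example.org",
        "DTSTART:20150901T000000Z",
        "DTEND:20150908T000000Z",
        "END:VFREEBUSY",
        "END:VCALENDAR",
    ]) + "\r\n"
    # The recipient (or an automated handler acting for them) would answer with
    # METHOD:REPLY and FREEBUSY lines listing the busy periods in that window.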

Moreover, a request over e-mail to a recipient within the same organisation, which is where one might expect to get someone’s free/busy details almost instantly, could be serviced relatively quickly by an automated mechanism providing such information for those who are comfortable with it doing so. We will look at such automated mechanisms in a moment.

So, there are plenty of acceptable solutions that use different grades of decentralisation without needing to resort to that “big server” approach, if only to help out clients which don’t have the features one would ideally want to use. And there are ways of making the mail infrastructure smarter as well, not just to provide workarounds for clients, but also to provide genuinely useful functionality.

Public holidays

Agents and Automation

Groupware solutions offer more than just a simple means for people to organise things with each other: they also offer the means to coordinate access to organisational resources. Traditionally, resources such as meeting rooms, but potentially anything that could be borrowed or allocated, would have access administered using sign-up sheets and other simple mechanisms, possibly overseen by someone in a secretarial role. Such work can now be done automatically, and if we’re going to do everything via e-mail, the natural point of integrating such work is also within the mail system.

This is, in fact, one of the things that got me interested in Kolab to begin with. Once upon a time, back at the end of my degree studies, my final project concerned mobile software agents: code that was transmitted by e-mail to run once received (in a safe fashion) in order to perform some task. Although we aren’t dealing with mobile code here, the notion still applies that an e-mail is sent to an address in order for a task to be performed by the recipient. Instead of some code sent in the message performing the task, it is the code already deployed and acting as the recipient that determines the effect of the transaction by using the event information given in the message and combining it with the schedule information it already has.

Such agents, acting in response to messages sent to particular e-mail addresses, need knowledge about schedules and policies, but once again it is interesting to consider how much information and how many infrastructure dependencies they really need within a particular environment:

  • Agents can be recipients of e-mail, waiting for booking requests
  • Agents will respond to requests over e-mail
  • Agents can manage their own schedule and availability
  • Other aspects of their operation might require some integration with systems having some organisational knowledge

In other words, we could implement such agents as message handlers operating within the mail infrastructure. Can this be done conveniently? Of course: things like mail filtering happen routinely these days, and many kinds of systems take advantage of such mechanisms so that they can be notified by incoming messages over e-mail. Can this be done in a fairly general way? Certainly: despite the existence of fancy mechanisms involving daemons and sockets, it appears that mail transport agents (MTAs) like Postfix and Exim tend to support the invocation of programs as the least demanding way of handling incoming mail.
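As a minimal illustration of that least demanding route, here is a skeleton handler, assuming the MTA is configured to pipe each incoming message to it on standard input, with the actual calendar logic left as a placeholder:

    #!/usr/bin/env python3
    # Skeleton of a mail handler: the MTA pipes an incoming message to this
    # script on standard input; any text/calendar part is handed to a
    # placeholder function where booking or free/busy logic would live.
    import sys
    from email import policy
    from email.parser import BytesParser

    def handle_calendar_object(ical_text, sender):
        # Placeholder: parse the iCalendar data, consult the schedule, reply.
        sys.stderr.write("calendar object from %s (%d chars)\n"
                         % (sender, len(ical_text)))

    def main():
        msg = BytesParser(policy=policy.default).parse(sys.stdin.buffer)
        sender = msg.get("From", "")
        for part in msg.walk():
            if part.get_content_type() == "text/calendar":
                handle_calendar_object(part.get_content(), sender)

    if __name__ == "__main__":
        main()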

The Missing Pieces

So what specifically is needed to provide calendaring features for e-mail users in an incremental and relatively non-invasive way? If everyone is using e-mail software that understands calendar information and can perform scheduling, the only remaining obstacles are the provision of free/busy data and, for those who need it, the provision of agents for automated scheduling of resources and other typically-inanimate things.

Since those agents are interesting (at least to me), and since they may be implemented as e-mail handler programs, let us first look at what they would do. A conversation with an agent listening to mail on an address representing a resource would work like this (ignoring sanity checks and the potential for mischief):

  1. Someone sends a request to an address to book a resource, whereupon the agent provided by a handler program examines the incoming message.
  2. The agent figures out which periods of time are involved.
  3. The agent then checks its schedule to see if the requested periods are free for the resource.
  4. Where the periods can be booked, the agent replies indicating the “attendance” of the resource (that it reserves the resource). Otherwise, the agent replies “declining” the invitation (and does not reserve the resource).

With the agent needing to maintain a schedule for a resource, it is relatively straightforward for that information to be published in another form as free/busy data. It could be done through the sending of e-mail messages, but it could also be done by putting the data in a location served by a Web server. And so, details of the resource’s availability could be retrieved over the Web by e-mail programs that elect to retrieve such information in that way.
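The heart of steps 3 and 4 is just an interval-overlap check against the schedule the agent keeps for the resource; here is a simplified sketch with made-up data, leaving out the actual iTIP parsing and iMIP reply generation:

    # Simplified booking decision: compare the requested period against the
    # busy periods the agent already holds for the resource.
    from datetime import datetime

    busy_periods = [
        (datetime(2015, 9, 1, 9, 0), datetime(2015, 9, 1, 10, 0)),
        (datetime(2015, 9, 1, 13, 0), datetime(2015, 9, 1, 14, 30)),
    ]

    def is_free(start, end):
        # Free if the requested interval overlaps none of the busy intervals.
        return all(end <= b_start or start >= b_end
                   for b_start, b_end in busy_periods)

    def book(start, end):
        if is_free(start, end):
            busy_periods.append((start, end))   # reserve the resource
            return "ACCEPTED"                   # reply "attending"
        return "DECLINED"                       # reply declining, nothing reserved

    print(book(datetime(2015, 9, 1, 10, 0), datetime(2015, 9, 1, 11, 0)))   # ACCEPTED
    print(book(datetime(2015, 9, 1, 13, 30), datetime(2015, 9, 1, 15, 0)))  # DECLINED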

But what about the people who are invited to events? If their mail software cannot prepare free/busy descriptions and send such information to other people, how might their availability be determined? Well, using similar techniques to those employed during the above conversation, we can seek to deduce the necessary information by providing a handler program that examines outgoing messages:

  1. Someone sends a request to schedule an event.
  2. The request is sent to its recipients. Meanwhile, it is inspected by a handler program that determines the periods of time involved and the sender’s involvement in those periods.
  3. If the sender is attending the event, the program notes the sender’s “busy” status for the affected periods.

Similarly, when a person responds to a request, they will indicate their attendance and thus “busy” status for the affected periods. And by inspecting the outgoing response, the handler will get to know whether the person is going to be busy or not during those periods. And as a result, the handler is in a position to publish this data, either over e-mail or on the Web.
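A correspondingly naive sketch of that outgoing-message inspection – line-based and deliberately crude; a real handler would use a proper iCalendar parser and compare attendee addresses against the sender:

    # Record "busy" periods deduced from an outgoing request or reply, so they
    # can later be published as free/busy data over e-mail or the Web.
    def note_busy_from_outgoing(ical_text, busy_store):
        start = end = None
        attending = False
        for line in ical_text.splitlines():
            if line.startswith("DTSTART"):
                start = line.split(":", 1)[1]
            elif line.startswith("DTEND"):
                end = line.split(":", 1)[1]
            elif line.startswith("ATTENDEE") and "PARTSTAT=ACCEPTED" in line:
                attending = True               # a reply accepting an invitation
            elif line == "METHOD:REQUEST":
                attending = True               # crude: assume the organiser attends
        if attending and start and end:
            busy_store.append((start, end))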

Mail handler programs can also be used to act upon messages received by individuals, too, just as is done for resources, and so a handler could automatically respond to e-mail requests for a person’s free/busy details (if that person chose to allow this). Such programs could even help support separate calendar management interfaces for those people whose mail software doesn’t understand anything at all about calendars and events.

Lifting materials for rooftop building activities

Building on Top

So, by just adding a few handler programs to a mail system, we can support the following:

  • Free/busy publishing and sharing for people whose software doesn’t support it already
  • Autonomous agents managing resource availability
  • Calendar interfaces for people without calendar-aware mail programs

Beyond some existing part of the mail system deciding who can receive mail and telling these programs about it, they do not need to care about how an organisation’s user information is managed. And through standardised interfaces, these programs can send messages off for storage without knowing what kind of system is involved in performing that task.

With such an approach, one can dip one’s toe into the ocean of possibilities and gradually paddle into deeper waters, instead of having to commit to the triathlon that multi-system configuration can often turn out to be. There will always be configuration issues, and help will inevitably be required to deal with them, but they will hopefully not be bound up in one big package that leads to the cry for help of “my groupware server doesn’t work any more, what has changed since I last configured it?” that one risks with solutions that try to solve every problem – real or imagined – all at the same time.

I don’t think things like calendaring should have to be introduced with a big fanfare, lots of change, a new “big box” product introducing different system components, and a stiff dose of upheaval for administrators and end-users. So, I intend to make my work available around this approach to groupware, not because I necessarily think it is superior to anything else, but because Free Software development should involve having a conversation about the different kinds of solutions that might meet user needs: a conversation that I suspect hasn’t really been had, or which ended in jeering about e-mail being a dead technology, replaced by more fashionable “social” or “responsive” technologies; a bizarre conclusion indeed when you aren’t even able to get an account with most fancy social networking services without an e-mail address.

It is possible that no-one but myself sees any merit in the strategy I have described, but maybe certain elements might prove useful or educational to others interested in similar things. And perhaps groupware will be less mysterious and more mundane as a result: not something that is exclusive to fancy cloud services, but something that is just another tiny part of the software available through each user’s and each organisation’s chosen operating system distribution, doing the hard work and not making a fuss about it. Which, like e-mail, is what it probably should have been all along.
