Planet Fellowship (en)

Saturday, 24 September 2016

Azure from Debian

Planet Fsfe on Iain R. Learmonth | 22:03, Saturday, 24 September 2016

Around a week ago, I started to play with programmatically controlling Azure. I needed to create and destroy a bunch of VMs over and over again, and this seemed like something I would want to automate once instead of doing manually and repeatedly. I started to look into the azure-sdk-for-python and mentioned in #debian-python that I wanted to look into this. ardumont from Software Heritage noticed this and was already planning to package azure-storage-python. We joined forces and started a packaging team for Azure-related software.

I spoke with the upstream developer of the azure-sdk-for-python and he pointed me towards azure-cli. It looked to me like this fit my use case better than the SDK alone, as it had the high-level commands I was looking for.

In the space of just under a week, ardumont and I have now packaged: python-msrest (#838121), python-msrestazure (#838122), python-azure (#838101), python-azure-storage (#838135), python-adal (#838716), python-applicationinsights (#838717) and finally azure-cli (#838708). Some of these packages are still in the NEW queue at the time I’m writing this, but I don’t foresee any issues with these packages entering unstable.

azure-cli, as we have packaged it, is the new Python-based CLI for Azure. The Microsoft developers gave it the tagline of “our next generation multi-platform command line experience for Azure”. In the short time I’ve been using it, I’ve been very impressed with it.

In order to set it up initially, you have to configure a couple of defaults using az configure. After that, you need to run az login, which again is an entirely painless process as long as you have a web browser handy in order to perform the login.
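
For reference, that initial setup boils down to just these two commands (a minimal sketch; the exact prompts and defaults depend on your azure-cli version):

az configure   # interactively pick defaults such as the output format
az login       # follow the browser-based login flow it prints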

After those two steps, you’re only two commands away from deploying a Debian virtual machine:

az resource group create -n testgroup -l "West US"
az vm create -n testvm -g testgroup --image credativ:Debian:8:latest --authentication-type ssh

This will create a resource group, and then create a VM within that resource group with a user automatically created from your current username and your SSH public key (~/.ssh/id_rsa.pub) automatically installed. Once it returns the IP address, you can SSH in straight away.
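
The tail end of a session might look something like this (a rough sketch: the IP address is a placeholder for what az vm create prints, and the exact delete syntax may differ between azure-cli releases):

ssh <ip-address-from-az-vm-create>        # the user matches your local username, key already installed
az resource group delete -n testgroup     # tear everything down again when you are done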

Looking forward to some next steps for Debian on Azure, I’d like to get images built for Azure using vmdebootstrap, and I’ll be exploring this in the lead-up to, and at, the upcoming vmdebootstrap sprint in Cambridge, UK later in the year (still being organised).

Friday, 23 September 2016

The perfect Btrfs setup for a server

Seravo | 08:42, Friday, 23 September 2016

Btrfs is probably the most modern filesystem of all widely used filesystems on Linux. In this article we explain how to use Btrfs as the only filesystem on a server machine, and how that enables some sweet capabilities, like very resilient RAID-1, flexible adding or replacing of disk drives, using snapshots for quick backups and […]

Thursday, 15 September 2016

FSFE Summit 2016: See you next year in Eberswalde

Florian Snows Blog » en | 06:13, Thursday, 15 September 2016

Matthias during the opening

From September 2 through 4, the FSFE Summit 2016 took place at the BCC in Berlin, Germany. It was part of QtCon and thus multiple Free Software communities had the chance to meet and exchange ideas.

The audience thanked Cellini during the opening for organizing the summit

Just like at FOSDEM, a very important part for me was the social component at the summit. Meeting new people, talking to people I only know from mailing lists, seeing people again that I met at other events, and experiencing a sense of community that goes beyond what I can usually see at the local level, is a great experience and an amazing motivation to keep on going.

So when I arrived on Thursday, the first thing I did was join everyone from the FSFE office in Berlin for lunch. After that, I was allowed to work from the FSFE office a little bit to test a few more things for the blog migration. This is where I remembered that I had forgotten to sign up as a volunteer, but Cellini was kind enough to still let me do that and explained the basics of what to do.

André’s talk about translations…

Of course, I also saw some great talks at the summit. Cryptie and André gave a good overview of the new translation system that Julien set up and that is currently going through extensive testing by some translation teams.

…together with Cryptie

There was a very interesting talk about the anthropology of Free Software. When I find the time, I study philosophy (usually just by myself) and it was really nice to be able to ask a question about transcendentalism (Kant and Fichte) and get the impression that not only the speaker understands the question, but also large parts of the audience. Not that a question needs an audience, but it is nice to know other people may have thought about similar ideas because that also strengthens the sense of community.

Christian presenting the Plussy display

Christian, also a coordinator of the local group in Franconia, gave a talk about the Plussy booth attractor he designed and built and he had a chance to make one for the group in Frankfurt right after the summit.

The booth with some FSFE staff
A lot of merchandise…

The Plussy booth attractor was successful at the FSFE booth at the summit as well. Christian decided it would be nice to use it there and a lot of people came by to ask about it. Helping out at the booth was a really great experience as well. While I am not as good a salesman as some people with more experience (Cryptie), I immensely enjoyed talking to people and explaining Free Software and its terminology.

…at the booth

At the booth, I also found an upgrade for my Libreboot notebook: a wireless network card from Tehnoetic that supports both 5 GHz and 2.4 GHz.

In between the talks, the BCC served really amazing food that not only looked awesome, but was also incredibly delicious. That is why a small group of us sat together over lunch after the talk about women in Free Software and we discussed how our communities can be more inclusive and how some communities are already doing a good job at that, but others are not as successful. I found it very interesting that in some countries, women are in the majority in IT jobs and the reason behind that appears to be a combination of economic pressure and remnants of the communist idea that women are workers as well and need to be just as productive as men.

Jonas talked about the history of the FSFE

On Saturday, the FSFE celebrated its 15th anniversary. There was an event at the BCC in which past and present presidents gave an overview of their work and shared personal experiences they had in the last 15 years. Jonas, the executive director of the FSFE, also shared some interesting stories of his involvement. After that, we went on to C-Base for the actual celebration with food and drinks. Unfortunately, some people were missing from that celebration. I would have liked it very much if we could all have gotten together for this anniversary and celebrated our achievements. It would have been nice to talk to some of the people who insisted on having a local group in Franconia and speak about current developments in that regard. Sadly, that did not happen, but perhaps we can meet again at another Free Software related event.

Also at the party, there was an award ceremony [careful, link to non-free video site] in which so-called local heroes were honored for their contribution. It was quite an illustrious crowd with Guido Arnold for his hard work on a local group and in the education team, André Ockers for translating huge chunks of the FSFE website to Dutch, Simon Wächter for his work in Switzerland in general and his specific involvement in the Freedomvote project, Cryptie for her translations and volunteering and numerous booths, and several more people who do important work for the FSFE. I was also part of that group and I felt very honored, but also a bit out of place because even though I know that this was a selection of volunteers because not everyone can be honored, I always feel like I should do more for Free Software than I can get around to. Sometimes, there are even weeks where I don’t do anything, and even when I am active, there is always someone else involved and there are very few instances where someone completes a task without help from others. That’s what’s nice about being part of a larger community, and so I see this award as a thank you to the whole Free Software community and its many volunteers.

The two main organizers

Speaking of volunteering: That was a great experience as well. On Friday, we received a short introduction on how to present guests and how to host a room. The instructions were helpful and from my perspective as a volunteer, everything went smoothly. Of course I know that this was just my impression and for example Cellini might tell you a different story because at any given event of a certain size, there will be some last-minute changes or someone who does not show up or a number of things like that. However, everything was handled perfectly and especially Cellini did a great job of handling it all with a smile. That is also what Erik said about organizing the summit together with her. I am sure the two of them must have been incredibly stressed at times, but they never showed it and their hard work made the summit a great success.

Erik and Cellini at the feedback session

On Sunday night, after we had packed up the booth, we went out for dinner and drinks again, we complained about the terrible user interface of the ordering terminal at our table, and we generally had a lot of fun hanging out and sharing stories in a small group until the bar closed. Overall, I had a great time and as Cryptie put it, recharged my FSFE batteries. Now I am a bit sad that the party is over, but I guess that just makes me look forward more to a potential next summit. If we follow the advice of our communications officer Paul Brown, the next summit could be in a town such as Eberswalde which is close enough to a bigger city to make travel easy for everyone, but small enough that nothing ever happens there and so we would not compete with other events for press coverage. So with this idea in mind, let me say goodbye and see you next time, perhaps at an Eberswalde near you.

Wednesday, 14 September 2016

Copyright reform: Mo money to publishers, mo problems for everyone else

polina's blog | 15:56, Wednesday, 14 September 2016

In the last week of August 2016, two major leaks of the EU copyright reform became public: the draft Impact Assessment, and the draft Proposal for a Directive on copyright in the digital single market. FSFE has previously followed the reform on several occasions, providing comments on the Parliamentary own-initiative report as well as general comments on the Digital Single Market strategy.

In our assessments we asked to make clear that no exception to copyright should ever be limited by Digital Restrictions Management (DRM), to provide for a fully harmonised set of exceptions, including for the currently uncertain situation around Text and Data Mining (TDM), and to strengthen the principle of technological neutrality.

The draft Proposal, together with the draft Impact Assessment, is however far from actually tackling the existing problems with outdated copyright protection. Furthermore, they read like a cry from threatened businesses to secure their place under the sun at the expense of others. Instead of dealing with the problems that are actually hampering the EU from achieving the desired digital single market, the proposed reform, conveniently “backed up” by a contradictory Impact Assessment, ignores the existing problems, disregards fundamental rights and leans towards reinforcing the same issues in a larger and more “harmonised” way.

Text and Data Mining – new field for DRM and beyond?

The purpose of the ongoing copyright reform is, inter alia, to address existing disparities between different member states and to bring more legal clarity to copyright in the digital sphere. FSFE also supported that reasoning and asked the EC to uphold these plans for uniform rules across the EU for the interpretation of exceptions and limitations. This is most likely achievable by introducing legislative requirements for mandatory exceptions, as this will not leave any room for manoeuvre within the member states, allowing the necessary level of harmonisation across the EU.

In particular, we argued for an explicit right to extract data from lawfully accessed protected works. The Proposal includes a mandatory exception for uses of text and data mining technologies in the field of scientific research, illustration for teaching, and for the preservation of cultural heritage. The draft Proposal does include a reference to the fact that licences, including open licences, do not address the question of text and data mining with the necessary clarity, as they often simply make no reference to TDM.

The mandatory exception for TDM is therefore a welcome approach. The downside of the exception as proposed by the EC is, however, the fact that it is only granted to ‘research organisations’ – a university, a research institute or any other non-profit organisation acting in the public interest with the primary goal of conducting scientific research and providing educational services. The scope of this mandatory exception is hence limited, excluding everyone else with lawful access to protected works (citizens, SMEs etc).
A TDM exception directed at everyone who has lawfully accessed the work is, according to the Commission, unfavourable simply because “this option would have a strong negative effect on publisher’s TDM licensing market”. Ignoring the benefits to innovation, and the fact that such an exception would open opportunities for more businesses, the EC is evidently in favour of securing the existing situation in which publishers take advantage of the current legal uncertainty.

Technical safeguards

In addition, the TDM exception has a reference to the right of rightholders to apply technical safeguards to their works, as according to the draft Proposal, the new TDM exception has the potential to inflict a “high number of access requests to and downloads of their works”. This is a reference to so-called Digital Restrictions Management (DRM), which is widely used by rightholders to arbitrarily restrict users’ freedom to access and use lawfully acquired works and devices within the limits of copyright law. A slight hope of restricting this arbitrary practice, at least in the TDM exception, is contained in the second part of the proposed provision, which requires that “such measures shall not go beyond what is necessary” to achieve the objective of ensuring the security and integrity of the networks and databases where the works are hosted. In addition, these measures should not undermine the effective application of the exception.

Whilst it is definitely a better approach to address the lawfulness of DRM and include a safeguard for an effective application of the exception, it is, however, a worrisome direction to see such a requirement in a copyright regulation. It is evident that rightholders would need to ensure the technical security and integrity of their databases and networks in any case, irrespective of whether users access their works under a legitimate TDM exception or any other use. This vague provision sounds like rightholders can receive a wide-reaching right, in the name of copyright, to apply any technical measure that is “necessary” to safeguard their right. The provision lacks a requirement of proportionality in addition to necessity: that the measure is not only necessary to stop unlawful access to the database, but is also proportionate to the alleged aim and purpose of such a measure.

Link Tax – EC says ‘no’ but means ‘yes’

The tensions between hyperlinks and copyright have been on the agenda of the European Court of Justice on several occasions: when can a link to copyrighted material constitute a copyright infringement and why? The reform of existing European copyright rules can be seen as an opportunity to bring some clarity to the question and secure the fundamental principle of the internet: linking, i.e. merely referencing or quoting *existing* content online, shall be considered a lawful use of a protected work per se.

However, the EC decided to go after hyperlinks from a different direction: in the name of *holy* news publishers who are losing their revenues because of online uses, to “ensure the sustainability of the publishing industry”. In a nutshell, news publishers are granted so-called “neighbouring rights” for the reproduction and making available to the public of publications in respect of online uses. This means that news publishers get exclusive rights to prohibit any reproduction or “communication to the public” of their stories, including snippets of text or hyperlinks. According to EU case-law, a text as short as 11 words can be considered a “literary work” protected by copyright laws. Publishers enjoying such a broad and widespread right without any counterbalance from the other side is a serious threat to the existing online environment and to the internet we know, not to mention the implications for freedom of expression and the diversity of media.

According to the Impact Assessment, publishers are currently in the most disadvantageous situation, as they “rely on authors’ copyright that is transferred to them”. When did copyright become the right to maximise the revenues of struggling business models making their money off the creativity of other people? Furthermore, the Impact Assessment acknowledges the fact that so-called “ancillary rights” for publishers, already introduced in Spain (i.e. compensation to publishers from online service providers) and Germany (i.e. an exclusive right covering specifically the making available of press products to the public), have not proven effective in addressing publishers’ problems so far, in particular as they have not resulted in increased revenues for publishers from the major online service providers. Yet the EC is convinced that the best solution would be to simply combine two failed solutions and impose them on the rest of the member states.

Conclusion

The leaked documents indicate the worrisome direction taken by the EC in order to bring the EU to a digital single market. Unfortunately, the EC is disregarding everything that could help the EU enhance its digital environment. Instead of acknowledging the change the internet has brought to the use and distribution of copyrighted material, the EC is frantically trying to secure the interests and revenues of fading businesses first, rather than those of authors.

UPDATE: The official published documents on the reform confirm the plans of the European Commission as indicated by the leaks in unchanged form.


Image: Tom Morris, CC-BY-SA-3.0.

Tuesday, 06 September 2016

Candy from Strangers

Elena ``of Valhalla'' | 18:46, Tuesday, 06 September 2016

A few days ago I gave a talk at ESC https://www.endsummercamp.org/ about some reasons why I think that using software, and especially libraries, from the packages of a community-managed distribution is important and much better than alternatives such as PyPI, npm etc. This article is a translation of what I planned to say before forgetting bits of it and luckily adding them back as an answer to a question :)

When I was young, my parents taught me not to accept candy from strangers, unless they were present and approved of it, because there was a small risk of very bad things happening. It was of course a simplistic rule, but it had to be easy enough to follow for somebody who wasn't proficient (yet) in the subtleties of social interactions.

One of the reasons why it worked well was that following it wasn't a big burden: at home candy was plentiful and actual offers were rare: I only remember missing one piece of candy because of it, and while it may have been a great one, the ones I could have at home were also good.

Contrary to candy, offers of gratis software from random strangers are quite common: from suspicious looking websites to legit and professional looking ones, to platforms that are explicitly designed to allow developers to publish their own software with little or no checks.

Just like candy, there is also a source of trusted software in the Linux distributions, especially those led by a community: I mention mostly Debian because it's the one I know best, but the same principles apply to Fedora and, to some measure, to most of the other distributions. Like good parents, distributions can be wrong, and they do leave room for older children (and proficient users) to make their own choices, but still provide a safe default.

Among the unsafe sources there are many different cases and while they do share some of the risks, they have different targets with different issues; for brevity the scope of this article is limited to the ones that mostly concern software developers: language specific package managers and software distribution platforms like PyPi, npm and rubygems etc.

These platforms are extremely convenient both for the writers of libraries, who are enabled to publish their work with minor hassles, and for the people who use such libraries, because they provide an easy way to install and use a huge amount of code. They are of course also an excellent place for distributions to find new libraries to package and distribute, and this I agree is a good thing.

What I however believe is that getting code from such sources and using it without carefully checking it is even more risky than accepting candy from a random stranger on the street in an unfamiliar neighbourhood.

The risks aren't trivial: while you probably won't be taken hostage for ransom, your data could be, or your devices and the ones that run your programs could be used in some criminal act, causing at least some monetary damage both to yourself and to society at large.

If you're writing code that should be maintained over time there are also other risks even when no malice is involved, because each package on these platforms has a different policy with regard to updates, their backwards compatibility and what can be expected in case an old version is found to have security issues.

The very fact that everybody can publish anything on such platforms is both their biggest strength and their main source of vulnerability: while most of the people who publish their libraries do so with good intentions, attacks have been described and publicly tested, such as the fun typo-squatting experiment (http://incolumitas.com/2016/06/08/typosquatting-package-managers/, archived at http://web.archive.org/web/20160801161807/http://incolumitas.com/2016/06/08/typosquatting-package-managers/) that published harmless proof-of-concept code under common typos for famous libraries.

Contrast this with Debian, where everybody can contribute, but before they are allowed full unsupervised access to the archive they have to establish a relationship with the rest of the community, which includes meeting other developers in real life, at the very least to get their gpg keys signed.

This doesn't prevent malicious people from introducing software, but it significantly raises the effort required to do so, and once caught, people can usually be prevented from repeating it much more effectively than a simple ban on an online-only account would allow.

It is true that not every Debian maintainer actually does a full code review of everything that they allow in the archive, and in some cases it would be unreasonable to expect it, but in most cases they are familiar enough with the code to do at least bug triage, and most importantly they are in an excellent position to establish a relationship of mutual trust with the upstream authors.

Additionally, package maintainers don't work in isolation: a growing number of packages are being maintained by a team of people, and most importantly there are aspects that potentially involve the whole community, from the fact that new packages entering the distribution are publicly announced on a mailing list to the various distribution-wide QA efforts.

Going back to the language specific distribution platforms, sometimes even the people who manage the platform themselves can't be fully trusted to do the right thing: I believe everybody in the field remembers the npm fiasco https://lwn.net/Articles/681410/ where a lawyer letter requesting the removal of a package started a series of events that resulted in potentially breaking a huge amount of automated build systems.

Here some of the problems were caused by some technical policies that caused the whole ecosystem to be especially vulnerable, but one big issue was the fact that the managers of the npm platform are a private entity with no oversight from the user community.

Here not all distributions are equal, but contrast this with Debian, where the distribution is managed by a community that is based on a social contract https://www.debian.org/social_contract and is governed via democratic procedures established in its constitution https://www.debian.org/devel/constitution.

Additionally, the long history of the distribution model means that many issues have already been met, the mistakes have already been made, and there are established technical procedures to deal with them in a better way.

So, shouldn't we use language-specific distribution platforms at all? No! As developers we aren't children, we are adults who have the skills to distinguish between safe and unsafe libraries just as well as the average distribution maintainer can. What I believe we should do is stop treating them as a safe source that can be used blindly and reserve that status for actually trustworthy sources like Debian, falling back to the language-specific platforms only when strictly needed, and in that case:

actually check carefully what we are using, both by reading the code and by analysing the development and community practices of the authors;
if possible, share that work by becoming ourselves maintainers of that library in our favourite distribution, to prevent duplication of effort and to give back to the community whose work we get advantage from.

Friday, 02 September 2016

Internship changes in the FSFE

free software - Bits of Freedom | 10:29, Friday, 02 September 2016

If you're interested in applying for an internship in the FSFE, now is not the time to apply! :-) We've just introduced some changes to our internship program, to clarify the program and ensure it remains an attractive program for students and young professionals wanting to learn more about free software, especially when it comes to policy, legal issues, and public awareness work.

About a year ago, we introduced some changes to our internship program which split it into two parts: internships (for students) and traineeships (for everyone else). What we've seen in this period is that applications for student internships have dropped significantly: most people apply for traineeships, and it's been confusing for applicants (and us!) to have two separate programs.

The changes we've now introduced bring these two programs together under one program again: internships. However, this new program will be open for students and others alike. Occasionally we may decide to announce internship openings specifically to students who are doing this as part of their education, but in either case, it will all be under the same program.

Another change is that we will no longer (generally) accept unsolicited applications. They take a lot of time for us to process and in 99% of the cases, we cannot successfully place the applicant. Rather, we will be announcing, several times per year, specific internship openings with fixed deadlines to apply.

Each internship we announce will include more explicit information about which work area of the FSFE this internship will touch upon, and what background we would like to see in a successful applicant.

If you'd like to read more about our internships, check out our internship page and subscribe to our newsletter which is where all internship positions will be announced in the future!

Burgers 2016

Planet Fsfe on Iain R. Learmonth | 10:20, Friday, 02 September 2016

Ana and I travelled to Cambridge last weekend for the Debian UK BBQ. We travelled by train and it was a rather scenic journey. In the past, on long journeys, I’ve used APRS-IS to beacon my location and plot my route, but I have recently obtained the GPS module for my Yaesu VX-8DE and I thought I’d give some real RF APRS a go this time.

While the APRS IGate coverage in the UK is a little disappointing, as is evidenced by the map, a few cool things did happen. I received a simplex APRS message from a radio amateur 2M0RRT with the text “test test IO86ML” (but unfortunately didn’t notice until we’d long passed by, sorry for not replying!) and quite a few of my packets, sent from a 5 watt handheld in Cambridge, were heard by the station M0BPQ-1 in North London (digipeated by MB7UM).

My APRS Position Reports for the Debian UK BBQ 2016

Some of you will know that since my trip to the IETF in Berlin, I’ve been without a working laptop (it got a bit smashed up on the plane). This also caused me to miss the reminder to renew the expiry on my old GPG key, which I have now retired. My new GPG key is not yet in the Debian keyring but can be found in the Tor Project keyring already. A request has already been filed to replace the key in the Debian keyring, and thanks to attendees at the BBQ, I have some nice shiny signatures on my new key. (I’ll get to returning those signatures as soon as I can.)

We’ve been making a lot of progress with Debian Live, and the BBQ saw live-tasks being uploaded to the archive. This source package builds a number of metapackages, each of which configures a system to be used as a live CD for a different desktop environment. I would like as much of the configuration as possible to be done within the image, as this will help with reproducibility. A new upload for live-wrapper should be coming next week, and this version will allow these live-task-* packages to be used to build images for testing. I hope to have weekly builds of the live images running very soon.

As I’ve been without a computer for a while, I’m just getting back into things now, so expect that I’ll be slow to respond to communication for a while but I’ll also be making commits and uploads and trying to clear this backlog as quickly as I can (including my Tor Project backlog).

Arrival at FSFE Summit and QtCon 2016, Berlin

DanielPocock.com - fsfe | 08:46, Friday, 02 September 2016


The FSFE Summit and QtCon 2016 are getting under way at bcc, Berlin. The event comprises a range of communities, including KDE and VideoLAN, and there is also a wide range of people present who are active in other projects, including Debian, Mozilla, GSoC and many more.

Talks

Today, some time between 17:30 and 18:30 I'll be giving a lightning talk about Postbooks, a Qt and PostgreSQL based free software solution for accounting and ERP. For more details about how free, open source software can make your life easier by helping keep track of your money, see my comparison of free, open source accounting software.

Saturday, at 15:00 I'll give a talk about Free Communications with Free Software. We'll look at some exciting new developments in this area and once again, contemplate the question can we hope to use completely free and private software to communicate with our friends and families this Christmas? (apologies to those who don't celebrate Christmas, the security of your communications is just as important too).

A note about the entrance fee...

There is an entry fee for the QtCon event; however, people attending the FSFE Summit are invited to attend by making a donation. Contact FSFE for more details and consider joining the FSFE Fellowship.

Thursday, 01 September 2016

Python tips and tricks

English — mina86.com | 22:07, Thursday, 01 September 2016

Python! My old nemesis, we meet again. Actually, we meet all the time, but despite that there are always things which I cannot quite remember how to do and need to look them up. To help with the searching, here they are, collected in one post:

Converting a date into a timestamp

Neither datetime.date nor datetime.datetime has a method turning it into a timestamp, i.e. seconds since the Unix epoch. Programmers might be tempted to use strftime('%s') or time.mktime but that may result in a disaster:

>>> import datetime, time
>>> now, ts = datetime.datetime.utcnow(), time.time()
>>> ts - int(now.strftime('%s'))
7200.1702790260315
>>> ts - time.mktime(now.timetuple())
7200.1702790260315

In both cases, the expected value is around zero (since it measures the time between the utcnow and time calls), but in practice it is close to the value of the local timezone offset (UTC+2 in the example). Both methods take the local timezone into account, which is why the offset is added.

The proper solution is calendar.timegm (which for some bizarre reason is not included in the time module; yet another proof that Python is much less ‘clean’ than its advocates argue):

import calendar, datetime, time
now, ts = datetime.datetime.utcnow(), time.time()
print ts - calendar.timegm(now.timetuple())  # prints: 0.308976888657

Re-raising a Python exception preserving the back-trace

import sys

exc_info = []

def fail(): assert False

def run():
    try: fail()
    except: exc_info[:] = sys.exc_info()

def throw(): raise exc_info[0], exc_info[1], exc_info[2]

def call_throw(): throw()

if not run(): call_throw()

When throw raises the exception again, the back-trace will contain all frames that led up to the exception being caught in run:

$ python exc.py
Traceback (most recent call last):
  File "exc.py", line 15, in <module>
    if not run(): call_throw()
  File "exc.py", line 13, in call_throw
    def call_throw(): throw()
  File "exc.py", line 8, in run
    try: fail()
  File "exc.py", line 5, in fail
    def fail(): assert False
AssertionError

This is a bit like a bare raise in an except clause, but performing the re-raising at an arbitrary later time.

Flattening a list in Python

To turn a list of lists (or in more generic terms, an iterable of iterables) into a single iterable use one of:

def flatten(sequences):
    return itertools.chain.from_iterable(sequences)

def flatten(sequences):
    return (x for lst in sequences for x in lst)

(If you feel confused about nested comprehensions don’t feel bad — its syntax is broken. The thing to remember is that you write a normal nested for-if-for-if-… sequence but then put the final statement at the beginning of the line instead of at the end).

If all elements are known to be lists or tuples, using sum may be considered easier:

def flatten(lists):
    return sum(lists, [])

def flatten(tuples):
    return sum(tuples, ())

Exploring Alternative Operating Systems – Inferno

David Boddie - Updates (Full Articles) | 21:23, Thursday, 01 September 2016

Continuing my exploration of alternative operating systems, which was inspired by the successfully funded EOMA68 Crowd Supply campaign, I have been looking at Inferno in more detail than I've managed to before. My aim has been to install the operating system on real hardware rather than relying on emulators, since the eventual goal is to install it and try it out on my old laptop.

What is Inferno?

Inferno is an operating system in the Unix lineage that gets less attention than its predecessor, Plan 9 (site unavailable at the time of writing), and has never really enjoyed the success of other systems of a similar vintage. It's difficult to know if this is due to the initial choice of license, the architectural and conceptual choices, the current license, or perhaps it just isn't regarded as "cool" software.

Both Plan 9 and Inferno started as proprietary operating systems with license models that were common for the time, involving different licenses for different uses, the need to pay up front for the system, and limited rights for the users. By the time the third edition of Inferno was released, a license subscription would cost $300. The fourth edition, however, was released under the GNU General Public License (version 2), making it an interesting candidate for exploration.

As an aside, Plan 9 was available under the GPL-incompatible Lucent Public License until being re-released under the GNU GPL v2. This followed a previous license change that appeared to cause some issues for the OpenBSD community at the time.

In technical terms, Inferno differs from Plan 9 in one obvious way: software for the operating system is written in Limbo and compiled to bytecode for the Dis virtual machine rather than executed as native code. However, the virtual machine does have a Just In Time (JIT) compiler, so performance may be better than some other virtual machines. Still, if you want the power and flexibility of being able to compile and run native code within the OS, perhaps Inferno isn't for you.

Learning about Inferno

Rather than write notes about installation here, I forked/cloned the official Mercurial repository for Inferno on Bitbucket and enabled a Wiki for the repository. The Wiki is not a clone of the official repository's Wiki since that doesn't really contain much content. My plan is to write about installing and using "native" Inferno, intended for use on real hardware, rather than "hosted" Inferno which is run as an environment on another operating system.

There are plenty of resources already available about Inferno, of course, and I don't want to duplicate what others have already written. The documentation is a good starting point.

Another document worth looking at is Mechiel Lukkien's Getting Started with Inferno which summarises the key concepts behind the operating system, gives an overview of the root directory layout, describes installation of a hosted environment, and covers a few other topics related to Inferno. It also includes links to other online resources. Those familiar with GNU/Hurd's concept of translators may find the examples of Styx servers familiar.

It seems that people discover Inferno, perhaps via Plan 9, and go on to write a few articles about it.

Finally, a lot of historical information about Inferno is held at Cat-v.org.

Code availability, or code contributions

free software - Bits of Freedom | 07:49, Thursday, 01 September 2016

Like many, I've been reading with some interest the discussion on the ksummit-discuss mailing list, a discussion which has included quite a lot of the active free software community and has been summarized very well by LWN. Something I noted, which is also covered by the LWN article, is the difference between Linus Torvalds (Linux) and Matthew Garrett (FSF) in terms of their understanding of the goal of the GPL.

In this, I find myself siding much more with Linus than with Matthew, but for a very practical reason. The gist of the argument is that Linus feels that one of the core points of the GPL is to enable a continuous flow of contributions back to the project1. Matthew (and I would presume the FSF) feels this is not what the users of the Linux kernel want. He believes the users of the Linux kernel care "about being able to fix their shitty broken piece of hardware when the vendor won't ship updates."2.

I'm a user of the Linux kernel, and I don't want to fix my shitty broken pieces of hardware. I want the vendor of my hardware to work closely with the Linux developer community to get fixes into the mainline kernel, so that I don't have to fix my shitty broken piece of hardware.

There are undoubtedly users for whom having access to source code is important because it enables them to do something they would otherwise not have been able to do. But I would posit that to the wider community, it's more important to get contributions, bugfixes, improvements, back into the software they are actually using, which in 99% of the cases (if not more!) will be the software the upstream vendor ships.

It's almost as if Matthew is talking about benefits for the 1% whereas Linus is aiming for the benefit of the 99%. Mind you, the two are not mutually exclusive, at least not completely. If you force (through legal action or otherwise) a company to release source code under the GPLv2, someone else can take that source code, shape it up, and contribute it upstream. The other way around also works: a company can be a part of the community and contribute source code directly to upstream so that downstream users can enjoy it.

I don't know about you, but I don't feel that force is ever a solution to anything. So I would emphasize community building over enforcement, every day of the week.

Wednesday, 31 August 2016

Managing your talks at the FSFEsummit with giggity

Matthias Kirschner's Web log - fsfe | 01:41, Wednesday, 31 August 2016

I have just started organising my own conference schedule for the FSFEsummit, which takes place this week from 2 to 4 September. I am using "giggity" to keep track of my schedule. Although giggity was on my list for next year's "I love Free Software day", I thought it might help some attendees if I already write about it now.

The Berlin Congress Centre at Berlin Alexanderplatz

The BCC close to Berlin Alexanderplatz this morning; already waiting for Free Software folks.

Besides giggity there is also an official QtCon app. Unfortunately it is not available in F-Droid. Ekke, the author, told me he does not know how to get a Qt app into F-Droid. If any of you know how and would like to help, please get in contact with him.

But in general I prefer to have one program which I can use for several conferences, so I do not have to install a new app for every conference. With giggity that works most of the time.

The first thing you have to do is to install giggity from F-Droid. (In case you do not have F-Droid and want to find out more about it, read the FSFE's leaflet about it.) Then add the QtCon schedule by clicking on the "plus" sign in the upper right corner. There you have to add https://conf.qtcon.org/en/qtcon/public/schedule.xml.

Afterwards you will have the conference schedule available in giggity. The only thing I had to get used to is that the time is on the horizontal axis and the conference rooms are on the vertical. When you want to attend a talk, you can add it to My events by ticking Remind me. Just play around with it a bit. I really like it.

Looking forward to seeing many of you at the FSFEsummit!

Tuesday, 30 August 2016

Get your FSFE15 sticker

Matthias Kirschner's Web log - fsfe | 00:31, Tuesday, 30 August 2016

To celebrate the Free Software Foundation Europe's anniversary, Markus Meier created an "FSFE15" sticker for us. They are available in two sizes: 5cm and 9.5cm. You can get them from the FSFE's order page or at the FSFE Summit from 2 to 4 September in Berlin.

The FSFE15 sticker on my laptop

My laptop with the brand-new FSFE15 sticker.

Friday, 26 August 2016

IRC cloaks on Freenode available

free software - Bits of Freedom | 07:54, Friday, 26 August 2016

If you're a Freenode user, and an FSFE fellow or volunteer, perhaps regularly a part of our #fsfe channel, you now have the option of getting an FSFE-affiliated cloak. For those of you who don't know, an IRC cloak is a replacement for your IP address or domain name when people query your name on IRC, so instead of showing up as "jonas at 192.243.8.174" (as an example), you'd show up as "jonas at fsfe/jonas" 1.

I documented this here: https://wiki.fsfe.org/TechDocs/FellowshipServices#IRC

If you want a cloak, you need to have a NickServ-registered name, an FSFE account (by being a Fellow or a volunteer2), and you need to let me know you want one: I'll communicate this to the Freenode staff, who will activate your cloak.

As there may be quite a few cloak requests in the beginning, I'll collect requests and send to Freenode on a ~weekly basis. Get in touch at jonasatfsfe.org :)

  1. Please note, though, that while a cloak gives some privacy, it does not give you perfect privacy, and it's generally not much of a problem for someone to figure out your IP address anyway.

  2. See https://wiki.fsfe.org/TechDocs/AccountCreation for more information.

Tuesday, 23 August 2016

St. Olaf's GNU

free software - Bits of Freedom | 18:41, Tuesday, 23 August 2016

Over the past close to 20 years, I've spoken to, and interviewed, many hackers from the free software movement's early days. One of my pet projects, which I'm gathering more data for at the moment, is writing a book tentatively called "Untold stories & unsung heroes". Here's a draft excerpt of one of those stories. Do you want to read more? Well, I guess you should then fund me to write the book properly :-)


The VAX 11/780 at St. Olaf's College in Minnesota wasn't equipped to handle 30 concurrent users running GNU Emacs, and barely anyone used it. But after stumbling over a tape reel from the "Unix Users of Minnesota" group containing Emacs, Mike Haertel was hooked.

While Emacs had been around for a while, and the GNU C Compiler reached St. Olaf's on another tape reel in 1987, it was grep, join, and diff which became Mike's project for a few months in the summer of 1988 when he joined fellow St. Olaf's student Pete TerMaat as a programmer for the FSF in Boston. It wasn't uncommon for the FSF to hire programmers for the summer, and some of them stayed on for long after. With Mike working on the utilities, Pete took to maintaining gdb.

"After Richard hired me for the summer (based on one code sample that I had emailed him, and no interview, and no resume, and neverhaving met me!) he asked me if there was anybody else I could recommend who might be interested. I thought about the skills of the other programmers I knew from St. Olaf and decided to try to recruit Pete. He was a bit of a hard sell, but eventually I convinced him to write some sample code and send it to Richard," recalls Mike.

Without intention, St. Olaf's had become a recruitment ground for the FSF, at least for that summer. With Mike and Pete on board, Mike's old flatmate from St. Olaf's, David Mackenzie, wasn't far behind.

While working for the Environmental Defense Fund in Washington, DC, David took to rewriting some of the standard tools then available in 4.3BSD, improving them and resolving some of their limitations. As the Environmental Defense Fund wasn't in the software business, they saw no value in the tools David was writing and allowed him to contribute this back to the GNU Project.

"I was tired of editing Makefiles several times a day," David Mackenzie told me, as an interview with him turned to his later work. Having worked for the FSF as a summer intern, David continued contributing code and eventually found himself as the maintainer of a small mountain of packages.

Taking an hour here and there, as he could and as it was needed, autoconf eventually grew out of the necessity of making the GNU fileutils compile on several different flavors of Unix, each with their own specific needs.

"I'm glad people are still keeping that software useful and up to date, and I'm glad I don't have to do it," he says.

Sunday, 21 August 2016

Libreboot 20160818 released

Marcus's Blog | 18:49, Sunday, 21 August 2016

libreboot-simple-2.50x1.00

A new version of Libreboot has been released. It brings support for new boards and a lot of improvements for nearly every supported device. All GRUB ROMs now come with a preloaded SeaBIOS in order to enable classic BIOS operations, like booting from a CD-ROM.

To update, first download the util archive and extract it. It contains the flash script, which can later be used to update your Libreboot installation (if you are already on Libreboot).

Now please check your ROM size:

dmidecode | grep ROM\ Size

and download the relevant ROM for your machine.

The easiest way is to extract the archive and copy the ROM to the util folder. VESA ROMs come with a graphical boot splash; text ROMs are plain text.

Please note that the current release no longer contains a prebuilt version of flashrom, so you have to build it on your own.

You might need to install some build deps in advance, e.g. on Trisquel:

sudo apt-get install build-essential pciutils usbutils libpci-dev libusb-dev libftdi1 libftdi-dev zlib1g-dev

Afterwards you can run make in that folder and flashrom should be built.
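
For example (assuming the flashrom sources live in a flashrom subdirectory of the extracted util archive, which may vary between releases):

cd flashrom   # the flashrom source directory shipped in the util archive
make          # builds the flashrom binary used by the flash script
cd ..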

In order to flash, simply run the flash command in the util root folder like:

sudo ./flash update yourrom.rom

where you have to replace yourrom.rom with the name of the ROM you want to flash.

If you see “Verifying flash… VERIFIED.” you can safely shut down your device and start it again, with a fresh version of Libreboot.

Trying out Fish – the Friendly Interactive Shell (the first few days)

Hook’s Humble Homepage | 12:00, Sunday, 21 August 2016

After over a decade of using Bash and almost a decade of using Zsh I decided to take Fish for a spin.

My reason for doing so was that even though Z-Shell is very powerful, its learning curve is (too) steep and the defaults are incredibly Spartan. Sometimes I get the feeling that while I know there is an option or tweak in Zsh to scratch my itch, at the same time I simply cannot be bothered to find and implement it – the relative effort is just too big. This holds true even after I (reluctantly) started using Oh My ZSH! to more easily manage my scripts.

Fish has been around for over a decade now and was created with user-friendliness and interactivity in mind (hence the name!). Another benefit that I found is that it comes with all its great features enabled by default.

Its major down-side is that it breaks the POSIX standard in certain aspects and is therefore not 100% compatible1 with other (POSIX) shells like Bash and Zsh. Fish claims that it breaks the standard only where and when it makes sense2.

With this series of posts, I do not intend to convince anyone to use Fish, but simply to present my experience as I learn. It might well be that at the end of this excursion I will not make the switch.

Because getting used to a new environment – such as a desktop or a shell – takes its time, I plan to write about it as some sort of a diary. This first post covers the very first few days that I spent with Fish, the next one will describe my experiences after a few weeks, followed (hopefully) by one providing an overview after some months of use.

Day 1 – Getting to know each other

Read the oldest article on Fish on LWN where Axel Liljencrantz, the original author of Fish, introduces (t)his new shell. Great and concise read!

Installed Fish on my Mageia laptop – easy peasy.

First impression:

  • the defaults seem nice and full-featured;
  • it is odd to not see my own familiar Zsh prompt any more, but the Fish default is not bad;
  • love Fish’s tab completion – I might need some more getting used to, but it is very neat and powerful;
  • its syntax error highlighting and wild-card expansion previews are amazing;
  • ditto for the tab completion when searching for man pages;
  • help command is great and even context-aware, but I am not sure yet if I like that it opens documentation in the browser;
  • open command is very useful if you cannot remember which programme you need to open a specific file (or are simply lazy) – e.g. open my_doc.pdf on my KDE desktop will automatically open the PDF up in Okular.

The thing I miss most on my Zsh prompt is that I wrote it to be VCS-aware. Luckily a quick search showed that (at least for Git) this is very much possible to do. So far the two most promising HowTos seem to be the one by Mariusz Smykuła and an older one by Martin Klepsch.

I just stumbled across Oh My Fish! – the Fish equivalent of Oh My ZSH! Since I am not a fan of huge bundles of plugins and like knowing what my scripts do, I will try to avoid it for as long as possible, but I thought it worth mentioning here.

Surprise of the day – fish_config

I found the following command by monkeying around and pressing tab on random strings to see how it works.

hook@hermes:~
➤ fish_config

To my huge surprise it opens up a page in the web browser where you can view and edit settings of your Fish. It even includes prompt theme previews and a colour picker. Wow … simply wow!

For now I only played around with it a bit and just changed the prompt.

At this stage, I am already considering trying Fish as my default shell (at least for a while). Maybe tomorrow, we shall see …

Day 2 – Climbing the easy learning curve

Started reading the tutorial – basically everything that I stumbled upon so far is covered there. Plus tonnes more of course! I found it just the right length for a tutorial – not too short, not too long.

The tab completion is great. I found some occasions where it beat Zsh, but also a few where it is not on par with Zsh yet. At the moment I would call it a tie.

There is a difference between using single and double quotes (as in other shells), and it becomes apparent as soon as you type a variable. Namely, the colour of the $ sign in front of the variable stays the same as the rest of the string if it is used literally (in single quotes), and changes to contrast with the rest of the string if the variable will be substituted/expanded for its value.
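
A quick illustration of the behaviour behind that colouring (typed at a fish prompt; the highlighting itself obviously does not survive copy and paste):

➤ set name world
➤ echo "hello $name"    # double quotes: $name is expanded
hello world
➤ echo 'hello $name'    # single quotes: $name is taken literally
hello $name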

Yesterday I complained about tab completion missing for a few commands that I use. Today I found out about the fish_update_completions command, which does exactly what you would expect – it scans the manpages for commands and creates tab completion patterns for them.

I tried to set Fish as my (normal user's) default shell on Mageia, but could not find the option in its official settings GUI, and when I tried to set it via chsh I was presented with the following error:

chsh: "/usr/bin/fish" is not listed in /etc/shells.
Use chsh -l to see list.

There is a bug report for this upstream already, but upstream considers it a downstream/packaging bug. Consequently, I opened a bug report on Mageia's bugzilla.

Day 3 – Setting Fish as default

Even after Fish scanned the man pages, I noticed that for a few commands it still does not provide tab completion. Luckily it seems that this is programmable and I can add what I need later (and maybe even send it upstream).
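
As a taste of what that looks like, a missing completion can be added with fish's complete builtin (mytool is a made-up command name, purely to illustrate the syntax):

# offer three sub-commands whenever "mytool" is being tab-completed
complete -c mytool -a "start stop status" -d "mytool sub-command"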

Making Fish my default on Mageia

The bug I opened up yesterday was immediately fixed3 by the Mageia community – great job!

Before that I already manually added /usr/bin/fish to /etc/shells and made it my default. I like it enough already to try it for real. And if I get fed up, I can just make Zsh my default again.

Caveat: The following command, which you will find in Fish's tutorial, will not work on Mageia, and not for other shells either:

chsh -s `which fish`

This is because which (on Mageia) finds the shells in /usr/bin/, while in /etc/shells they are stored with their /bin/ paths. This seems to be caused by /bin being just a symlink to /usr/bin.

The correct CLI way (on Mageia) is therefore:

hook at hermes on pts/3 in ~
{ # }: cat /etc/shells
/bin/bash
/bin/csh
/bin/sh
/bin/tcsh
/bin/zsh
/usr/bin/dash
/bin/fish
hook at hermes on pts/3 in ~
{ # }: chsh -s /bin/fish

Making Fish my default on Jolla SailfishOS

In order to use Fish under Jolla’s SailfishOS, you need to install both Fish and Man-DB from the OpenRepos/Warehouse. Man-DB is not strictly necessary, but if you do not install it, you will miss useful hints and some tab completion.

After that you can simply run:

[nemo@Jolla ~]$ chsh -s /usr/local/bin/fish

… and enter your phone’s dev/admin/root password.

Do not forget to (re)run fish_update_completions to update the tab completion.

Day 4 – Serious use of Fish

Almost entirely got used to the tab completion and arrow key history. It is pretty intuitive. I thought I would miss the Z-Shell in this regard, but I really do not. There are a few commands still missing, but these seem to be easy to generate or write by hand.

Scripting

Started migrating some aliases from my Zsh dotfiles into Fish functions and found it very simple and intuitive using function (the in-line function creator), funcsave (save a function if you like it for future use) and funced (the in-line editor for saved functions). The fact that you do not have to dive into a separate directory and open a text editor, but can simply create a function where you are as soon as you need one, actually makes quite the difference in the user experience.

Already created a few scripts/aliases with function and funcsave:

function ttlog --description 'Shortcut for editing GTimeLog'
  vim ~/.gtimelog/timelog.txt
end
function tree --description 'Tree with colours and (by default) just two levels deep'
        command tree -C -L 2 $argv
end
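
For completeness, persisting a function defined interactively is a single command – a quick sketch using the two functions above:

# funcsave writes the definition to ~/.config/fish/functions/<name>.fish,
# so it is available in every future session.
funcsave ttlog
funcsave tree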

The scripting syntax in Fish is different from most other popular shells, but I found it very intuitive to write new scripts and aliases. But …

Disappointment of the day – scripting GnuPG in Fish

In contrast, I did not manage to migrate the following one, because gpg-agent --daemon --enable-ssh-support produces output that includes variables in the form of VARIABLE=content, which Fish fails to parse.

#!/bin/zsh
pkill gpg-agent
if [ -x /usr/bin/gpg-agent ]; then
        eval $(/usr/bin/gpg-agent --daemon --enable-ssh-support)
fi

I am sure this is solvable – if nothing else through the use of sed, but for the time being I am OK with simply keeping that script separate.
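
For the record, here is a rough, untested sketch of how it could probably be done natively in Fish, assuming the usual sh-style "VARIABLE=value; export VARIABLE;" lines that gpg-agent prints:

#!/usr/bin/fish
pkill gpg-agent
if test -x /usr/bin/gpg-agent
    # Strip the trailing "; export VARIABLE;" with sed,
    # then export each NAME=value pair with set -gx.
    for line in (/usr/bin/gpg-agent --daemon --enable-ssh-support | sed 's/;.*$//')
        set -gx (echo $line | cut -d '=' -f 1) (echo $line | cut -d '=' -f 2-)
    end
end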

Still, for the first few days, I am very happy with how this trial is progressing and am looking forward to how it will continue in the next weeks ☺

hook out → here fishy fishy fishy…


  1. Well, truth be told, even Bash is not 100% POSIX compliant out of the box, but can be very easily configured to be. 

  2. It is not alone in the stance that it might slowly be time to break dated standards where it makes sense and hopefully replace them with more modern ones. Among notable examples with a similar idea, Nix & NixOS pop to mind, as well as Lennart Poettering. 

  3. Kudos to Jani “wally_” Välimaa

Friday, 19 August 2016

Conferences I will attend in the next two months

Hook’s Humble Homepage | 20:00, Friday, 19 August 2016

It feels like I just came back from Berlin1 and I have to go back again. With so much happening in the next two months, it seems fitting to join others and write a blog report of my future whereabouts.

I am going to Akademy 2016!

1-9 September I will be in Berlin for the QtCon/FSFE Summit/Akademy/…, where together with Polina Malaja I am giving a short presentation on “FSFE Legal Team & Legal Network – Yesterday, Today and Tomorrow” as well as moderating a panel “Legal entities for Free Software projects” at the FSFE Summit. During Akademy I am also hosting a BoF session on the topic of FOSS legal & licensing Q&A and, if it is already possible, I hope to gather some feedback from the community on the current draft of the FLA 2.0.

My Berlin trip will be interrupted 4-6 September when I have to fly (via Manchester) to Hebden Bridge to the Wuthering Bytes, where – or more specifically at Open for Business – I am presenting on the topic of “Effective FOSS adoption, compliance and governance – is it really that hard?”2. After that I will come back to Berlin to QtCon. It is a huge pity that the two conferences clash – I would love to stay the whole length of both!

As every year so far, 13-15 September I am going to Vienna together with Domen Savič (“the hacktivist”) and Lenart Kučić (“the investigative journalist”) to represent Slovenia as “the ICT lawyer” at the Regional Consultation on Internet Freedom – Gaining a Digital Edge3. This conference, run by the OSCE and SHARE Foundation, has quite a unique set-up of mixing relevant lawyers, journalists and activists from the wider region.

Then after a short break I return to Berlin, at least 4-7 October for LinuxCon Europe, where Catharina Maracke and I will hold a BoF session on the topic of “CA, CLA, CAA, DCO, FLAOMG!”, during which we want to clarify (m)any misconceptions regarding different contributor agreements. Again, if there is interest we would be delighted to gather feedback on the draft FLA 2.0.

Update: added link to FSFE Summit panel & fixed date of LinuxCon.

hook out → see you in Berlin and Vienna! … egad, this will be quite a busy month!


  1. There was a meeting of The Center for the Cultivation of Technology that I attended. More on this later. 

  2. Hint: No, it is not ☺ 

  3. There is no website for it, but I found some small video summaries online from the 2013, 2014 and 2015 editions. 

Foreman's Ansible integration

Colors of Noise - Entries tagged planetfsfe | 09:16, Friday, 19 August 2016

Judging from some recent discussions, it seems not to be that well known that Foreman (a lifecycle tool for your virtual machines) integrates well not only with Puppet but also with ansible. This is a list of tools I find useful in this regard:

  • The ansible-module-foreman ansible module allows you to set up all kinds of resources like images, compute resources, hostgroups, subnets and domains within Foreman itself via ansible, using Foreman's REST API. E.g. creating a hostgroup looks like:

    - foreman_hostgroup:
        name: AHostGroup
        architecture: x86_64
        domain: a.domain.example.com
        foreman_host: "{{ foreman_host }}"
        foreman_user: "{{ foreman_user }}"
        foreman_pass: "{{ foreman_pw }}"
    
  • The foreman_ansible plugin for Foreman allows you to collect reports and facts from ansible provisioned hosts. This requires an additional hook in your ansible config like:

    [defaults]
    callback_plugins = path/to/foreman_ansible/extras/
    

    The hook will report back to Foreman after a playbook has finished.

  • There are several options for creating hosts in Foreman via the ansible API. I'm currently using ansible_foreman_module, which is tailored for image based installs. In a playbook this looks like:

    - name: Build 10 hosts
      foremanhost:
        name: "{{ item }}"
        hostgroup: "a/host/group"
        compute_resource: "hopefully_not_esx"
        subnet: "webservernet"
        environment: "{{ env|default(omit) }}"
        ipv4addr: "{{ from_ipam|default(omit) }}"
        # Additional params to tag on the host
        params:
            app: varnish
            tier: web
            color: green
        api_user: "{{ foreman_user }}"
        api_password: "{{ foreman_pw }}"
        api_url: "{{ foreman_url }}"
      with_sequence:  start=1 end=10 format="newhost%02d"
    
  • The foreman_ansible_inventory is a dynamic inventory script for ansible that fetches all your hosts and groups via the Foreman REST APIs. It automatically groups hosts in ansible from Foreman's hostgroups, environments, organizations and locations and allows you to build additional groups based on any available host parameter (and combinations thereof). So using the above example and this configuration:

    [ansible]
    group_patterns = ["{app}-{tier}",
                      "{color}"]
    

    it would build the additional ansible groups varnish-web and green and put the above hosts into them. This way you can easily select the hosts for e.g. blue-green deployments. You don't have to pass the parameters during host creation; if you have parameters on e.g. domains or hostgroups, these are available for grouping via group_patterns too.

  • If you're grouping your hosts via the above inventory script and you use lots of parameters, then having these displayed on the detail page can be useful. You can use the foreman_params_tab plugin for that.

There's also support for triggering ansible runs from within Foreman itself but I've not used that so far.

Thursday, 18 August 2016

EOMA68: The Campaign (and some remarks about recurring criticisms)

Paul Boddie's Free Software-related blog » English | 14:55, Thursday, 18 August 2016

I have previously written about the EOMA68 initiative and its objective of making small, modular computing cards that conform to a well-defined standard which can be plugged into certain kinds of device – a laptop or desktop computer, or maybe even a tablet or smartphone – providing a way of supplying such devices with the computing power they all need. This would also offer a convenient way of taking your computing environment with you, using it in the kind of device that makes most sense at the time you need to use it, since the computer card is the actual computer and all you are doing is putting it in a different box: switch off, unplug the card, plug it into something else, switch that on, and your “computer” has effectively taken on a different form.

(This “take your desktop with you” by actually taking your computer with you is fundamentally different to various dubious “cloud synchronisation” services that would claim to offer something similar: “now you can synchronise your tablet with your PC!”, or whatever. Such services tend to operate rather imperfectly – storing your files on some remote site – and, of course, exposing you to surveillance and convenience issues.)

Well, a crowd-funding campaign has since been launched to fund a number of EOMA68-related products, with an opportunity for those interested to acquire the first round of computer cards and compatible devices, those devices being a “micro-desktop” that offers a simple “mini PC” solution, together with a somewhat radically designed and produced laptop (or netbook, perhaps) that emphasises accessible construction methods (home 3D printing) and alternative material usage (“eco-friendly plywood”). In the interests of transparency, I will admit that I have pledged for a card and the micro-desktop, albeit via my brother for various personal reasons that also delayed me from actually writing about this here before now.

An EOMA68 computer card in a wallet

An EOMA68 computer card in a wallet (courtesy Rhombus Tech/Crowd Supply)

Of course, EOMA68 is about more than just conveniently taking your computer with you because it is now small enough to fit in a wallet. Even if you do not intend to regularly move your computer card from device to device, it emphasises various sustainability issues such as power consumption (deliberately kept low), long-term support and matters of freedom (the selection of CPUs that completely support Free Software and do not introduce surveillance backdoors), and device longevity (that when the user wants to upgrade, they may easily use the card in something else that might benefit from it).

This is not modularity to prove some irrelevant hypothesis. It is modularity that delivers concrete benefits to users (that they aren’t forced to keep replacing products engineered for obsolescence), to designers and manufacturers (that they can rely on the standard to provide computing functionality and just focus on their own speciality to differentiate their product in more interesting ways), and to society and the environment (by reducing needless consumption and waste caused by the upgrade treadmill promoted by the technology industries over the last few decades).

One might think that such benefits might be received with enthusiasm. Sadly, it says a lot about today’s “needy consumer” culture that instead of welcoming another choice, some would rather spend their time criticising it, often to the point that one might wonder about their motivations for doing so. Below, I present some common criticisms and some of my own remarks.

(If you don’t want to read about “first world” objections – typically about “new” and “fast” – and are already satisfied by the decisions made regarding more understandable concerns – typically involving corporate behaviour and licensing – just skip to the last section.)

“The A20 is so old and slow! What’s the point?”

The Allwinner A20 has been around for a while. Indeed, its predecessor – the A10 – was the basis of initial iterations of the computer card several years ago. Now, the amount of engineering needed to upgrade the prototypes previously made for the A10 so that they use the A20 instead is minimal, at least in comparison to adopting another CPU (that would probably require a redesign of the circuit board for the card). And hardware prototyping is expensive, especially when unnecessary design changes have to be made, when they don’t always work out as expected, and when extra rounds of prototypes are then required to get the job done. For an initiative with a limited budget, the A20 makes a lot of sense because it means changing as little as possible, benefiting from the functionality upgrade and keeping the risks low.

Obviously, there are faster processors available now, but as the processor selection criteria illustrate, if you cannot support them properly with Free Software and must rely on binary blobs which potentially violate the GPL, it would be better to stick to a more sustainable choice (because that is what adherence to Free Software is largely about) even if that means accepting reduced performance. In any case, at some point, other cards with different processors will come along and offer faster performance. Alternatively, someone will make a dual-slot product that takes two cards (or even a multi-slot product that provides a kind of mini-cluster), and then with software that is hopefully better-equipped for concurrency, there will be ways of improving the performance other than finding faster processors and hoping that they meet all the practical and ethical criteria.

“The RasPi 3…”

Lots of people love the Raspberry Pi, it would seem. The original models delivered a cheap, adequate desktop computer for a sum that was competitive even with some microcontroller-based single-board computers that are aimed at electronics projects and not desktop computing, although people probably overlook rivals like the BeagleBoard and variants that would probably have occupied a similar price point even if the Raspberry Pi had never existed. Indeed, the BeagleBone Black resides in the same pricing territory now, as do many other products. It is interesting that both product families are backed by certain semiconductor manufacturers, and the Raspberry Pi appears to benefit from privileged access to Broadcom products and employees that is denied to others seeking to make solutions using the same SoC (system on a chip).

Now, the first Raspberry Pi models were not entirely at the performance level of contemporary desktop solutions, especially by having only 256MB or 512MB RAM, meaning that any desktop experience had to be optimised for the device. Furthermore, they employed an ARM architecture variant that was not fully supported by mainstream GNU/Linux distributions, in particular the one favoured by the initiative: Debian. So a variant of Debian has been concocted to support the devices – Raspbian – and despite the Raspberry Pi 2 being the first device in the series to employ an architecture variant that is fully supported by Debian, Raspbian is still recommended for it and its successor.

Anyway, the Raspberry Pi 3 having 1GB RAM and being several times faster than the earliest models might be more competitive with today’s desktop solutions, at least for modestly-priced products, and perhaps it is faster than products using the A20. But just like the fascination with MHz and GHz until Intel found that it couldn’t rely on routinely turning up the clock speed on its CPUs, or everybody emphasising the number of megapixels their digital camera had until they discovered image noise, such number games ignore other factors: the closed source hardware of the Raspberry Pi boards, the opaque architecture of the Broadcom SoCs with a closed source operating system running on the GPU (graphics processing unit) that has control over the ARM CPU running the user’s programs, the impracticality of repurposing the device for things like laptops (despite people attempting to repurpose it for such things, anyway), and the organisation behind the device seemingly being happy to promote a variety of unethical proprietary software from a variety of unethical vendors who clearly want a piece of the action.

And finally, with all the fuss about how much faster the opaque Broadcom product is than the A20, the Raspberry Pi 3 has half the RAM of the EOMA68-A20 computer card. For certain applications, more RAM is going to be much more helpful than more cores or “64-bit!”, which makes us wonder why the Raspberry Pi 3 doesn’t support 4GB RAM or more. (Indeed, the current trend of 64-bit ARM products offering memory quantities addressable by 32-bit CPUs seems to have missed the motivation for x86 finally going 64-bit back in the early 21st century, which was largely about efficiently supporting the increasingly necessary amounts of RAM required for certain computing tasks, with Intel’s name for x86-64 actually being at one time “Extended Memory 64 Technology“. Even the DEC Alpha, back in the 1990s, which could be regarded as heralding the 64-bit age in mainstream computing, and which arguably relied on the increased performance provided by a 64-bit architecture for its success, still supported 64-bit quantities of memory in delivered products when memory was obviously a lot more expensive than it is now.)

“But the RasPi Zero!”

Sure, who can argue with a $5 (or £4, or whatever) computer with 512MB RAM and a 1GHz CPU that might even be a usable size and shape for some level of repurposing for the kinds of things that EOMA68 aims at: putting a general purpose computer into a wide range of devices? Except that the Raspberry Pi Zero has had persistent availability issues, even ignoring the free give-away with a magazine that had people scuffling in newsagents to buy up all the available copies so they could resell them online at several times the retail price. And it could be perceived as yet another inventory-dumping exercise by Broadcom, given that it uses the same SoC as the original Raspberry Pi.

Arguably, the Raspberry Pi Zero is a more ambiguous follow-on from the Raspberry Pi Compute Module that obviously was (and maybe still is) intended for building into other products. Some people may wonder why the Compute Module wasn’t the same success as the earlier products in the Raspberry Pi line-up. Maybe its lack of success was because any organisation thinking of putting the Compute Module (or, these days, the Pi Zero) in a product to sell to other people is relying on a single vendor. And with that vendor itself relying on a single vendor with whom it currently has a special relationship, a chain of single vendor reliance is formed.

Any organisation wanting to build one of these boards into their product now has to have rather a lot of confidence that the chain will never weaken or break and that at no point will either of those vendors decide that they would rather like to compete in that particular market themselves and exploit their obvious dominance in doing so. And they have to be sure that the Raspberry Pi Foundation doesn’t suddenly decide to get out of the hardware business altogether and pursue those educational objectives that they once emphasised so much instead, or that the Foundation and its manufacturing partners don’t decide for some reason to cease doing business, perhaps selectively, with people building products around their boards.

“Allwinner are GPL violators and will never get my money!”

Sadly, Allwinner have repeatedly delivered GPL-licensed software without providing the corresponding source code, and this practice may even persist to this day. One response to this has referred to the internal politics and organisation of Allwinner and that some factions try to do the right thing while others act in an unenlightened, licence-violating fashion.

Let it be known that I am no fan of the argument that there are lots of departments in companies and that just because some do some bad things doesn’t mean that you should punish the whole company. To this day, Sony does not get my business because of the unsatisfactorily-resolved rootkit scandal and I am hardly alone in taking this position. (It gets brought up regularly on a photography site I tend to visit where tensions often run high between Sony fanatics and those who use cameras from other manufacturers, but to be fair, Sony also has other ways of irritating its customers.) And while people like to claim that Microsoft has changed and is nice to Free Software, even to the point where people refusing to accept this assertion get criticised, it is pretty difficult to accept claims of change and improvement when the company pulls in significant sums from shaking down device manufacturers using dubious patent claims on Android and Linux: systems it contributed nothing to. And no, nobody will have been reading any patents to figure out how to implement parts of Android or Linux, let alone any belonging to some company or other that Microsoft may have “vacuumed up” in an acquisition spree.

So, should the argument be discarded here as well? Even though I am not too happy about Allwinner’s behaviour, there is the consideration that as the saying goes, “beggars cannot be choosers”. When very few CPUs exist that meet the criteria desirable for the initiative, some kind of nasty compromise may have to be made. Personally, I would have preferred to have had the option of the Ingenic jz4775 card that was close to being offered in the campaign, although I have seen signs of Ingenic doing binary-only code drops on certain code-sharing sites, and so they do not necessarily have clean hands, either. But they are actually making the source code for such binaries available elsewhere, however, if you know where to look. Thus it is most likely that they do not really understand the precise obligations of the software licences concerned, as opposed to deliberately withholding the source code.

But it may well be that unlike certain European, American and Japanese companies for whom the familiar regime of corporate accountability allows us to judge a company on any wrongdoing, because any executives unaware of such wrongdoing have been negligent or ineffective at building the proper processes of supervision and thus permit an unethical corporate culture, and any executives aware of such wrongdoing have arguably cultivated an unethical corporate culture themselves, it could be the case that Chinese companies do not necessarily operate (or are regulated) on similar principles. That does not excuse unethical behaviour, but it might at least entertain the idea that by supporting an ethical faction within a company, the unethical factions may be weakened or even eliminated. If that really is how the game is played, of course, and is not just an excuse for finger-pointing where nobody is held to account for anything.

But companies elsewhere should certainly not be looking for a weakening of their accountability structures so as to maintain a similarly convenient situation of corporate hypocrisy: if Sony BMG does something unethical, Sony Imaging should take the bad with the good when they share and exploit the Sony brand; people cannot have things both ways. And Chinese companies should comply with international governance principles, if only to reassure their investors that nasty surprises (and liabilities) do not lie in wait because parts of such businesses were poorly supervised and not held accountable for any unethical activities taking place.

It is up to everyone to make their own decision about this. The policy of the campaign is that the A20 can be supported by Free Software without needing any proprietary software, does not rely on any Allwinner-engineered, licence-violating software (which might be perceived as a good thing), and is merely the first step into a wider endeavour that could be conveniently undertaken with the limited resources available at the time. Later computer cards may ignore Allwinner entirely, especially if the company does not clean up its act, but such cards may never get made if the campaign fails and the wider endeavour never even begins in earnest.

(And I sincerely hope that those who are apparently so outraged by GPL violations actually support organisations seeking to educate and correct companies who commit such violations.)

“You could buy a top-end laptop for that price!”

Sure you could. But this isn’t about a crowd-funding campaign trying to magically compete with an optimised production process that turns out millions of units every year backed by a multi-billion-dollar corporation. It is about highlighting the possibilities of more scalable (down to the economically-viable manufacture of a single unit), more sustainable device design and construction. And by the way, that laptop you were talking about won’t be upgradeable, so when you tire of its performance or if the battery loses its capacity, I suppose you will be disposing of it (hopefully responsibly) and presumably buying something similarly new and shiny by today’s measures.

Meanwhile, with EOMA68, the computing part of the supposedly overpriced laptop will be upgradeable, and with sensible device design the battery (and maybe other things) will be replaceable, too. Over time, EOMA68 solutions should be competitive on price, anyway, because larger numbers of them will be produced, but unlike traditional products, the increased usable lifespans of EOMA68 solutions will also offer longer-term savings to their purchasers, too.

“You could just buy a used laptop instead!”

Sure you could. At some point you will need to be buying a very old laptop just to have a CPU without a surveillance engine and offering some level of upgrade potential, although the specification might be disappointing to you. Even worse, things don’t last forever, particularly batteries and certain kinds of electronic components. Replacing those things may well be a challenge, and although it is worthwhile to make sure things get reused rather than immediately discarded, you can’t rely on picking up a particular product in the second-hand market forever. And relying on sourcing second-hand items is very much for limited edition products, whereas the EOMA68 initiative is meant to be concerned with reliably producing widely-available products.

“Why pay more for ideological purity?”

Firstly, words like “ideology”, “religion”, “church”, and so on, might be useful terms for trolls to poison and polarise any discussion, but does anyone not see that expecting suspiciously cheap, increasingly capable products to be delivered in an almost conveyor belt fashion is itself subscribing to an ideology? One that mandates that resources should be procured at the minimum cost and processed and assembled at the minimum cost, preferably without knowing too much about the human rights abuses at each step. Where everybody involved is threatened that at any time their role may be taken over by someone offering the same thing for less. And where a culture of exploitation towards those doing the work grows, perpetuating increasing wealth inequality because those offering the services in question will just lean harder on the workers to meet their cost target (while they skim off “their share” for having facilitated the deal). Meanwhile, no-one buying the product wants to know “how the sausage is made”. That sounds like an ideology to me: one of neoliberalism combined with feigned ignorance of the damage it does.

Anyway, people pay for more sustainable, more ethical products all the time. While the wilfully ignorant may jeer that they could just buy what they regard as the same thing for less (usually being unaware of factors like quality, never mind how these things get made), more sensible people see that the extra they pay provides the basis for a fairer, better society and higher-quality goods.

“There’s no point to such modularity!”

People argue this alongside the assertion that systems are easy to upgrade and that they can independently upgrade the RAM and CPU in their desktop tower system or whatever, although they usually start off by talking about laptops, but clearly not the kind of “welded shut” laptops that they or maybe others would apparently prefer to buy (see above). But systems are getting harder to upgrade, particularly portable systems like laptops, tablets, smartphones (with Fairphone 2 being a rare exception of being something that might be upgradeable), and even upgradeable systems are not typically upgraded by most end-users: they may only manage to do so by enlisting the help of more knowledgeable relatives and friends.

I use a 32-bit system that is over 11 years old. It could have more RAM, and I could do the job of upgrading it, but guess how much I would be upgrading it to: 2GB, which is as much as is supported by the two prototyped 32-bit architecture EOMA68 computer card designs (A20 and jz4775). Only certain 32-bit systems actually support more RAM, mostly because it requires the use of relatively exotic architectural features that a lot of software doesn’t support. As for the CPU, there is no sensible upgrade path even if I were sure that I could remove the CPU without causing damage to it or the board. Now, 64-bit systems might offer more options, and in upgradeable desktop systems more RAM might be added, but it still relies on what the chipset was designed to support. Some chipsets may limit upgrades based on either manufacturer pessimism (no-one will be able to afford larger amounts in the near future) or manufacturer cynicism (no-one will upgrade to our next product if they can keep adding more RAM).

EOMA68 makes a trade-off in order to support the upgrading of devices in a way that should be accessible to people who are not experts: no-one should be dealing with circuit boards and memory modules. People who think hardware engineering has nothing to do with compromises should get out of their armchair, join one of the big corporations already doing hardware, and show them how it is done, because I am sure those companies would appreciate such market-dominating insight.

An EOMA68 computer card with the micro-desktop device

An EOMA68 computer card with the micro-desktop device (courtesy Rhombus Tech/Crowd Supply)

Back to the Campaign

But really, the criticisms are not the things to focus on here. Maybe EOMA68 was interesting to you and then you read one of these criticisms somewhere and started to wonder about whether it is a good idea to support the initiative after all. Now, at least you have another perspective on them, albeit from someone who actually believes that EOMA68 provides an interesting and credible way forward for sustainable technology products.

Certainly, this campaign is not for everyone. Above all else it is crowd-funding: you are pledging for rewards, not buying things, even though the aim is to actually manufacture and ship real products to those who have pledged for them. Some crowd-funding exercises never deliver anything because they underestimate the difficulties of doing so, leaving a horde of angry backers with nothing to show for their money. I cannot make any guarantees here, but given that prototypes have been made over the last few years, that videos have been produced with a charming informality that would surely leave no-one seriously believing that “the whole thing was just rendered” (which tends to happen a lot with other campaigns), and given the initiative founder’s stubbornness not to give up, I have a lot of confidence in him to make good on his plans.

(A lot of campaigns underestimate the logistics and, having managed to deliver a complicated technological product, fail to manage the apparently simple matter of “postage”, infuriating their backers by being unable to get packages sent to all the different countries involved. My impression is that logistics expertise is what Crowd Supply brings to the table, and it really surprises me that established freight and logistics companies aren’t dipping their toes in the crowd-funding market themselves, either by running their own services or taking ownership stakes and integrating their services into such businesses.)

Personally, I think that $65 for a computer card that actually has more RAM than most single-board computers is actually a reasonable price, but I can understand that some of the other rewards seem a bit more expensive than one might have hoped. But these are effectively “limited edition” prices, and the aim of the exercise is not to merely make some things, get them for the clique of backers, and then never do anything like this ever again. Rather, the aim is to demonstrate that such products can be delivered, develop a market for them where the quantities involved will be greater, and thus be able to increase the competitiveness of the pricing, iterating on this hopefully successful formula. People are backing a standard and a concept, with the benefit of actually getting some hardware in return.

Interestingly, one priority of the campaign has been to seek the FSF’s “Respects Your Freedom” (RYF) endorsement. There is already plenty of hardware that employs proprietary software at some level, leaving the user to merely wonder what some “binary blob” actually does. Here, with one of the software distributions for the computer card, all of the software used on the card and the policies of the GNU/Linux distribution concerned – a surprisingly awkward obstacle – will seek to meet the FSF’s criteria. Thus, the “Libre Tea” card will hopefully be one of the first general purpose computing solutions to actually be designed for RYF certification and to obtain it, too.

The campaign runs until August 26th and has over a thousand pledges. If nothing else, go and take a look at the details and the updates, with the latter providing lots of background including video evidence of how the software offerings have evolved over the course of the campaign. And even if it’s not for you, maybe people you know might appreciate hearing about it, even if only to follow the action and to see how crowd-funding campaigns are done.

No MariaDB MaxScale in Debian

Norbert Tretkowski | 06:00, Thursday, 18 August 2016

Last weekend I started working on a MariaDB MaxScale package for Debian, of course with the intention to upload it into the official Debian repository.

Today I got pointed to an article that Michael "Monty" Widenius published two days ago. It explains the recent license change of MaxScale from GPL to BSL with the release of MaxScale 2.0 beta. Justin Swanhart summarized the situation, and I could not agree more.

Looks like we will not see MaxScale 2.0 in Debian any time soon...

Tuesday, 16 August 2016

Setting up a slightly more secure solution for home banking

Matthias Kirschner's Web log - fsfe | 12:04, Tuesday, 16 August 2016

Recently I helped some friends to set up a slightly more secure solution for home banking than running it in your default Web browser. In a nutshell, you set up a dedicated user under GNU/Linux. This user is then used solely to run a Web browser dedicated to home banking. Through ssh you can start it from your default user.

Money and remittance slips

First of all, open a terminal and run adduser homebanking (as root, or via sudo) to add the new user. Afterwards just enter a password and confirm it.

Switch to the just created user with su homebanking and type cd to go to the user's home directory.

Create a new directory for ssh with mkdir .ssh.

Then you create the file .ssh/authorized_keys2 in which you paste the content of your own users public ssh key (often .ssh/id_rsa.pub).
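
A small hedged note (defaults vary between distributions): depending on your umask, sshd's StrictModes check may refuse keys whose directory or file is writable by anyone but the owner, so it does not hurt to tighten the permissions while you are still the homebanking user:

# Restrict the SSH directory and the key file to the owner only.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys2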

Switch back to your local user and create a small shortcut sudo vi /usr/local/bin/homebanking-browser with the content:

#!/bin/bash
ssh -fX homebanking@vita chromium

You have to make it executable with sudo chmod a+x /usr/local/bin/homebanking-browser (the file is owned by root, and your normal user needs the execute bit).

The first time you run homebanking-browser, you should do it from a terminal, as you will be asked to approve the SSH key.

That is it. As some friends use Gnome, and I first had to figure out how to add it to the applications menu, here are those steps as well: Go to the applications folder cd /usr/share/applications/ and create a file sudo vi homebanking-browser.desktop with the following content:

[Desktop Entry]
Name=Homebanking-Browser
GenericName=Browser for Homebanking
Comment=Use this browser to do your bank transfers
Exec=/usr/local/bin/homebanking-browser
Icon=terminal
Terminal=false
Type=Application
Categories=Office;
StartupNotify=true

Once logged out of Gnome and in again, you should be able to run homebanking-browser from your application launcher.

If you know about better solutions which work under all GNU/Linux distributions, please let me know.

Saturday, 13 August 2016

Free Software PDF-Campaign: It isn’t over until it is over

André on Free Software » English | 20:02, Saturday, 13 August 2016

After FSFE decided to officially end the PDF-campaign, the situation in the Netherlands still asked for action.

Having translated the Free Software PDF Readers story into Dutch, I recently stumbled upon a proprietary PDF ad on Digid.nl. This is a website of the Dutch government and its log-in technology is used by a lot of websites in this country – both from the government and from non-government organisations like health insurance companies.

By e-mail I politely asked the authorities to withdraw the ad. Within two weeks I was phoned by a friendly civil servant who informed me that they had removed the ad.

Friday, 12 August 2016

Free Software in the Berlin election programs

Matthias Kirschner's Web log - fsfe | 00:04, Friday, 12 August 2016

On 18 September there will be local elections in Berlin. I skimmed through some party programs and made notes, which I shared with our Berlin group. Our group sent official questions to the parties and will publish an evaluation before the election. In this short article I will focus on the positive aspects in the programs for software freedom.

The Berlin Abgeordnetenhaus around 1900

Since 2009 the FSFE has sent official questions to political parties before elections. Our teams – consisting of dedicated volunteers and staffers – sent out questions to parties and individual politicians, compared the answers with the party programs and other positions, and wrote evaluations. If this activity passed you by, have a look at the FSFE's ask your candidates page.

The Berlin SPD does not mention Free Software/Open Source Software, but on the positive side they have a clear position on open educational resources (OER), and we hope they will consider the FSFE's position paper on OER once they are implementing it.

Mit dem Open Educational Resources (OER)-Projekt entwickeln wir freie Lehrmittel, die durch Lernende und Lehrende kostenfrei genutzt und verbreitet werden können. Ab dem Schuljahr 2017/18 werden wir den flächendeckenden Austausch von OER-Mitteln ermöglichen sowie den Anteil der verfügbaren OER-Lehrmittel weiter ausbauen.

In the election program of the Berlin CDU I did not find anything about Free Software. There was one point about end-to-end encryption, but considering the context of that statement, I am not sure if they really mean end-to-end – and it could be implemented with proprietary software. Unfortunately, in past German elections the CDU was not that keen to add anything about Free Software to their election programs.

The Green Party Berlin, which in most national elections was in favour of Free Software, also wants to push for Free Software in the public administration in Berlin. They write that the "foundation in future needs to be Free Software, which provides independence, security, and more flexibility".

Berlin braucht schnell eine IT-Strategie für die Verwaltung mit vorausschauender Planung und einem zentral koordinierten Controlling. Grundlage muss zukünftig Open-Source-Software sein – sie schafft Unabhängigkeit, Sicherheit und eine größere Flexibilität.

For electronic records, the standard should be Free Software:

Auf Basis von einheitlichen Arbeitsprozessen führen wir die elektronische Aktenführung (eAkte) verbindlich ein. Dabei machen wir den Einsatz von offener und freier Software sowie ressourcenschonender Informationstechnik (Green IT) bei hoher IT-Sicherheit zum Standard.

And they also mention that the use of Free Software goes without saying.

E-Government, vernetzte Mobilität und digitale Steuerungstechniken sowie der Einsatz von freier Software werden immer mehr zur Selbstverständlichkeit.

Die Linke Berlin has a clear position on Free Software. They say that "the public administration should switch to Free Software".

Auch die technische Arbeitsplatzausstattung muss endlich den modernen Anforderungen gerecht werden. Das betrifft sowohl die Hardware als auch die Software. Die öffentliche Verwaltung soll auf Open Source Software umgestellt werden. Vor allem braucht es endlich die technischen Voraussetzungen, um den Service einer digitalen Verwaltung umsetzen zu können.

Furthermore they encourage the use of OER and also highlight the need for Free Software in education.

In diesem Bereich darf das Feld nicht privaten Unternehmen, Verlagen und Bildungsanbietern überlassen werden. Wir setzen uns für die Nutzung und die Erstellung offener Lehr- und Lernmaterialien (Open Educational Ressources, OER) sowie den Einsatz von Open-Source Software ein.

Although the German Pirate Parties wrote a lot about Free Software in the past, this year the Berlin Pirate Party has just one point about planned obsolescence. This is connected with Digital Restriction Management, as software is often used for that. They want to make sure that the state of Berlin and the consumer protection associations get enough resources to work against planned obsolescence, that there is a label indicating how long the product and its software support will last, and that Berlin's administration only procures products carrying that label.

Hersteller werden angehalten, ihre Produkte mit einem voraussichtlichem “Haltbarkeitszeitraum” zu versehen. Dieses Haltbarkeitsdatum beinhaltet sowohl das physische als auch softwareseitige Leben eines Produktes. Auch müssen die Supportzeiträume (Softwareupdates etc.) auf dieser Kennzeichnung angegeben werden. [...] Wir setzen uns weiterhin dafür ein, dass die öffentliche Hand nur Produkte mit einer von der Verbraucherzentrale überprüften Haltbarkeit erwirbt.

A very strong demand for Free Software comes from the Berlin FDP, which is a real improvement for the party. They write that "Open Source Software offers many advantages for numerous services" and they demand "the use of Open Source Software". They add that "the use of proprietary software in the public administration should only be possible in well-founded individual cases".

Open Source Software bietet Vorteile bei zahlreichen Diensten. Wir fordern deshalb den Einsatz von Open-Source-Software. Der Einsatz proprietärer Software in der Verwaltung sollte nur in begründeten Einzelfällen stattfinden.

I did not yet manage to have a look at the election programs of the other parties. If you find something about Free Software there, or if I missed something in the ones mentioned above, please let me know.

There is still a lot of work leading up to the election. Our Berlin group will evaluate the answers to our questions. We will continue to add positive examples on our wiki page. And as with past elections, I hope that people in Berlin will ask their candidates questions about Free Software and share the answers with us.

If you do not have time to evaluate election programs, or ask your candidates questions, you can support such activities by becoming a sustaining member of the FSFE!


Update 2016-08-12 11:46: The Friedrichshain Kreuzberg Pirate Party pointed out to me that they have a separate program, which contains several points about Free Software: The district should switch to Free Software for the operating system, programs, and specialised programs for the public administration.

Der Bezirk soll Pilotbezirk für die Umstellung der kompletten Verwaltung auf quelloffene Software werden. Dazu gehören Betriebssystem, Anwendungsprogramme, Fachverfahren sowie der dafür erforderliche Support.

Employees in the district offices should be given the necessary software (GnuPG) and training so that they can, where possible, communicate with the district's residents in encrypted form.

Die Mitarbeiter*innen der Bezirksämter sollen durch Integration der notwendigen Software (GNUPG) und Schulung in die Lage versetzt werden, nach Möglichkeit verschlüsselt mit den Bewohnern des Bezirks zu kommunizieren.

Furthermore, the adult education centre should offer courses on the encryption of files and e-mail, Free Software, passwords, and secure Web surfing.

Wir drängen darauf, dass die Volkshochschule als Bildungsort endlich Angebote für Fragen rund um Verschlüsselung von Dateien und Rechnern, sicheren Mailverkehr, freie Software, Passwörter und sicheres Surfen etabliert.

Thursday, 11 August 2016

The woes of WebVTT

free software - Bits of Freedom | 09:18, Thursday, 11 August 2016

Now this is a story all about how My life got flipped-turned upside down And I'd like to take a minute Just sit right there I'll tell you how I came to know the woes of Web-VTT.

It started with the FSFE's 15th birthday1, for which we produced an awesome video2. The FSFE has a truly amazing translators team (which you can and should join3) which quickly produced subtitles for the video in German, English, Greek, Spanish, French, Italian, Dutch, Turkish and Albanian.

Having a healthy distrust of too much client-side JavaScript on our web pages, we were delighted to get an opportunity to try WebVTT, a standard for displaying subtitles for a video using the web browser's native support. The standard has been around since ~2011 and most browsers do support it today.

As it turns out, it was not as easy as I thought to just add the WebVTT files to our HTML code. Here are the peculiarities I encountered along the way of making this work:

  1. WebVTT files look remarkably like SRT files (another standard) and our translators first created SRT files which we then turned into VTT files. In practice, it looks as if you just need to add a "WEBVTT" header to the file and be done. But the devil is in the details: turns out that the time format is slightly different in SRT and VTT. Where a time format of "00:00:05,817" works well in SRT, the equivalent in VTT should be "00:00:05.817". See the difference? It took me a while to see it. (A small conversion and checking sketch follows after this list.)

  2. We're serving the video and the VTT files from a different server than our normal web server. This wasn't a problem with the video, but it turns out that browsers seem to behave differently when loading the VTT files and the fact that they were on a different server triggered cross origin requests which by default are not allowed for security reasons. Updating our download server to allow cross origin requests, and updating the HTML code to be explicit about the cross origin request solved that problem.

  3. Not only do you need to allow cross origin requests, but the VTT files need to have exactly the right mime type (text/vtt) when served from the web server. If they don't, the subtitles will be silently but ruthlessly ignored.

  4. As I mentioned, we have a lot of translations of the subtitles, but how do you actually get to see the different subtitles? Turns out most browsers just have an "ON" and "OFF" for subtitles. Here's what I learned:

    • Internet Explorer and Safari both present the user with a menu to select between the different subtitles. All good.
    • Chrome and Opera don't allow you to change the subtitle but instead make a best-effort match between the browser language and the available subtitles. Okay, but why not let the user change it too?
    • Firefox sucks. I'm sorry Mozillians, but from everything I've read, it seems Firefox ignores the browser language when selecting a subtitle and instead picks the default subtitle. It also does not allow you to change which subtitle is used, instead relying on the web page including some JavaScript and CSS to style its own menu.
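
To tie points 1 to 3 together, here is a small shell sketch of roughly what our conversion and sanity check amounted to. The file names and the download URL are placeholders, and the exact CORS/mime-type setup depends on your web server:

    # Turn an SRT file into a VTT file: add the "WEBVTT" header and
    # change the decimal comma in the cue timestamps into a dot.
    printf 'WEBVTT\n\n' > video.en.vtt
    sed '/-->/ s/,/./g' video.en.srt >> video.en.vtt

    # Check that the file is served as text/vtt and that cross origin
    # requests are allowed (otherwise browsers silently ignore the track).
    curl -sI -H "Origin: https://fsfe.org" https://download.example.org/video.en.vtt \
        | grep -iE 'content-type|access-control-allow-origin'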

So much for standards :-)


Interview with Karl Berry (from 2002)

free software - Bits of Freedom | 08:11, Thursday, 11 August 2016


Karl, you've been involved in one or another way with the GNU Project for quite some time now, working mostly with various aspects of TeX. Could you begin by telling us how you got involved in computers and what the first computer was that you wrote software for?

In 1977, my mom and I lived in California for the year so she could get a master's degree (in music) at Stanford. So I went to a middle school in Palo Alto, and they, being a fairly advanced school, had a programming class in BASIC and several yellow-paper teletypes in a back room. I never saw the actual computer -- it was at the high school, I think. The first program I specifically remember writing in that class was factorial computation (the FOR loop assignment), and it didn't take many iterations before the program halted with an overflow error :).

We moved back to our regular home in a tiny upstate New York town near the Canadian border the next year. As a faculty kid, I was lucky to be able to hang out in the local college's computer room, where they had a modem connection (using an acoustic coupler) to the Honeywell 66/40 computer at Dartmouth College, running a homegrown operating system originally named DTSS.

For the GNU Project, you've been working with TeX and fonts. Indeed, you currently maintain the texinfo and fontutils packages. But how did you first learn of the GNU Project, and what was it that led up to you being accepted into the FSF as an employee for some years?

I first learned about GNU in 1985 or so, via Paul Rubin (phr), who knew rms. I met rms shortly after that when he visited California and stayed overnight with my then-partner Kathryn Hargreaves and I.

We moved to Massachusetts to study with Bob Morris at the University of Massachusetts at Boston. We invited rms to give a talk at umb, and generally stayed in touch. After we got our degrees a couple of years later, we asked rms if he would hire us -- and he did! (After looking at some sample programs.) We were psyched.

During your time with the FSF, you also helped out to get Autoconf to successfully configure TeX, which I'm sure was no small task, and you also did some work on Ghostscript. What's your strongest memory from working with the FSF?

Although those projects were fun and valuable, my strongest technical memory is actually working on regex.c. POSIX was standardizing regular expressions at the time, and we implemented about 10 different drafts as the committee came out with new ones, while keeping compatibility with Emacs and all the other programs that used it. It was a nightmare. We ended up with regex.c having as many lines of debugging statements as actual code, just so we could understand what it was doing.

I've since looked at a bunch of other regex packages and it seems basically impossible to implement the regular expressions we've grown used to in any reasonable way.


My strongest nontechnical memory is rms's vision of free software and how clearly he communicated it and how strongly he held (and holds) to it. It was and is an inspiration to me.

What was it that got you interested in TeX?

Typography and letterform design have been innately interesting to me for as long as I can remember. In the 1980's, TeX and Metafont were just hitting their stride, and Kathryn and I designed and typeset numerous books and other random items with them. Don Knuth's projects are always fascinating on many levels, and it was natural to get pulled in.

Another thing you've been working on is web2c, which I'm sure that most people have never heard of, let alone know anything about even if they've heard something or another about it. Could you venture into the depth of knowledge and enlighten us?

Web2c is the core of the Unix TeX distribution, which comprises the actual 'tex', 'mf', and other programs developed as part of Knuth's projects at Stanford. Knuth wrote TeX in "web", his system for so-called literate programming, in this case a combination of TeX and Pascal. The result can be (has been) printed as a book as well as compiled into a binary.

Web2c (named web-to-c at the time) converts these web sources into C. It was originally written by Tom Rokicki, based on the original change files for the Pascal TeX on Unix, which were written by Howard Trickey and Pavel Curtis. Web2c was later extended substantially by Tim Morgan. I maintained it for a number of years in the 1990's, and Olaf Weber is the current maintainer.

The GNU Project has taken a lot of heat for using info documentation instead of standard manpages or later, DocBook or some other system for documentation. When did the GNU Project start using texinfo and what was the motivation? Do you have any comments on the newer systems for maintaining documentation?

rms invented Texinfo in 1985 or so, based on a print-only predecessor called BoTeX, which had antecedents in Bolio (at MIT) and Scribe (at CMU). At that time, there was no comparable system (as far as I know) that supported printable and on-line manuals from the same source.

Of course man pages existed, but I don't think anyone claims that man pages are a substitute for a full manual. Even Kernighan, Ritchie, and Thompson wrote troff documents to supplement the man pages, for information that doesn't fit into the NAME/SYNOPSIS/DESCRIPTION format.

Man pages certainly have their place, and despite being officially secondary in the GNU project, almost all GNU distributions do include man pages. There is a GNU program called help2man which can create respectable man pages from --help output, thus alleviating the burden of maintaining a separate source.

As far as DocBook and other XML-based systems go, I have nothing against them, but I think that Texinfo source is friendlier to authors. XML gets awfully verbose, in my experience. I've also noticed that O'Reilly books never contain internal references to explicit page numbers, just whole chapters or sections; I don't know where the deficiency lies, but it amuses me.

It seems to me that the ad hoc nature of Texinfo offends the people who like to create standards. If what you want to do is write reasonable documentation, Texinfo will get the job done with a minimum of fuss.

On a related note, people have occasionally suggested that the Info format is outdated now and we should just use HTML. I still find Info useful because I can read documentation without leaving Emacs. It is also handy to have manuals in (essentially) plain text, which HTML is not.

When you left the FSF as an employee, where did you go and what have you been up to these latest years? What do you work with today, and what does your future plans look like?

Aside from continuing to volunteer for the FSF, I worked as a programmer, system administrator, webmaster, release engineer, and various other odd computer jobs at Interleaf, Harvard, and now Intuit, due mostly to Paul English, a good friend I met at UMB. A significant part of all my jobs has been to install and maintain GNU and other free software, which has made me happy.

I expect to be able to leave my current job this fall and devote more time to volunteer work and my family.

What other hobbies, besides computing, do you have? I know you find Antarctica interesting. Would you mind sharing why? Any plans to try to visit some day?

Other hobbies - I read anything I can get my hands on (some favorite authors: Andrew Vachss, Barbara Kingsolver, Daniel Quinn, Stephen King, Terry Tempest Williams), and attempt to play piano (Bach and earlier, with some Pärt thrown in).

As for Antarctica, its untouched nature is what appealed to me most, although of course that quality has sadly diminished as human population continues to explode. I have no plans to visit there since tourism is very destructive to its fragile ecology (not to mention it is cold!).

And finally, I must ask you to convey one of your favourite recipes to us (and no, it can not be sour cream chocolate chip cake or chocolate chip cookies with molasses). :-)

Ok, how about some dinner to go before the desserts: Hungarian pork chops (with apologies to the vegetarians in the crowd). First we start with a little note on paprika courtesy of Craig Claiborne (author of the New York Times cookbooks):

It is ruefully true that American cooks by and large have only the most pallid conception of what PAPRIKA is. The innocuous powder which most merchants pass on to their customers as paprika has slightly more character than crayon or chalk.

Any paprika worthy of the name has an exquisite taste and varies in strength from decidedly hot to pleasantly mild but with a pronounced flavor.

The finest paprika is imported from Hungary and logically enough it is called Hungarian paprika or rose paprika. This is available packaged in the food shops of most first-rank department stores and fine food specialty shops. It is also available in bulk in Hungarian markets. [Not having any Hungarian markets in Salem, we get it from the bulk section of the organic grocery stores around here ... I don't know for a fact that it's from Hungary but it's definitely got more character than a Crayola :)]


Here's the recipe:

  • 6 pork chops
  • salt & pepper
  • 3 tbsp butter
  • 1/2 cup onion, chopped
  • 1 clove garlic, minced
  • pinch of thyme
  • 1 bay leaf
  • 3/4 cup chicken stock or dry white wine [we use a chicken bouillon cube, sorry craig]
  • 1 cup sour cream [best with "full fat"]
  • 1 tbsp paprika [or to taste, we usually use about 1/2 tbsp]

  1. Trim the fat from the chops [or not, we don't :)]. Sprinkle the meat with salt and pepper and saute in the butter in a skillet. [Takes about 15min on our stove, at medium heat; I usually get both sides just starting to brown. I chop up onion and stuff for step 2 while waiting.]
  2. Add the onion, garlic, thyme, and bay leaf and saute over medium-high heat until the chops are well browned on both sides. [15-20min, this is most of the cooking.]
  3. Lower the heat [quite a bit, but more than simmer; 2-3 on our stove] and add the chicken stock or wine [or bouillon cubed water in our case]. Cover and cook 30min. [I turn them over halfway through.]
  4. Remove the chops to a warm serving platter and keep warm. Reduce the pan liquid by half by boiling [or whatever seems appropriate, sometimes I don't need to boil anything away, sometimes I do]. Discard the bay leaf [or not].
  5. Add the sour cream and paprika to the skillet and heat thoroughly but do not boil [maybe 5min at medium heat]. Pour the sauce over the meat and serve hot.


We make rice on the side and use the sauce for both. Our best meal.

Wednesday, 10 August 2016

The FSFE's 15th Anniversary Video

Matthias Kirschner's Web log - fsfe | 11:48, Wednesday, 10 August 2016

As you might have read, the FSFE is celebrating its 15th anniversary this year at the FSFE Summit. I am already looking forward to meeting many of you in Berlin from 2-4 September for the celebration.

Video: https://www.youtube.com/embed/uK3eyOHcc7E

Besides the preparations for the summit, we worked together with BrandAnimate and today published a 40-second video about the FSFE. For people who prefer those services, or who would like to embed the video on their own websites, we also uploaded it to Vimeo and YouTube to make sharing easier. We hope it will help you explain Free Software to your colleagues, friends, and family. Our suggested hashtag is #FSFE15.

At the moment of writing we have subtitles for Albanian, Dutch, English, French, German, Greek, Italian, Spanish, and Turkish. If your language is not covered yet, please download one of the subtitle files and send a translation to our translation team.

Furthermore, we updated the FSFE's timeline to give an overview of our achievements since 2001. Feedback on the timeline is also very welcome.

Interview with Chet Ramey (from 2002)

free software - Bits of Freedom | 11:42, Wednesday, 10 August 2016

In 2002–2003 I did a series of interviews with people from the free software community, focusing on some of the members of our community who had been around since the very early days. One of the interviews I did was with Chet Ramey, one of the early contributors to and maintainers of Bash. Here is that interview.

Two software projects you've been involved in over the years, and continue to maintain for the GNU Project, are bash and readline. How did you get involved in writing those?

Well, it started back in 1989.

Brian Fox had begun writing bash and readline (which was not, at that time, a separate library) the year before, when he was an employee of the FSF. The story, as I recall it, was that a volunteer had come forward and offered to write a Bourne shell clone. After some time, he had produced nothing, so RMS directed Brian to write a shell. Stallman said it should take only a couple of months.

At that time, in 1989, I was looking around for a better shell for the engineers in my department to use on a set of VAXstations and (believe it or not) some IBM RTs running the original version of AIX. Remember, back then, there was nothing else but tcsh. There were no free Bourne shell clones. Arnold Robbins had done some work, based on work by Doug Gwyn and Ron Natalie at the Ballistics Research Laboratory, to add things like editing and job control to various versions of the Bourne shell, but those changes obviously required a licensed copy to patch. Ken Almquist was in the midst of writing ash, but that had not seen the light of day either. Charles Forsyth's PD clone of the V7 shell, which became the basis of the PD ksh, was available, but little-known, and certainly didn't have the things I was looking for: command-line editing with emacs key bindings and job control.

I picked up a copy of bash from somewhere, I can't remember where, and started to hack on it. Paul Placeway, then at Ohio State (he was the tcsh maintainer for many years), had gotten a copy and was helping Brian with the readline redisplay code, and I got a copy with his changes included. (If you look at the tcsh source, the BSD `editline' library, and the readline source, you can tell they all share a common display engine ancestor.) I took that code, made job control work, fixed a bunch of bugs, and (after taking a deep breath) sent the changes off to Brian. He seemed impressed, incorporated the fixes into his copy, and we began corresponding.

When Brian first released bash to the world, I fixed all the bugs people reported and fed the fixes back to Brian. We started working together as more or less co-maintainers, since Brian was looking for someone to take over some of the work. I wrote a bunch of code and the first cut of the manual page.

After a while, Brian moved on to other things, and bash development stalled. I still needed to support bash for my department, and I liked working on it, so I produced several `CWRU' bash releases. Brian came back into the picture for a while, so we folded our versions together, resulting in bash-1.10. By the time bash-1.14 was released, Brian had eased out of the development, and I took over.

Bash is based on the POSIX Shell and Utilities Standard. But this standard was published in late 1992 after six years of work, several years after the first bash. How many of bash's ideas went into the final POSIX standard, and how does development of the standard continue today?

There were contributions from both directions while the POSIX standard (at least 1003.2) was being developed. RMS was a balloting member of the IEEE 1003.2 working group, and I believe that Brian was too, at least while he was employed by the FSF. I was on several of the mailing lists, and contributed ideas and reviews of proposals floated on the lists before being formalized.

Bash had test implementations of many of the newer features from POSIX.2 drafts, starting with about draft 9 of the original 1003.2, and I think that the feedback we provided, both as implementors and users, was useful to the balloting group. We didn't agree with all of the standard's decisions; that's why bash has a `posix mode' to this day.
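
For reference, a minimal sketch of how bash's posix mode is switched on (the script name is made up):

  # Run bash in POSIX mode for one script:
  bash --posix myscript.sh
  # Or enable it in an already-running shell:
  set -o posix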

The standards process continued with The Open Group and their Single Unix Specification. The IEEE chartered several working groups to modify the standards and produce new ones, and eventually in the fullness of time, the two groups combined their efforts. The result of that collaboration is the just-released SUS v3, which is also known as Posix-2002. Check out http://www.opengroup.org for more details and a web-accessible version of the standard.

When it comes to readline, it is developed together with bash, but it has found a tremendous number of uses outside of bash. Like bash, it is licensed under the GPL. Have you ever been approached by a proprietary software developer wanting to have readline relicensed so that they could use it in their software?

Sure, all the time. It's not just commercial entities, either. Groups who release their code under a BSD or X11-style license have inquired as well.

The most common request is, of course, that the LGPL replace the GPL. I let the FSF handle those. Their point of view, as I understand it, is that for libraries without other compatible implementations, the GPL is the preferred license.

Of course, bash isn't the only Free Software shell available. The shell zsh, for example, prides itself in combining features from bash, ksh and tcsh. What do you think of those other shells?

zsh, pd-ksh, ash/BSD sh -- they're all fine. The projects have slightly different focuses, and in the true spirit of free/open-source software, scratch a slightly different itch.

There's been a fair bit of feature migration between the different shells, if little actual code. I'm on the zsh mailing lists, and the zsh maintainers, I'm sure, are on the bash lists. (After I answered a question about bash on a zsh list, there was a follow-up asking in a rather suspicious tone whether or not "we are being monitored".) I've also seen a statement to the effect that "the only reason zsh has [feature x] is that bash has it". The bash programmable completion code was inspired by zsh. I've certainly picked up ideas from pdksh also.
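
As an aside, a tiny sketch of what bash's programmable completion looks like in practice (the command name "deploy" and its word list are hypothetical):

  # Complete the first argument of "deploy" with one of three fixed words:
  complete -W "staging production rollback" deploy
  # Now typing "deploy <TAB>" offers staging, production and rollback.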

Since bash is so much used in scripting on various platforms, many people depend on its functionality remaining compatible between revisions. Has there ever been a time when you were tempted to introduce a new feature, but decided against it for the sake of compatibility?

Once or twice. People have been burned by new features in the past -- for instance, when the $"..." and $'...' quoting features were introduced. The previous behavior of those constructs was undefined, but Red Hat used them in some of their system scripts and relied on the old behavior. That's why the COMPAT file exists.
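
For readers who have not met these constructs, a short illustration (the strings are made up):

  # $'...' expands backslash escape sequences (ANSI-C quoting):
  echo $'first line\nsecond line'
  # $"..." marks a string for locale-specific translation:
  echo $"Hello, world"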

Usually, though, the question is whether or not old bugs should be fixed because people have come to depend on them. The change in the grammar that made {...;} work like POSIX says it should was one such change; changing the filename expansion code to correctly use locales was another. I generally fall on the side of correctness.

Other times I've hesitated to introduce new features because I was not sure that the approach was correct, and did not want to have to support the syntax or feature in perpetuity. Exporting array variables is one.

A final example is the DEBUG trap. Bash currently implements that with the ksh88 semantics, which say that it's run after each simple command. I've been kicking around the idea of moving to the ksh93 semantics, which cause the trap to be run before each simple command. Existing bash debuggers, including the rudimentary one in the distribution, rely on the current behavior.
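
A minimal sketch of setting a DEBUG trap; as described above, whether it fires before or after each simple command depends on which semantics the shell implements:

  # Report something around every simple command, then remove the trap:
  trap 'echo "DEBUG trap fired"' DEBUG
  date
  trap - DEBUG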

My preferred method of changing bash is to make the new behavior the default, and have a configuration option for the old behavior.

Speaking of new features, how is the development going with bash today? Do you introduce new features regularly, or has bash reached a state of maturity?

Bash development is proceeding, and new features are still being introduced. Reaching a state of maturity and introducing new features are not mutually exclusive. The new features are both user-visible and internal reimplementations.

I picked up my release methodology from the old UCB CSRG, which did BSD Unix: I introduce new features in even-numbered releases and emphasize stability in odd-numbered releases.

Programmable completion and the arithmetic for statement, for instance, were introduced in bash-2.04. Since the release of bash-2.05a, I've added shell arithmetic in the largest integer size a machine supports, support for multibyte characters in bash and readline, and the ability to use arbitrary date formats in prompt expansions. There will be some more new stuff before the next release.
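
To make two of those features concrete, a small sketch (the prompt format is just an example; exact availability depends on the bash version):

  # The arithmetic for statement introduced in bash-2.04:
  for ((i = 0; i < 3; i++)); do echo "iteration $i"; done

  # A date format in the prompt via the \D{...} expansion:
  PS1='\D{%Y-%m-%d %H:%M} \$ '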

Two programs you've authored yourself, but that are not part of the GNU project, are CE (Chet's Editor) and the EMM mail manager. Could you tell us briefly what prompted you to write them and if you continue to develop them today?

Well, let's see.

ce is a descendant of David Conroy's original microemacs. I got interested in editors after I took a Software Engineering class my sophomore year in college in which we studied the `ed' clone from Kernighan and Plauger's Software Tools in Pascal. That interest was reinforced when I got into a discussion with a campus recruiter who was interviewing me my senior year about the best data structure to use to represent the text in an editor. It's got its own following, especially among Unix users at CWRU, but I've never publicized it. I still work on ce, and it's up to version 4.4. I have a version 4.5 ready for release, if I ever get around to doing the release engineering.

emm was my Master's project. It was driven by the desire to make a mail manager similar in features to the old MM-20 mail system, which I remember fondly from my DEC-20 days. I wanted to bring that kind of functionality to Unix, since at the time I started on it my department contained a number of old DEC-20 people moving to Unix. It's got a few users, though I don't think anyone's ever fetched it from ftp.cwru.edu. I still develop it, and find myself much more productive with it than any other mail program. There were days I'd get thousands of mail messages a day -- thankfully, in the past -- and it's easy to manage that volume of mail once you're used to emm.

When was your first interaction with computers and what was it about them that made you get involved in writing your own code for them?

Believe it or not, I really didn't use computers until I got to college. And when I got there, I really didn't have any intention of concentrating in computing. I met the right people, I guess, and they were influential. At that time, lots of guys in my fraternity were studying Computer Engineering, and I found it fascinating. Still do.

You currently live in Cleveland, Ohio, where you work at Case Western Reserve University. This, incidentally, is also where you graduated with a B.S. in Computer Engineering and from where you received your Master's in Computer Science almost nine years ago. If I calculate this correctly, you've now worked about 13 years at CWRU. What's so great about working there that has made you stay all these years?

It's been longer than that. Counting my time as a student, I've been working for essentially the same department for 17 years.

I guess what I find attractive about working at CWRU has to do with the university environment: the informality, the facilities available at a university, and the academic community. The people I work with are great, too, and a bunch of them have been at CWRU for years and years.

One of your spare time hobbies is sports. Everything from sailing to golfing. No, wait, golfing is one of the sports you really dislike. Why's that?

It just has no appeal for me. I can't think of a sport I find more boring (well, there's curling, but nobody knows what that is). I'm sure people feel the same way about sports I like, too. Think about how you'd feel about watching an America's Cup yacht race on television.

Sports you do enjoy, though, include sailing, basketball and football. Did you ever get around to participating in them yourself?

I still play basketball once or twice a week. I used to play three or four times a week, back before I had back surgery, but haven't been able to find the time to play that much since.

I do like to sail, but I don't own a boat (a sailboat is a big hole in the water you pour money into). I had a chance to crew for someone last summer, but things never came together.

Between your two kids, work, bash, readline, five cats, a dog, fish and sports, do you ever find any spare time, or do you have some trick to share with us about how you manage to find time for everything you like?

No, I have no magic to share. I have the same struggles to find time to do everything as everybody else. I have it easy compared to my wife, though. I'm amazed that she packs as much into her days as she does.

Finally, where did you go for vacation last year, and what are your plans for this year's vacation?

Last year? Let's see. I actually went on two vacations.

On the first, I helped chaperone a group of high school students on a trip to Europe. Every other year, my sister-in-law, a high-school Latin teacher in Pennsylvania, gets together with the German teacher at her school and takes a group of students to Italy and Germany. We've also been to Austria, Switzerland, France, and England on these trips. They're both early-to-bed, early-to-rise people, so I get to handle the late-night incidents, like the time a student broke her hotel room key off in the door, or when one plugged in a hair drier and blew the fuses for an entire hotel floor.

The second was a family vacation. My parents, brothers, sister-in-law, niece and nephew, and my wife and kids spent a week in a rented house in North Carolina. That was the first family vacation we'd had in a long time, and it was great.

This year? I'm not sure.

Liberate your Reviews

Marcus's Blog | 09:40, Wednesday, 10 August 2016

I guess all of you have at some point written a product review on a popular platform like Yelp, IMDB, TripAdvisor, Amazon.com, Goodreads or Quora. Have you ever asked yourself who owns it? It's you, and now you can finally regain control over it.

Just install the Free Your Stuff! Chrome extension and access all of your reviews on the supported platforms. You can decide to either download them or share them under a Free License.

lib.reviews

With lib.reviews this has been taken one step further. It is a free and open platform for reviewing absolutely anything, in any language.

If you want to get started, just stick with the Chrome extension. lib.reviews currently requires an invitation code, which you can request, for example, via IRC in #libreviews on irc.freenode.net.
