Planet Fellowship (en)

Friday, 26 August 2016

IRC cloaks on Freenode available

free software - Bits of Freedom | 07:54, Friday, 26 August 2016

IRC cloaks on Freenode available

If you're a Freenode user and an FSFE Fellow or volunteer, perhaps a regular in our #fsfe channel, you now have the option of getting an FSFE-affiliated cloak. For those of you who don't know, an IRC cloak is a replacement for your IP address or domain name when people query your name on IRC, so instead of showing up as "jonas at 192.243.8.174" (as an example), you'd show up as "jonas at fsfe/jonas"1.

I documented this here: https://wiki.fsfe.org/TechDocs/FellowshipServices#IRC

If you want a cloak, you need a NickServ-registered name, an FSFE account (by being a Fellow or a volunteer2), and you need to let me know you want one: I'll communicate this to the Freenode staff, who will activate your cloak.
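If you have not registered your nick with NickServ yet, registration looks roughly like this (a sketch; the password and e-mail address are placeholders, and /msg NickServ HELP REGISTER shows the exact syntax):

/msg NickServ REGISTER YourPassword you@example.org
/msg NickServ IDENTIFY YourPassword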

As there may be quite a few cloak requests in the beginning, I'll collect requests and send them to Freenode on a ~weekly basis. Get in touch at jonasatfsfe.org :)

  1. Please note, though, that while a cloak gives you some privacy, it does not give you perfect privacy: it's generally not much of a problem for someone to figure out your IP address anyway.

  2. See https://wiki.fsfe.org/TechDocs/AccountCreation for more information.

Tuesday, 23 August 2016

St. Olaf's GNU

free software - Bits of Freedom | 18:41, Tuesday, 23 August 2016

St. Olaf's GNU

Over the past nearly 20 years, I've spoken to, and interviewed, many hackers from the free software movement's early days. One of my pet projects, for which I'm gathering more data at the moment, is writing a book tentatively called "Untold stories & unsung heroes". Here's a draft excerpt of one of those stories. Do you want to read more? Well, I guess you should then fund me to write the book properly :-)


The VAX 11/780 at St. Olaf's College in Minnesota wasn't equipped to handle 30 concurrent users running GNU Emacs, and barely anyone used it. But after stumbling over a tape reel from the "Unix Users of Minnesota" group containing Emacs, Mike Haertel was hooked.

While Emacs had been around for a while, and the GNU C Compiler reached St. Olaf's on another tape reel in 1987, it was grep, join, and diff which became Mike's project for a few months in the summer of 1988 when he joined fellow St. Olaf's student Pete TerMaat as a programmer for the FSF in Boston. It wasn't uncommon for the FSF to hire programmers for the summer, and some of them stayed on for long after. With Mike working on the utilities, Pete took to maintaining gdb.

"After Richard hired me for the summer (based on one code sample that I had emailed him, and no interview, and no resume, and neverhaving met me!) he asked me if there was anybody else I could recommend who might be interested. I thought about the skills of the other programmers I knew from St. Olaf and decided to try to recruit Pete. He was a bit of a hard sell, but eventually I convinced him to write some sample code and send it to Richard," recalls Mike.

Without intention, St. Olaf's had become a recruitment ground for the FSF, at least for that summer. With Mike and Pete on board, Mike's old flatmate from St. Olaf's, David Mackenzie, wasn't far behind.

While working for the Environmental Defense Fund in Washington, DC, David took to rewriting some of the standard tools then available in 4.3BSD, improving them and resolving some of their limitations. As the Environmental Defense Fund wasn't in the software business, they saw no value in the tools David was writing and allowed him to contribute them back to the GNU Project.

"I was tired of editing Makefiles several times a day," David Mackenzie told me, as an interview with him turned to his later work. Having worked for the FSF as a summer intern, David continued contributing code and eventually found himself as the maintainer of a small mountain of packages.

Working on it an hour here and there, as he could and as it was needed, David eventually grew autoconf out of the necessity of making the GNU fileutils compile on several different flavors of Unix, each with its own specific needs.

"I'm glad people are still keeping that software useful and up to date, and I'm glad I don't have to do it," he says.

Sunday, 21 August 2016

Libreboot 20160818 released

Marcus's Blog | 18:49, Sunday, 21 August 2016


A new version of Libreboot has been released. It brings support for new boards and a lot of improvements for nearly every supported device. All GRUB ROMs now come with a preloaded SeaBIOS in order to enable classic BIOS operations, like booting from a CD-ROM.

To update, first download the utilities archive and extract it. It contains the flash script which is later used to update your Libreboot installation (if you are already running Libreboot).

Now please check your ROM size:

sudo dmidecode | grep ROM\ Size

and download the relevant ROM archive for your machine.

The easiest way is to extract the archive and copy the ROM to the util folder. VESA ROMs come with a graphical boot splash; text ROMs are plain text.

Please note that the current release no longer contains a prebuilt version of flashrom, so you have to build it on your own.

You might need to install some build deps in advance, e.g. on Trisquel:

sudo apt-get install build-essential pciutils usbutils libpci-dev libusb-dev libftdi1 libftdi-dev zlib1g-dev

Afterwards you can run make in that folder and flashrom should be built.
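As a rough sketch of the build step (the directory name libreboot_util/flashrom is only an example here; use whatever path the utilities archive actually extracted to):

cd libreboot_util/flashrom
make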

In order to flash, simply run the flash command in the util root folder like:

sudo ./flash update yourrom.rom

where you have to replace yourrom.rom with the name of the ROM you want to flash.

If you see “Verifying flash… VERIFIED.” you can safely shut down your device and start it again, with a fresh version of Libreboot.

Friday, 19 August 2016

Conferences I will attend in the next two months

Hook’s Humble Homepage | 20:00, Friday, 19 August 2016

It feels like I just came back from Berlin1 and I have to go back again. With so much happening in the next two months, it seems fitting to join others and write a blog report of my future whereabouts.

I am going to Akademy 2016!

1-9 September I will be in Berlin for the QtCon/FSFE Summit/Akademy/…, where together with Polina Malaja I am giving a short presentation on "FSFE Legal Team & Legal Network – Yesterday, Today and Tomorrow" as well as moderating a panel "Legal entities for Free Software projects" at the FSFE Summit. During Akademy I am also hosting a BoF session on the topic of FOSS legal & licensing Q&A and, if it is already possible, I hope to gather some feedback from the community on the current draft of the FLA 2.0.

My Berlin trip will be interrupted 4-6 September, when I have to fly (via Manchester) to Hebden Bridge for Wuthering Bytes, where – or more specifically at Open for Business – I am presenting on the topic of "Effective FOSS adoption, compliance and governance – is it really that hard?"2. After that I will come back to Berlin for QtCon. It is a huge pity that the two conferences clash – I would love to stay the whole length of both!

As every year so far, 13-15 September I am going to Vienna together with Domen Savič ("the hacktivist") and Lenart Kučić ("the investigative journalist") to represent Slovenia as "the ICT lawyer" at the Regional Consultation on Internet Freedom – Gaining a Digital Edge3. This conference, run by the OSCE and the SHARE Foundation, has quite a unique set-up, mixing relevant lawyers, journalists and activists from the wider region.

Then after a short break I return to Berlin, at least 4-7 October for LinuxCon Europe, where Catharina Maracke and I will hold a BoF session on the topic of "CA, CLA, CAA, DCO, FLAOMG!", during which we want to clarify (m)any misconceptions regarding different contributor agreements. Again, if there is interest, we would be delighted to gather feedback on the draft FLA 2.0.

Update: added link to FSFE Summit panel & fixed date of LinuxCon.

hook out → see you in Berlin and Vienna! … egad, this will be quite a busy month!


  1. There was a meeting of The Center for the Cultivation of Technology that I attended. More on this later. 

  2. Hint: No, it is not ☺ 

  3. There is no website for it, but I found some small video summaries online from the 2013, 2014 and 2015 editions. 

Foreman's Ansible integration

Colors of Noise - Entries tagged planetfsfe | 09:16, Friday, 19 August 2016

Judging from some recent discussions, it seems not that well known that Foreman (a lifecycle tool for your virtual machines) integrates well not only with Puppet but also with ansible. This is a list of tools I find useful in this regard:

  • The ansible-module-foreman ansible module allows you to set up all kinds of resources like images, compute resources, hostgroups, subnets and domains within Foreman itself via ansible, using Foreman's REST API. E.g. creating a hostgroup looks like:

    - foreman_hostgroup:
        name: AHostGroup
        architecture: x86_64
        domain: a.domain.example.com
        foreman_host: "{{ foreman_host }}"
        foreman_user: "{{ foreman_user }}"
        foreman_pass: "{{ foreman_pw }}"
    
  • The foreman_ansible plugin for Foreman allows you to collect reports and facts from ansible-provisioned hosts. This requires an additional hook in your ansible config like:

    [defaults]
    callback_plugins = path/to/foreman_ansible/extras/
    

    The hook will report back to Foreman after a playbook has finished.

  • There are several options for creating hosts in Foreman via the ansible API. I'm currently using the ansible_foreman_module tailored for image-based installs. In a playbook this looks like:

    - name: Build 10 hosts
      foremanhost:
        name: "{{ item }}"
        hostgroup: "a/host/group"
        compute_resource: "hopefully_not_esx"
        subnet: "webservernet"
        environment: "{{ env|default(omit) }}"
        ipv4addr: "{{ from_ipam|default(omit) }}"
        # Additional params to tag on the host
        params:
            app: varnish
            tier: web
            color: green
        api_user: "{{ foreman_user }}"
        api_password: "{{ foreman_pw }}"
        api_url: "{{ foreman_url }}"
      with_sequence:  start=1 end=10 format="newhost%02d"
    
  • The foreman_ansible_inventory is a dynamic inventory script for ansible that fetches all your hosts and groups via the Foreman REST APIs. It automatically groups hosts in ansible from Foreman's hostgroups, environments, organizations and locations and allows you to build additional groups based on any available host parameter (and combinations thereof). So using the above example and this configuration:

    [ansible]
    group_patterns = ["{app}-{tier}",
                      "{color}"]
    

    it would build the additional ansible groups varnish-web and green and put the above hosts into them (see the command sketch after this list). This way you can easily select the hosts for e.g. blue-green deployments. You don't have to pass the parameters during host creation; if you have parameters on e.g. domains or hostgroups, these are available for grouping via group_patterns too.

  • If you're grouping your hosts via the above inventory script and you use lots of parameters, then having these displayed on the host's detail page can be useful. You can use the foreman_params_tab plugin for that.
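As a usage illustration (not from the original post), assume the dynamic inventory script is saved as foreman_inventory.py and configured as above; the group names follow the group_patterns example, and site.yml is just a placeholder playbook name:

    # list all hosts and groups the script fetches from Foreman
    ./foreman_inventory.py --list
    # ping only the varnish/web hosts from the example above
    ansible -i foreman_inventory.py varnish-web -m ping
    # limit a playbook run to the "green" hosts, e.g. for a blue-green deployment
    ansible-playbook -i foreman_inventory.py -l green site.yml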

There's also support for triggering ansible runs from within Foreman itself but I've not used that so far.

Thursday, 18 August 2016

EOMA68: The Campaign (and some remarks about recurring criticisms)

Paul Boddie's Free Software-related blog » English | 14:55, Thursday, 18 August 2016

I have previously written about the EOMA68 initiative and its objective of making small, modular computing cards that conform to a well-defined standard which can be plugged into certain kinds of device – a laptop or desktop computer, or maybe even a tablet or smartphone – providing a way of supplying such devices with the computing power they all need. This would also offer a convenient way of taking your computing environment with you, using it in the kind of device that makes most sense at the time you need to use it, since the computer card is the actual computer and all you are doing is putting it in a different box: switch off, unplug the card, plug it into something else, switch that on, and your “computer” has effectively taken on a different form.

(This “take your desktop with you” by actually taking your computer with you is fundamentally different to various dubious “cloud synchronisation” services that would claim to offer something similar: “now you can synchronise your tablet with your PC!”, or whatever. Such services tend to operate rather imperfectly – storing your files on some remote site – and, of course, exposing you to surveillance and convenience issues.)

Well, a crowd-funding campaign has since been launched to fund a number of EOMA68-related products, with an opportunity for those interested to acquire the first round of computer cards and compatible devices, those devices being a “micro-desktop” that offers a simple “mini PC” solution, together with a somewhat radically designed and produced laptop (or netbook, perhaps) that emphasises accessible construction methods (home 3D printing) and alternative material usage (“eco-friendly plywood”). In the interests of transparency, I will admit that I have pledged for a card and the micro-desktop, albeit via my brother for various personal reasons that also delayed me from actually writing about this here before now.


An EOMA68 computer card in a wallet (courtesy Rhombus Tech/Crowd Supply)

Of course, EOMA68 is about more than just conveniently taking your computer with you because it is now small enough to fit in a wallet. Even if you do not intend to regularly move your computer card from device to device, it emphasises various sustainability issues such as power consumption (deliberately kept low), long-term support and matters of freedom (the selection of CPUs that completely support Free Software and do not introduce surveillance backdoors), and device longevity (that when the user wants to upgrade, they may easily use the card in something else that might benefit from it).

This is not modularity to prove some irrelevant hypothesis. It is modularity that delivers concrete benefits to users (that they aren’t forced to keep replacing products engineered for obsolescence), to designers and manufacturers (that they can rely on the standard to provide computing functionality and just focus on their own speciality to differentiate their product in more interesting ways), and to society and the environment (by reducing needless consumption and waste caused by the upgrade treadmill promoted by the technology industries over the last few decades).

One might think that such benefits might be received with enthusiasm. Sadly, it says a lot about today’s “needy consumer” culture that instead of welcoming another choice, some would rather spend their time criticising it, often to the point that one might wonder about their motivations for doing so. Below, I present some common criticisms and some of my own remarks.

(If you don’t want to read about “first world” objections – typically about “new” and “fast” – and are already satisfied by the decisions made regarding more understandable concerns – typically involving corporate behaviour and licensing – just skip to the last section.)

“The A20 is so old and slow! What’s the point?”

The Allwinner A20 has been around for a while. Indeed, its predecessor – the A10 – was the basis of initial iterations of the computer card several years ago. Now, the amount of engineering needed to upgrade the prototypes that were previously made to use the A10 instead of the A20 is minimal, at least in comparison to adopting another CPU (that would probably require a redesign of the circuit board for the card). And hardware prototyping is expensive, especially when unnecessary design changes have to be made, when they don’t always work out as expected, and when extra rounds of prototypes are then required to get the job done. For an initiative with a limited budget, the A20 makes a lot of sense because it means changing as little as possible, benefiting from the functionality upgrade and keeping the risks low.

Obviously, there are faster processors available now, but as the processor selection criteria illustrate, if you cannot support them properly with Free Software and must potentially rely on binary blobs which may violate the GPL, it would be better to stick to a more sustainable choice (because that is what adherence to Free Software is largely about) even if that means accepting reduced performance. In any case, at some point, other cards with different processors will come along and offer faster performance. Alternatively, someone will make a dual-slot product that takes two cards (or even a multi-slot product that provides a kind of mini-cluster), and then with software that is hopefully better-equipped for concurrency, there will be alternative ways of improving performance besides finding faster processors and hoping that they meet all the practical and ethical criteria.

“The RasPi 3…”

Lots of people love the Raspberry Pi, it would seem. The original models delivered a cheap, adequate desktop computer for a sum that was competitive even with some microcontroller-based single-board computers that are aimed at electronics projects and not desktop computing, although people probably overlook rivals like the BeagleBoard and variants that would probably have occupied a similar price point even if the Raspberry Pi had never existed. Indeed, the BeagleBone Black resides in the same pricing territory now, as do many other products. It is interesting that both product families are backed by certain semiconductor manufacturers, and the Raspberry Pi appears to benefit from privileged access to Broadcom products and employees that is denied to others seeking to make solutions using the same SoC (system on a chip).

Now, the first Raspberry Pi models were not entirely at the performance level of contemporary desktop solutions, especially by having only 256MB or 512MB RAM, meaning that any desktop experience had to be optimised for the device. Furthermore, they employed an ARM architecture variant that was not fully supported by mainstream GNU/Linux distributions, in particular the one favoured by the initiative: Debian. So a variant of Debian has been concocted to support the devices – Raspbian – and despite the Raspberry Pi 2 being the first device in the series to employ an architecture variant that is fully supported by Debian, Raspbian is still recommended for it and its successor.

Anyway, the Raspberry Pi 3 having 1GB RAM and being several times faster than the earliest models might be more competitive with today’s desktop solutions, at least for modestly-priced products, and perhaps it is faster than products using the A20. But just like the fascination with MHz and GHz until Intel found that it couldn’t rely on routinely turning up the clock speed on its CPUs, or everybody emphasising the number of megapixels their digital camera had until they discovered image noise, such number games ignore other factors: the closed source hardware of the Raspberry Pi boards, the opaque architecture of the Broadcom SoCs with a closed source operating system running on the GPU (graphics processing unit) that has control over the ARM CPU running the user’s programs, the impracticality of repurposing the device for things like laptops (despite people attempting to repurpose it for such things, anyway), and the organisation behind the device seemingly being happy to promote a variety of unethical proprietary software from a variety of unethical vendors who clearly want a piece of the action.

And finally, with all the fuss about how much faster the opaque Broadcom product is than the A20, the Raspberry Pi 3 has half the RAM of the EOMA68-A20 computer card. For certain applications, more RAM is going to be much more helpful than more cores or “64-bit!”, which makes us wonder why the Raspberry Pi 3 doesn’t support 4GB RAM or more. (Indeed, the current trend of 64-bit ARM products offering memory quantities addressable by 32-bit CPUs seems to have missed the motivation for x86 finally going 64-bit back in the early 21st century, which was largely about efficiently supporting the increasingly necessary amounts of RAM required for certain computing tasks, with Intel’s name for x86-64 actually being at one time “Extended Memory 64 Technology“. Even the DEC Alpha, back in the 1990s, which could be regarded as heralding the 64-bit age in mainstream computing, and which arguably relied on the increased performance provided by a 64-bit architecture for its success, still supported 64-bit quantities of memory in delivered products when memory was obviously a lot more expensive than it is now.)

“But the RasPi Zero!”

Sure, who can argue with a $5 (or £4, or whatever) computer with 512MB RAM and a 1GHz CPU that might even be a usable size and shape for some level of repurposing for the kinds of things that EOMA68 aims at: putting a general purpose computer into a wide range of devices? Except that the Raspberry Pi Zero has had persistent availability issues, even ignoring the free give-away with a magazine that had people scuffling in newsagents to buy up all the available copies so they could resell them online at several times the retail price. And it could be perceived as yet another inventory-dumping exercise by Broadcom, given that it uses the same SoC as the original Raspberry Pi.

Arguably, the Raspberry Pi Zero is a more ambiguous follow-on from the Raspberry Pi Compute Module that obviously was (and maybe still is) intended for building into other products. Some people may wonder why the Compute Module wasn’t the same success as the earlier products in the Raspberry Pi line-up. Maybe its lack of success was because any organisation thinking of putting the Compute Module (or, these days, the Pi Zero) in a product to sell to other people is relying on a single vendor. And with that vendor itself relying on a single vendor with whom it currently has a special relationship, a chain of single vendor reliance is formed.

Any organisation wanting to build one of these boards into their product now has to have rather a lot of confidence that the chain will never weaken or break and that at no point will either of those vendors decide that they would rather like to compete in that particular market themselves and exploit their obvious dominance in doing so. And they have to be sure that the Raspberry Pi Foundation doesn’t suddenly decide to get out of the hardware business altogether and pursue those educational objectives that they once emphasised so much instead, or that the Foundation and its manufacturing partners don’t decide for some reason to cease doing business, perhaps selectively, with people building products around their boards.

“Allwinner are GPL violators and will never get my money!”

Sadly, Allwinner have repeatedly delivered GPL-licensed software without providing the corresponding source code, and this practice may even persist to this day. One response to this has referred to the internal politics and organisation of Allwinner and that some factions try to do the right thing while others act in an unenlightened, licence-violating fashion.

Let it be known that I am no fan of the argument that there are lots of departments in companies and that just because some do some bad things doesn’t mean that you should punish the whole company. To this day, Sony does not get my business because of the unsatisfactorily-resolved rootkit scandal and I am hardly alone in taking this position. (It gets brought up regularly on a photography site I tend to visit where tensions often run high between Sony fanatics and those who use cameras from other manufacturers, but to be fair, Sony also has other ways of irritating its customers.) And while people like to claim that Microsoft has changed and is nice to Free Software, even to the point where people refusing to accept this assertion get criticised, it is pretty difficult to accept claims of change and improvement when the company pulls in significant sums from shaking down device manufacturers using dubious patent claims on Android and Linux: systems it contributed nothing to. And no, nobody will have been reading any patents to figure out how to implement parts of Android or Linux, let alone any belonging to some company or other that Microsoft may have “vacuumed up” in an acquisition spree.

So, should the argument be discarded here as well? Even though I am not too happy about Allwinner's behaviour, there is the consideration that, as the saying goes, "beggars cannot be choosers". When very few CPUs exist that meet the criteria desirable for the initiative, some kind of nasty compromise may have to be made. Personally, I would have preferred to have had the option of the Ingenic jz4775 card that was close to being offered in the campaign, although I have seen signs of Ingenic doing binary-only code drops on certain code-sharing sites, so they do not necessarily have clean hands, either. However, they are actually making the source code for such binaries available elsewhere, if you know where to look. Thus it is most likely that they do not really understand the precise obligations of the software licences concerned, as opposed to deliberately withholding the source code.

But it may well be that unlike certain European, American and Japanese companies for whom the familiar regime of corporate accountability allows us to judge a company on any wrongdoing, because any executives unaware of such wrongdoing have been negligent or ineffective at building the proper processes of supervision and thus permit an unethical corporate culture, and any executives aware of such wrongdoing have arguably cultivated an unethical corporate culture themselves, it could be the case that Chinese companies do not necessarily operate (or are regulated) on similar principles. That does not excuse unethical behaviour, but it might at least entertain the idea that by supporting an ethical faction within a company, the unethical factions may be weakened or even eliminated. If that really is how the game is played, of course, and is not just an excuse for finger-pointing where nobody is held to account for anything.

But companies elsewhere should certainly not be looking for a weakening of their accountability structures so as to maintain a similarly convenient situation of corporate hypocrisy: if Sony BMG does something unethical, Sony Imaging should take the bad with the good when they share and exploit the Sony brand; people cannot have things both ways. And Chinese companies should comply with international governance principles, if only to reassure their investors that nasty surprises (and liabilities) do not lie in wait because parts of such businesses were poorly supervised and not held accountable for any unethical activities taking place.

It is up to everyone to make their own decision about this. The policy of the campaign is that the A20 can be supported by Free Software without needing any proprietary software, does not rely on any Allwinner-engineered, licence-violating software (which might be perceived as a good thing), and is merely the first step into a wider endeavour that could be conveniently undertaken with the limited resources available at the time. Later computer cards may ignore Allwinner entirely, especially if the company does not clean up its act, but such cards may never get made if the campaign fails and the wider endeavour never even begins in earnest.

(And I sincerely hope that those who are apparently so outraged by GPL violations actually support organisations seeking to educate and correct companies who commit such violations.)

“You could buy a top-end laptop for that price!”

Sure you could. But this isn’t about a crowd-funding campaign trying to magically compete with an optimised production process that turns out millions of units every year backed by a multi-billion-dollar corporation. It is about highlighting the possibilities of more scalable (down to the economically-viable manufacture of a single unit), more sustainable device design and construction. And by the way, that laptop you were talking about won’t be upgradeable, so when you tire of its performance or if the battery loses its capacity, I suppose you will be disposing of it (hopefully responsibly) and presumably buying something similarly new and shiny by today’s measures.

Meanwhile, with EOMA68, the computing part of the supposedly overpriced laptop will be upgradeable, and with sensible device design the battery (and maybe other things) will be replaceable, too. Over time, EOMA68 solutions should be competitive on price, anyway, because larger numbers of them will be produced, but unlike traditional products, the increased usable lifespans of EOMA68 solutions will also offer longer-term savings to their purchasers, too.

“You could just buy a used laptop instead!”

Sure you could. At some point you will need to be buying a very old laptop just to have a CPU without a surveillance engine and offering some level of upgrade potential, although the specification might be disappointing to you. Even worse, things don’t last forever, particularly batteries and certain kinds of electronic components. Replacing those things may well be a challenge, and although it is worthwhile to make sure things get reused rather than immediately discarded, you can’t rely on picking up a particular product in the second-hand market forever. And relying on sourcing second-hand items is very much for limited edition products, whereas the EOMA68 initiative is meant to be concerned with reliably producing widely-available products.

“Why pay more for ideological purity?”

Firstly, words like “ideology”, “religion”, “church”, and so on, might be useful terms for trolls to poison and polarise any discussion, but does anyone not see that expecting suspiciously cheap, increasingly capable products to be delivered in an almost conveyor belt fashion is itself subscribing to an ideology? One that mandates that resources should be procured at the minimum cost and processed and assembled at the minimum cost, preferably without knowing too much about the human rights abuses at each step. Where everybody involved is threatened that at any time their role may be taken over by someone offering the same thing for less. And where a culture of exploitation towards those doing the work grows, perpetuating increasing wealth inequality because those offering the services in question will just lean harder on the workers to meet their cost target (while they skim off “their share” for having facilitated the deal). Meanwhile, no-one buying the product wants to know “how the sausage is made”. That sounds like an ideology to me: one of neoliberalism combined with feigned ignorance of the damage it does.

Anyway, people pay for more sustainable, more ethical products all the time. While the wilfully ignorant may jeer that they could just buy what they regard as the same thing for less (usually being unaware of factors like quality, never mind how these things get made), more sensible people see that the extra they pay provides the basis for a fairer, better society and higher-quality goods.

“There’s no point to such modularity!”

People argue this alongside the assertion that systems are easy to upgrade and that they can independently upgrade the RAM and CPU in their desktop tower system or whatever, although they usually start off by talking about laptops, but clearly not the kind of "welded shut" laptops that they or maybe others would apparently prefer to buy (see above). But systems are getting harder to upgrade, particularly portable systems like laptops, tablets, smartphones (with the Fairphone 2 being a rare exception in that it might be upgradeable), and even upgradeable systems are not typically upgraded by most end-users: they may only manage to do so by enlisting the help of more knowledgeable relatives and friends.

I use a 32-bit system that is over 11 years old. It could have more RAM, and I could do the job of upgrading it, but guess how much I would be upgrading it to: 2GB, which is as much as is supported by the two prototyped 32-bit architecture EOMA68 computer card designs (A20 and jz4775). Only certain 32-bit systems actually support more RAM, mostly because it requires the use of relatively exotic architectural features that a lot of software doesn’t support. As for the CPU, there is no sensible upgrade path even if I were sure that I could remove the CPU without causing damage to it or the board. Now, 64-bit systems might offer more options, and in upgradeable desktop systems more RAM might be added, but it still relies on what the chipset was designed to support. Some chipsets may limit upgrades based on either manufacturer pessimism (no-one will be able to afford larger amounts in the near future) or manufacturer cynicism (no-one will upgrade to our next product if they can keep adding more RAM).

EOMA68 makes a trade-off in order to support the upgrading of devices in a way that should be accessible to people who are not experts: no-one should be dealing with circuit boards and memory modules. People who think hardware engineering has nothing to do with compromises should get out of their armchair, join one of the big corporations already doing hardware, and show them how it is done, because I am sure those companies would appreciate such market-dominating insight.


An EOMA68 computer card with the micro-desktop device (courtesy Rhombus Tech/Crowd Supply)

Back to the Campaign

But really, the criticisms are not the things to focus on here. Maybe EOMA68 was interesting to you and then you read one of these criticisms somewhere and started to wonder about whether it is a good idea to support the initiative after all. Now, at least you have another perspective on them, albeit from someone who actually believes that EOMA68 provides an interesting and credible way forward for sustainable technology products.

Certainly, this campaign is not for everyone. Above all else it is crowd-funding: you are pledging for rewards, not buying things, even though the aim is to actually manufacture and ship real products to those who have pledged for them. Some crowd-funding exercises never deliver anything because they underestimate the difficulties of doing so, leaving a horde of angry backers with nothing to show for their money. I cannot make any guarantees here, but given that prototypes have been made over the last few years, that videos have been produced with a charming informality that would surely leave no-one seriously believing that “the whole thing was just rendered” (which tends to happen a lot with other campaigns), and given the initiative founder’s stubbornness not to give up, I have a lot of confidence in him to make good on his plans.

(A lot of campaigns underestimate the logistics and, having managed to deliver a complicated technological product, fail to manage the apparently simple matter of “postage”, infuriating their backers by being unable to get packages sent to all the different countries involved. My impression is that logistics expertise is what Crowd Supply brings to the table, and it really surprises me that established freight and logistics companies aren’t dipping their toes in the crowd-funding market themselves, either by running their own services or taking ownership stakes and integrating their services into such businesses.)

Personally, I think that $65 for a computer card that actually has more RAM than most single-board computers is actually a reasonable price, but I can understand that some of the other rewards seem a bit more expensive than one might have hoped. But these are effectively “limited edition” prices, and the aim of the exercise is not to merely make some things, get them for the clique of backers, and then never do anything like this ever again. Rather, the aim is to demonstrate that such products can be delivered, develop a market for them where the quantities involved will be greater, and thus be able to increase the competitiveness of the pricing, iterating on this hopefully successful formula. People are backing a standard and a concept, with the benefit of actually getting some hardware in return.

Interestingly, one priority of the campaign has been to seek the FSF’s “Respects Your Freedom” (RYF) endorsement. There is already plenty of hardware that employs proprietary software at some level, leaving the user to merely wonder what some “binary blob” actually does. Here, with one of the software distributions for the computer card, all of the software used on the card and the policies of the GNU/Linux distribution concerned – a surprisingly awkward obstacle – will seek to meet the FSF’s criteria. Thus, the “Libre Tea” card will hopefully be one of the first general purpose computing solutions to actually be designed for RYF certification and to obtain it, too.

The campaign runs until August 26th and has over a thousand pledges. If nothing else, go and take a look at the details and the updates, with the latter providing lots of background including video evidence of how the software offerings have evolved over the course of the campaign. And even if it’s not for you, maybe people you know might appreciate hearing about it, even if only to follow the action and to see how crowd-funding campaigns are done.

No MariaDB MaxScale in Debian

Norbert Tretkowski | 06:00, Thursday, 18 August 2016

Last weekend I started working on a MariaDB MaxScale package for Debian, of course with the intention to upload it into the official Debian repository.

Today I was pointed to an article by Michael "Monty" Widenius that he published two days ago. It explains the recent license change of MaxScale from GPL to BSL with the release of the MaxScale 2.0 beta. Justin Swanhart summarized the situation, and I could not agree more.

Looks like we will not see MaxScale 2.0 in Debian any time soon...

Tuesday, 16 August 2016

Setting up a slightly more secure solution for home banking

Matthias Kirschner's Web log - fsfe | 12:04, Tuesday, 16 August 2016

Recently I helped some friends to set up a slightly more secure solution for home banking than running it in your default Web browser. In a nutshell, you set up a dedicated user under GNU/Linux. This user is then used solely to run a Web browser dedicated to home banking. Through SSH you can start that browser from your default user.

Money and remittance slips

First of all, open a terminal and run sudo adduser homebanking to add the new user. Afterwards just enter a password for it and confirm it.

Switch to the newly created user with su homebanking and type cd to go to that user's home directory.

Create a new directory for ssh with mkdir .ssh.

Then create the file .ssh/authorized_keys2 and paste into it the content of your own user's public SSH key (often .ssh/id_rsa.pub).
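A minimal sketch of these steps, assuming your key pair is the default ~/.ssh/id_rsa.pub; run the first command as your normal user (in a second terminal) and the rest as the homebanking user:

# as your normal user: print the public key so you can copy it
cat ~/.ssh/id_rsa.pub

# as the homebanking user: restrict the .ssh directory, create the file and paste the key
chmod 700 ~/.ssh
cat > ~/.ssh/authorized_keys2    # paste the key, then press Ctrl-D
chmod 600 ~/.ssh/authorized_keys2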

Switch back to your local user and create a small wrapper script with sudo vi /usr/local/bin/homebanking-browser with the following content:

#!/bin/bash
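# -f sends ssh to the background after authentication, -X enables X11 forwarding;
# replace "vita" with the hostname of the machine that holds the homebanking user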
ssh -fX homebanking@vita chromium

You have to make it executable with sudo chmod a+x /usr/local/bin/homebanking-browser.

The first time you run homebanking-browser, you should do it from a terminal, as you will be asked to confirm the SSH host key.

That is it. As some friends use GNOME, and I first had to figure out how to add the browser to the applications menu, here are those steps as well: go to the applications folder with cd /usr/share/applications/ and create a file with sudo vi homebanking-browser.desktop with the following content:

[Desktop Entry]
Name=Homebanking-Browser
GenericName=Browser for Homebanking
Comment=Use this browser to do your bank transfers
Exec=/usr/local/bin/homebanking-browser
Icon=terminal
Terminal=false
Type=Application
Categories=Office;
StartupNotify=true
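If you have the desktop-file-utils package installed, you can optionally check the new entry for syntax errors before logging out:

desktop-file-validate /usr/share/applications/homebanking-browser.desktop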

Once you have logged out of GNOME and back in, you should be able to run homebanking-browser from your application launcher.

If you know about better solutions which work under all GNU/Linux distributions, please let me know.

Saturday, 13 August 2016

Free Software PDF-Campaign: It isn’t over until it is over

André on Free Software » English | 20:02, Saturday, 13 August 2016

After the FSFE decided to officially end the PDF campaign, the situation in the Netherlands still called for action.

Having translated the Free Software PDF Readers story into Dutch, I recently stumbled upon a proprietary PDF ad on Digid.nl. This is a website of the Dutch government, and its log-in technology is used by a lot of websites in this country – both governmental and non-governmental ones, such as health insurance companies.

By e-mail I politely asked the authorities to withdraw the ad. Within two weeks I was phoned by a friendly civil servant who informed me that they had removed the ad.

Friday, 12 August 2016

Free Software in the Berlin election programs

Matthias Kirschner's Web log - fsfe | 00:04, Friday, 12 August 2016

On 18 September there will be local elections in Berlin. I skimmed through some party programs and made notes, which I shared with our Berlin group. Our group sent official questions to the parties and will publish an evaluation before the election. In this short article I will focus on the positive aspects in the programs for software freedom.

The Berlin Abgeordnetenhaus around 1900

Since 2009 the FSFE has been sending official questions to political parties before elections. Our teams, consisting of dedicated volunteers and staffers, sent out questions to parties and individual politicians, compared the answers with the party programs and other positions, and wrote evaluations. If this activity has passed you by, have a look at the FSFE's ask your candidates page.

The Berlin SPD does not mention Free Software/Open Source Software, but on the positive side they have a clear position on open educational resources (OER), and we hope they will consider the FSFE's position paper on OER once they implement it.

Mit dem Open Educational Resources (OER)-Projekt entwickeln wir freie Lehrmittel, die durch Lernende und Lehrende kostenfrei genutzt und verbreitet werden können. Ab dem Schuljahr 2017/18 werden wir den flächendeckenden Austausch von OER-Mitteln ermöglichen sowie den Anteil der verfügbaren OER-Lehrmittel weiter ausbauen.

In the election program of the Berlin CDU I did not find anything about Free Software. There was one point about end-to-end encryption, but considering the context of that statement, I am not sure if they really mean end-to-end – and it could be implemented with proprietary software. Unfortunately, in past German elections the CDU was not that keen to add anything about Free Software to its election programs.

The Green Party Berlin, which in most national elections was in favour of Free Software, also wants to push for Free Software in the public administration in Berlin. They write that the "foundation in future needs to be Free Software, which provides independence, security, and more flexibility".

Berlin braucht schnell eine IT-Strategie für die Verwaltung mit vorausschauender Planung und einem zentral koordinierten Controlling. Grundlage muss zukünftig Open-Source-Software sein – sie schafft Unabhängigkeit, Sicherheit und eine größere Flexibilität.

For electronic records (eAkte) the standard should be Free Software:

Auf Basis von einheitlichen Arbeitsprozessen führen wir die elektronische Aktenführung (eAkte) verbindlich ein. Dabei machen wir den Einsatz von offener und freier Software sowie ressourcenschonender Informationstechnik (Green IT) bei hoher IT-Sicherheit zum Standard.

And they also mention that the use of Free Software goes without saying.

E-Government, vernetzte Mobilität und digitale Steuerungstechniken sowie der Einsatz von freier Software werden immer mehr zur Selbstverständlichkeit.

Die Linke Berlin has a clear position on Free Software. They say that "the public administration should switch to Free Software".

Auch die technische Arbeitsplatzausstattung muss endlich den modernen Anforderungen gerecht werden. Das betrifft sowohl die Hardware als auch die Software. Die öffentliche Verwaltung soll auf Open Source Software umgestellt werden. Vor allem braucht es endlich die technischen Voraussetzungen, um den Service einer digitalen Verwaltung umsetzen zu können.

Furthermore, they encourage the use of OER and also highlight the need for Free Software in education.

In diesem Bereich darf das Feld nicht privaten Unternehmen, Verlagen und Bildungsanbietern überlassen werden. Wir setzen uns für die Nutzung und die Erstellung offener Lehr- und Lernmaterialien (Open Educational Ressources, OER) sowie den Einsatz von Open-Source Software ein.

Although the German Pirate Parties wrote a lot about Free Software in the past, this year the Berlin Pirate Party has just one point about planned obsolescence. This is connected with Digital Restriction Management, as software is often used for that purpose. They want to make sure that the state of Berlin and the consumer protection associations get enough resources to work better against planned obsolescence, that there is a label indicating how long the product and the software support for it will last, and that Berlin's administration procures only products with that label.

Hersteller werden angehalten, ihre Produkte mit einem voraussichtlichem “Haltbarkeitszeitraum” zu versehen. Dieses Haltbarkeitsdatum beinhaltet sowohl das physische als auch softwareseitige Leben eines Produktes. Auch müssen die Supportzeiträume (Softwareupdates etc.) auf dieser Kennzeichnung angegeben werden. [...] Wir setzen uns weiterhin dafür ein, dass die öffentliche Hand nur Produkte mit einer von der Verbraucherzentrale überprüften Haltbarkeit erwirbt.

A very strong demand for Free Software comes from the Berlin FDP, which is a real improvement for the party. They write that "Open Source Software offers many advantages for numerous services" and they demand "the use of Open Source Software". They add that "the use of proprietary software in the public administration should only be possible in well-founded individual cases".

Open Source Software bietet Vorteile bei zahlreichen Diensten. Wir fordern deshalb den Einsatz von Open-Source-Software. Der Einsatz proprietärer Software in der Verwaltung sollte nur in begründeten Einzelfällen stattfinden.

I have not yet managed to have a look at the election programs of the other parties. If you find something about Free Software there, or if I missed something in the ones mentioned above, please let me know.

There is still a lot of work leading up to the election. Our Berlin group will evaluate the answers to our questions. We will continue to add positive examples on our wiki page. And as with past elections, I hope that people in Berlin will ask their candidates questions about Free Software and share the answers with us.

If you do not have time to evaluate election programs, or ask your candidates questions, you can support such activities by becoming a sustaining member of the FSFE!


Update 2016-08-12 11:46: The Friedrichshain-Kreuzberg Pirate Party pointed out to me that they have a separate program, which contains several points about Free Software: the district should switch to Free Software for the operating system, application programs and the specialised applications used by the public administration, including the required support.

Der Bezirk soll Pilotbezirk für die Umstellung der kompletten Verwaltung auf quelloffene Software werden. Dazu gehören Betriebssystem, Anwendungsprogramme, Fachverfahren sowie der dafür erforderliche Support.

Employees of the district offices should be enabled, through the necessary software (GnuPG) and training, to communicate with the citizens of the district in encrypted form wherever possible.

Die Mitarbeiter*innen der Bezirksämter sollen durch Integration der notwendigen Software (GNUPG) und Schulung in die Lage versetzt werden, nach Möglichkeit verschlüsselt mit den Bewohnern des Bezirks zu kommunizieren.

Furthermore, the adult education centre (Volkshochschule) should offer courses on the encryption of files and computers, secure e-mail, Free Software, passwords and secure Web surfing.

Wir drängen darauf, dass die Volkshochschule als Bildungsort endlich Angebote für Fragen rund um Verschlüsselung von Dateien und Rechnern, sicheren Mailverkehr, freie Software, Passwörter und sicheres Surfen etabliert.

Thursday, 11 August 2016

The woes of WebVTT

free software - Bits of Freedom | 09:18, Thursday, 11 August 2016

Now this is a story all about how My life got flipped-turned upside down And I'd like to take a minute Just sit right there I'll tell you how I came to know the woes of Web-VTT.
The woes of WebVTT

It started with the FSFE's 15th birthday1, for which we produced an awesome video2. The FSFE has a truly amazing translators team (which you can and should join3) which quickly produced subtitles for the video in German, English, Greek, Spanish, French, Italian, Dutch, Turkish and Albanian.

Having a healthy distrust of too much client-side JavaScript on our web pages, we were delighted to get an opportunity to try WebVTT, a standard for displaying subtitles together with a video using the web browser's native support. The standard has been around since ~2011 and most browsers support it today.

As it turns out, it was not as easy as I thought to just add the WebVTT files to our HTML code. Here are the peculiarities I encountered along the way of making this work:

  1. WebVTT files look remarkably like SRT files (another subtitle format), and our translators first created SRT files which we then turned into VTT files. In practice, it looks as if you just need to add a "WEBVTT" header to the file and be done. But the devil is in the details: it turns out that the time format is slightly different in SRT and VTT. Where a time format of "00:00:05,817" works well in SRT, the equivalent in VTT should be "00:00:05.817". See the difference? It took me a while to see it (there is a small example after this list).

  2. We're serving the video and the VTT files from a different server than our normal web server. This wasn't a problem with the video, but it turns out that browsers behave differently when loading the VTT files, and the fact that they were on a different server triggered cross-origin requests, which by default are not allowed for security reasons. Updating our download server to allow cross-origin requests, and updating the HTML code to be explicit about the cross-origin request, solved that problem.

  3. Not only do you need to allow cross-origin requests, but the VTT files need to have exactly the right MIME type (text/vtt) when served from the web server. If they don't, the subtitles will be silently but ruthlessly ignored.

  4. As I mentioned, we have a lot of translations of the subtitles, but how do you actually get to see the different ones? It turns out most browsers just have an "ON" and "OFF" for subtitles. Here's what I learned:

    • Internet Explorer and Safari both present the user with a menu to select between the different subtitles. All good.
    • Chrome and Opera don't allow you to change the subtitle but instead make a best-effort match between the browser language and the available subtitles. Okay, but why not let the user change it too?
    • Firefox sucks. I'm sorry, Mozillians, but from everything I've read, it seems Firefox ignores the browser language when selecting a subtitle and instead picks the default one. It also does not allow you to change which subtitle is used, instead relying on the web page including some JavaScript and CSS to style its own menu.
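To make the first three points concrete, here is a minimal sketch; the file names and URLs are made up for illustration. A VTT file starts with the "WEBVTT" header, and each cue uses a dot before the milliseconds:

WEBVTT

00:00:01.000 --> 00:00:05.817
An example subtitle line

The video element then opts in to the cross-origin request, while the server hosting the files has to answer with a suitable Access-Control-Allow-Origin header and serve the .vtt files as text/vtt:

<video controls crossorigin="anonymous">
  <source src="https://download.example.org/video.mp4" type="video/mp4">
  <track kind="subtitles" srclang="en" label="English" default
         src="https://download.example.org/video.en.vtt">
</video>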

So much for standards :-)


Interview with Karl Berry (from 2002)

free software - Bits of Freedom | 08:11, Thursday, 11 August 2016

Interview with Karl Berry (from 2002)

Karl, you've been involved in one way or another with the GNU Project for quite some time now, working mostly with various aspects of TeX. Could you begin by telling us how you got involved in computers and what the first computer was that you wrote software for?

In 1977, my mom and I lived in California for the year so she could get a master's degree (in music) at Stanford. So I went to a middle school in Palo Alto, and they, being a fairly advanced school, had a programming class in BASIC and several yellow-paper teletypes in a back room. I never saw the actual computer -- it was at the high school, I think. The first program I specifically remember writing in that class was factorial computation (the FOR loop assignment), and it didn't take many iterations before the program halted with an overflow error :).

We moved back to our regular home in a tiny upstate New York town near the Canadian border the next year. As a faculty kid, I was lucky to be able to hang out in the local college's computer room, where they had a modem connection (using an acoustic coupler) to the Honeywell 66/40 computer at Dartmouth College, running a homegrown operating system originally named DTSS.

For the GNU Project, you've been working with TeX and fonts. Indeed, you currently maintain the texinfo and fontutils packages. But how did you first learn of the GNU Project, and what was it that led up to you being accepted into the FSF as an employee for some years?

I first learned about GNU in 1985 or so, via Paul Rubin (phr), who knew rms. I met rms shortly after that when he visited California and stayed overnight with my then-partner Kathryn Hargreaves and me.

We moved to Massachusetts to study with Bob Morris at the University of Massachusetts at Boston. We invited rms to give a talk at UMB, and generally stayed in touch. After we got our degrees a couple of years later, we asked rms if he would hire us -- and he did! (After looking at some sample programs.) We were psyched.

During your time with the FSF, you also helped get Autoconf to successfully configure TeX, which I'm sure was no small task, and you also did some work on Ghostscript. What's your strongest memory from working with the FSF?

Although those projects were fun and valuable, my strongest technical memory is actually working on regex.c. POSIX was standardizing regular expressions at the time, and we implemented about 10 different drafts as the committee came out with new ones, while keeping compatibility with Emacs and all the other programs that used it. It was a nightmare. We ended up with regex.c having as many lines of debugging statements as actual code, just so we could understand what it was doing.

I've since looked at a bunch of other regex packages and it seems basically impossible to implement the regular expressions we've grown used to in any reasonable way.


My strongest nontechnical memory is rms's vision of free software and how clearly he communicated it and how strongly he held (and holds) to it. It was and is an inspiration to me.

What was it that got you interested in TeX?

Typography and letterform design have been innately interesting to me for as long as I can remember. In the 1980's, TeX and Metafont were just hitting their stride, and Kathryn and I designed and typeset numerous books and other random items with them. Don Knuth's projects are always fascinating on many levels, and it was natural to get pulled in.

Another thing you've been working on is web2c, which I'm sure that most people have never heard of, let alone know anything about even if they've heard something or another about it. Could you venture into the depth of knowledge and enlighten us?

Web2c is the core of the Unix TeX distribution, which comprises the actual 'tex', 'mf', and other programs developed as part of Knuth's projects at Stanford. Knuth wrote TeX in "web", his system for so-called literate programming, in this case a combination of TeX and Pascal. The result can be (has been) printed as a book as well as compiled into a binary.

Web2c (named web-to-c at the time) converts these web sources into C. It was originally written by Tom Rokicki, based on the original change files for the Pascal TeX on Unix, which were written by Howard Trickey and Pavel Curtis. Web2c was later extended substantially by Tim Morgan. I maintained it for a number of years in the 1990's, and Olaf Weber is the current maintainer.

The GNU Project has taken a lot of heat for using info documentation instead of standard manpages or later, DocBook or some other system for documentation. When did the GNU Project start using texinfo and what was the motivation? Do you have any comments on the newer systems for maintaining documentation?

rms invented Texinfo in 1985 or so, based on a print-only predecessor called BoTeX, which had antecedents in Bolio (at MIT) and Scribe (at CMU). At that time, there was no comparable system (as far as I know) that supported printable and on-line manuals from the same source.

Of course man pages existed, but I don't think anyone claims that man pages are a substitute for a full manual. Even Kernighan, Ritchie, and Thompson wrote troff documents to supplement the man pages, for information that doesn't fit into the NAME/SYNOPSIS/DESCRIPTION format.

Man pages certainly have their place, and despite being officially secondary in the GNU project, almost all GNU distributions do include man pages. There is a GNU program called help2man which can create respectable man pages from --help output, thus alleviating the burden of maintaining a separate source.

As far as DocBook and other XML-based systems go, I have nothing against them, but I think that Texinfo source is friendlier to authors. XML gets awfully verbose, in my experience. I've also noticed that O'Reilly books never contain internal references to explicit page numbers, just whole chapters or sections; I don't know where the deficiency lies, but it amuses me.

It seems to me that the ad hoc nature of Texinfo offends the people who like to create standards. If what you want to do is write reasonable documentation, Texinfo will get the job done with a minimum of fuss.

On a related note, people have occasionally suggested that the Info format is outdated now and we should just use HTML. I still find Info useful because I can read documentation without leaving Emacs. It is also handy to have manuals in (essentially) plain text, which HTML is not.

When you left the FSF as an employee, where did you go and what have you been up to these last years? What do you work with today, and what do your future plans look like?

Aside from continuing to volunteer for the FSF, I worked as a programmer, system administrator, webmaster, release engineer, and various other odd computer jobs at Interleaf, Harvard, and now Intuit, due mostly to Paul English, a good friend I met at UMB. A significant part of all my jobs has been to install and maintain GNU and other free software, which has made me happy.

I expect to be able to leave my current job this fall and devote more time to volunteer work and my family.

What other hobbies, besides computing, do you have? I know you find Antarctica interesting. Would you mind sharing why? Any plans to try to visit some day?

Other hobbies - I read anything I can get my hands on (some favorite authors: Andrew Vachss, Barbara Kingsolver, Daniel Quinn, Stephen King, Terry Tempest Williams), and attempt to play piano (Bach and earlier, with some Pärt thrown in).

As for Antarctica, its untouched nature is what appealed to me most, although of course that quality has sadly diminished as human population continues to explode. I have no plans to visit there since tourism is very destructive to its fragile ecology (not to mention it is cold!).

And finally, I must ask you to convey one of your favourite recipes to us (and no, it can not be sour cream chocolate chip cake or chocolate chip cookies with molasses). :-)

Ok, how about some dinner to go before the desserts: Hungarian pork chops (with apologies to the vegetarians in the crowd). First we start with a little note on paprika courtesy of Craig Claiborne (author of the New York Times cookbooks):

It is ruefully true that American cooks by and large have only the most pallid conception of what PAPRIKA is. The innocuous powder which most merchants pass on to their customers as paprika has slightly more character than crayon or chalk.

Any paprika worthy of the name has an exquisite taste and varies in strength from decidedly hot to pleasantly mild but with a pronounced flavor.

The finest paprika is imported from Hungary and logically enough it is called Hungarian paprika or rose paprika. This is available packaged in the food shops of most first-rank department stores and fine food specialty shops. It is also available in bulk in Hungarian markets. [Not having any Hungarian markets in Salem, we get it from the bulk section of the organic grocery stores around here ... I don't know for a fact that it's from Hungary but it's definitely got more character than a Crayola :)]


Here's the recipe:

  • 6 pork chops
  • salt & pepper
  • 3 tbsp butter
  • 1/2 cup onion, chopped
  • 1 clove garlic, minced
  • pinch of thyme
  • 1 bay leaf
  • 3/4 cup chicken stock or dry white wine [we use a chicken bouillon cube, sorry craig]
  • 1 cup sour cream [best with "full fat"]
  • 1 tbsp paprika [or to taste, we usually use about 1/2 tbsp]

  1. Trim the fat from the chops [or not, we don't :)]. Sprinkle the meat with salt and pepper and saute in the butter in a skillet. [Takes about 15min on our stove, at medium heat; I usually get both sides just starting to brown. I chop up onion and stuff for step 2 while waiting.]
  2. Add the onion, garlic, thyme, and bay leaf and saute over medium-high heat until the chops are well browned on both sides. [15-20min, this is most of the cooking.]
  3. Lower the heat [quite a bit, but more than simmer; 2-3 on our stove] and add the chicken stock or wine [or bouillon cubed water in our case]. Cover and cook 30min. [I turn them over halfway through.]
  4. Remove the chops to a warm serving platter and keep warm. Reduce the pan liquid by half by boiling [or whatever seems appropriate, sometimes I don't need to boil anything away, sometimes I do]. Discard the bay leaf [or not].
  5. Add the sour cream and paprika to the skillet and heat thoroughly but do not boil [maybe 5min at medium heat]. Pour the sauce over the meat and serve hot.


We make rice on the side and use the sauce for both. Our best meal.

Wednesday, 10 August 2016

The FSFE's 15th Anniversary Video

Matthias Kirschner's Web log - fsfe | 11:48, Wednesday, 10 August 2016

As you might have read, the FSFE is celebrating its 15th anniversary this year at the FSFE Summit. I am already looking forward to meeting many of you in Berlin from 2-4 September for the celebration.

Video: https://www.youtube.com/embed/uK3eyOHcc7E

Besides the preparations for the summit, we worked together with BrandAnimate and today published a 40-second video about the FSFE (for people who prefer those services, or who want to embed the video on their own websites, we also uploaded it to Vimeo and YouTube to make sharing easier). We hope it will help you explain Free Software to your colleagues, friends, and family. Our suggested hashtag is #FSFE15.

At the time of writing we have subtitles for Albanian, Dutch, English, French, German, Greek, Italian, Spanish, and Turkish. If your language is not covered yet, please download one of the subtitle files and send a translation to our translation team.

Furthermore we updated the FSFE's timeline to give an overview of our achievements since 2001. Feedback about the timeline is also highly welcome.

Interview with Chet Ramey (from 2002)

free software - Bits of Freedom | 11:42, Wednesday, 10 August 2016

Interview with Chet Ramey (from 2002)

In 2002--2003 I did a series of interviews with people from the free software community, focusing on some of the members of our community who had been around since the very early days. One of the interviews I did was with Chet Ramey, one of the early contributors and maintainers of Bash. Here is that interview.

Two software projects you've been involved in over the years, and continue to maintain for the GNU Project, are bash and readline. How did you get involved in writing those?

Well, it started back in 1989.

Brian Fox had begun writing bash and readline (which was not, at that time, a separate library) the year before, when he was an employee of the FSF. The story, as I recall it, was that a volunteer had come forward and offered to write a Bourne shell clone. After some time, he had produced nothing, so RMS directed Brian to write a shell. Stallman said it should take only a couple of months.

At that time, in 1989, I was looking around for a better shell for the engineers in my department to use on a set of VAXstations and (believe it or not) some IBM RTs running the original version of AIX. Remember, back then, there was nothing else but tcsh. There were no free Bourne shell clones. Arnold Robbins had done some work, based on work by Doug Gwyn and Ron Natalie at the Ballistics Research Laboratory, to add things like editing and job control to various versions of the Bourne shell, but those changes obviously required a licensed copy to patch. Ken Almquist was in the midst of writing ash, but that had not seen the light of day either. Charles Forsyth's PD clone of the V7 shell, which became the basis of the PD ksh, was available, but little-known, and certainly didn't have the things I was looking for: command-line editing with emacs key bindings and job control.

I picked up a copy of bash from somewhere, I can't remember where, and started to hack on it. Paul Placeway, then at Ohio State (he was the tcsh maintainer for many years), had gotten a copy and was helping Brian with the readline redisplay code, and I got a copy with his changes included. (If you look at the tcsh source, the BSD `editline' library, and the readline source, you can tell they all share a common display engine ancestor.) I took that code, made job control work, fixed a bunch of bugs, and (after taking a deep breath) sent the changes off to Brian. He seemed impressed, incorporated the fixes into his copy, and we began corresponding.

When Brian first released bash to the world, I fixed all the bugs people reported and fed the fixes back to Brian. We started working together as more or less co-maintainers, since Brian was looking for someone to take over some of the work. I wrote a bunch of code and the first cut of the manual page.

After a while, Brian moved onto other things, and bash development stalled. I still needed to support bash for my department, and I liked working on it, so I produced several `CWRU' bash releases. Brian came back into the picture for a while, so we folded our versions together, resulting in bash-1.10. By the time bash-1.14 was released, Brian had eased out of the development, and I took over.

Bash is based on the POSIX Shell and Utilities Standard. But this standard was published in late 1992 after six years of work, several years after the first bash. How many ideas from bash went into the final POSIX standard, and how does the development of the standard continue today?

There were contributions from both directions while the POSIX standard (at least 1003.2) was being developed. RMS was a balloting member of the IEEE 1003.2 working group, and I believe that Brian was too, at least while he was employed by the FSF. I was on several of the mailing lists, and contributed ideas and reviews of proposals floated on the lists before being formalized.

Bash had test implementations of many of the newer features from POSIX.2 drafts, starting with about draft 9 of the original 1003.2, and I think that the feedback we provided, both as implementors and users, was useful to the balloting group. We didn't agree with all of the standard's decisions; that's why bash has a `posix mode' to this day.

The standards process continued with The Open Group and their Single Unix Specification. The IEEE chartered several working groups to modify the standards and produce new ones, and eventually in the fullness of time, the two groups combined their efforts. The result of that collaboration is the just-released SUS v3, which is also known as Posix-2002. Check out http://www.opengroup.org for more details and a web-accessible version of the standard.

When it comes to readline, it is developed together with bash, but has found a tremendous number of uses outside of bash. Like bash, it is licensed under the GPL. Have you ever been approached by a proprietary software developer wanting to have readline relicensed so that they could use it in their software?

Sure, all the time. It's not just commercial entities, either. Groups who release their code under a BSD or X11-style license have inquired as well.

The most common request is, of course, that the LGPL replace the GPL. I let the FSF handle those. Their point of view, as I understand it, is that for libraries without other compatible implementations, the GPL is the preferred license.

Of course, bash isn't the only Free Software shell available. The shell zsh, for example, prides itself in combining features from bash, ksh and tcsh. What do you think of those other shells?

zsh, pd-ksh, ash/BSD sh -- they're all fine. The projects have slightly different focuses, and in the true spirit of free/open-source software, scratch a slightly different itch.

There's been a fair bit of feature migration between the different shells, if little actual code. I'm on the zsh mailing lists, and the zsh maintainers, I'm sure, are on the bash lists. (After I answered a question about bash on a zsh list, there was a follow-up asking in a rather suspicious tone whether or not "we are being monitored".) I've also seen a statement to the effect that "the only reason zsh has [feature x] is that bash has it". The bash programmable completion code was inspired by zsh. I've certainly picked up ideas from pdksh also.

Since bash is so much used in scripting on various platforms, many people depend on its functionality remaining compatible between revisions. Has there ever been a time when you were tempted to introduce a new feature, but decided against it for the sake of compatibility?

Once or twice. People have been burned by new features in the past -- for instance, when the $"..." and $'...' quoting features were introduced. The previous behavior of those constructs was undefined, but Red Hat used them in some of their system scripts and relied on the old behavior. That's why the COMPAT file exists.

Usually, though, the question is whether or not old bugs should be fixed because people have come to depend on them. The change in the grammar that made {...;} work like POSIX says it should was one such change; changing the filename expansion code to correctly use locales was another. I generally fall on the side of correctness.

Other times I've hesitated to introduce new features because I was not sure that the approach was correct, and did not want to have to support the syntax or feature in perpetuity. Exporting array variables is one.

A final example is the DEBUG trap. Bash currently implements that with the ksh88 semantics, which say that it's run after each simple command. I've been kicking around the idea of moving to the ksh93 semantics, which cause the trap to be run before each simple command. Existing bash debuggers, including the rudimentary one in the distribution, rely on the current behavior.

My preferred method of changing bash is to make the new behavior the default, and have a configuration option for the old behavior.

Speaking of new features, how is the development going with bash today? Do you introduce new features regularly, or has bash reached a state of maturity?

Bash development is proceeding, and new features are still being introduced. Reaching a state of maturity and introducing new features are not mutually exclusive. The new features are both user-visible and internal reimplementations.

I picked up my release methodology from the old UCB CSRG, which did BSD Unix, and introduce new features with even-numbered releases and emphasize stability in odd-numbered releases.

Programmable completion and the arithmetic for statement, for instance, were introduced in bash-2.04. Since the release of bash-2.05a, I've added shell arithmetic in the largest integer size a machine supports, support for multibyte characters in bash and readline, and the ability to use arbitrary date formats in prompt expansions. There will be some more new stuff before the next release.

Two programs you've authored yourself, but that are not part of the GNU project, are CE (Chet's Editor) and the EMM mail manager. Could you tell us briefly what prompted you to write them and if you continue to develop them today?

Well, let's see.

ce is a descendant of David Conroy's original microemacs. I got interested in editors after I took a Software Engineering class my sophomore year in college in which we studied the `ed' clone from Kernighan and Plauger's Software Tools in Pascal. That interest was reinforced when I got into a discussion with a campus recruiter who was interviewing me my senior year about the best data structure to use to represent the text in an editor. It's got its own following, especially among Unix users at CWRU, but I've never publicized it. I still work on ce, and it's up to version 4.4. I have a version 4.5 ready for release, if I ever get around to doing the release engineering.

emm was my Master's project. It was driven by the desire to make a mail manager similar in features to the old MM-20 mail system, which I remember fondly from my DEC-20 days. I wanted to bring that kind of functionality to Unix, since at the time I started on it my department contained a number of old DEC-20 people moving to Unix. It's got a few users, though I don't think anyone's ever fetched it from ftp.cwru.edu. I still develop it, and find myself much more productive with it than any other mail program. There were days I'd get thousands of mail messages a day -- thankfully, in the past -- and it's easy to manage that volume of mail once you're used to emm.

When was your first interaction with computers and what was it about them that made you get involved in writing your own code for them?

Believe it or not, I really didn't use computers until I got to college. And when I got there, I really didn't have any intention of concentrating in computing. I met the right people, I guess, and they were influential. At that time, lots of guys in my fraternity were studying Computer Engineering, and I found it fascinating. Still do.

You currently live in Cleveland, Ohio where you work at Case Western Reserve University. This, incidentally, is also where you graduated with a B.S. in Computer Engineering and from where you received your Master's in Computer Science almost nine years ago. If I calculate this correctly, you've now worked about 13 years at CWRU. What's so great about working there that has made you stay all these years?

It's been longer than that. Counting my time as a student, I've been working for essentially the same department for 17 years.

I guess what I find attractive about working at CWRU has to do with the university environment: the informality, the facilities available at a university, and the academic community. The people I work with are great, too, and a bunch of them have been at CWRU for years and years.

One of your spare time hobbies is sports. Everything from sailing to golfing. No, wait, golfing is one of the sports you really dislike. Why's that?

It just has no appeal for me. I can't think of a sport I find more boring (well, there's curling, but nobody knows what that is). I'm sure people feel the same way about sports I like, too. Think about how you'd feel about watching an America's Cup yacht race on television.

Sports you do enjoy, though, include sailing, basketball and football. Did you ever get around to participating in them yourself?

I still play basketball once or twice a week. I used to play three or four times a week, back before I had back surgery, but haven't been able to find the time to play that much since.

I do like to sail, but I don't own a boat (a sailboat is a big hole in the water you pour money into). I had a chance to crew for someone last summer, but things never came together.

Between your two kids, work, bash, readline, five cats, a dog, fish and sports, do you ever find any spare time, or do you have some trick to share with us about how you manage to find time for everything you like?

No, I have no magic to share. I have the same struggles to find time to do everything as everybody else. I have it easy compared to my wife, though. I'm amazed that she packs as much into her days as she does.

Finally, where did you go for vacation last year and what are your plans for this year's vacation?

Last year? Let's see. I actually went on two vacations.

On the first, I helped chaperon a group of high school students on a trip to Europe. Every other year, my sister-in-law, a high-school Latin teacher in Pennsylvania, gets together with the German teacher at her school and takes a group of students to Italy and Germany. We've also been to Austria, Switzerland, France, and England on these trips. They're both early-to-bed, early-to-rise people, so I get to handle the late-night incidents, like the time a student broke her hotel room key off in the door, or when one plugged in a hair drier and blew the fuses for an entire hotel floor.

The second was a family vacation. My parents, brothers, sister-in-law, niece and nephew, and my wife and kids spent a week in a rented house in North Carolina. That was the first family vacation we'd had in a long time, and it was great.

This year? I'm not sure.

Liberate your Reviews

Marcus's Blog | 09:40, Wednesday, 10 August 2016

I guess all of you have at some point written a product review on popular platforms like Yelp, IMDB, TripAdvisor, Amazon.com, Goodreads or Quora. Have you ever asked yourself who owns it? Yes, it’s you, and now you can finally gain back control over it.

Just install the Free Your Stuff! Chrome extension and access all of your reviews on the supported platforms. You can decide to either download them or share them under a Free License.

lib.reviews

With lib.reviews this has been taken one step further. It is a free and open platform for reviewing absolutely anything, in any language.

If you want to get started, just stick with the Chrome extension. lib.reviews currently requires an invitation code, which can be requested e.g. via IRC in #libreviews on irc.freenode.net.

Monday, 08 August 2016

Exploring Alternative Operating Systems – GNU/Hurd

David Boddie - Updates (Full Articles) | 21:15, Monday, 08 August 2016

With the EOMA68 Crowd Supply campaign underway, and having pledged for a couple of micro-desktop systems, I'm inspired to look at operating systems again. I'm interested in systems that aren't based on the Linux kernel, but I'm not sure how far I want to roam away from the classic GNU user-space experience. I don't know if any systems I find will run on the A20 CPU card; in any case, I want to explore alternatives to Linux on the x86-based hardware I already have. The first operating system I'm going to try is GNU/Hurd.

Installing Debian GNU/Hurd

One relatively easy route to having a GNU/Hurd system is to install Debian GNU/Hurd – not on an ARM-based platform, but on one of my existing x86 hardware platforms. In preparation for that, I've been trying to get up and running with the images from Debian. I had a few false starts with these, being able to create disk images that would boot up in kvm but which appeared to have incorrectly configured keys for the apt repositories. In the end, I stepped back and tried to install something from the same site as the repositories, reasoning that it wouldn't make sense to have an installer that couldn't access its own repositories.

After a couple of attempts to do this, which were fine in kvm but produced images that wouldn't fit nicely onto the removable media I have, I finally settled on the following recipe for creating a working bootable image:

qemu-img create hurd-install.qemu 14.9G
wget http://ftp.ports.debian.org/debian-ports-cd/hurd-i386/current/debian-hurd-2015-i386-NETINST-1.iso
sha1sum debian-hurd-2015-i386-NETINST-1.iso
# Check the hash of this against those listed on this page.
kvm -m 1024M -drive file=hurd-install.qemu,cache=writeback -cdrom debian-hurd-2015-i386-NETINST-1.iso -boot d -net user -net nic

We first create a raw disk image which will fit in both the 16 GB Compact Flash and micro-SD cards I use for booting GNU/Hurd on my old laptop. We then fetch the bootable CD-ROM image used for installation and check it against one or more of the hashes published on the Debian ports site. (Ideally, we would get the hashes from somewhere with a https URL.) Finally, we boot up a virtual machine with the empty hard drive image and installer CD-ROM image, plenty of RAM, and network access.

In the virtual machine, I chose the textual installer. When partitioning the disk, I allocated about 6 GB for the system in a primary partition and about 1 GB for swap in a logical partition, leaving the rest for other operating systems. I only installed the suggested sets of packages, deferring the choice of desktop environment until later. When the installer finished, I let it reboot into the boot menu where I chose to boot from the hard drive so that I could check things and further configure the system.

Configuring the System

If everything goes to plan, the system should boot to a login prompt. I logged in as root then immediately set about reconfiguring the apt sources and keyrings, beginning with some changes to the sources list:

nano /etc/apt/sources.list

In this file, I changed the final two lines to refer to the packages on the same site as the installer:

deb http://ftp.ports.debian.org/debian-ports unreleased main
deb-src http://ftp.ports.debian.org/debian-ports unreleased main

Then, I updated the keyrings used to verify the packages:

wget https://www.ports.debian.org/archive_2016.key
dpkg -r debian-ports-archive-keyring
apt-key add archive_2016.key 
apt-get update

I found this to be necessary because the packages are signed with a key that is newer than the key in the supplied debian-ports-archive-keyring package. Although I think this should have worked with the disk images produced by other installers I found, I didn't manage to resolve the issue with the signing key, but at least it works with the image produced by this installer.

I configured one or two more things, like sshd, so that I can potentially run the laptop without a monitor. At the moment, I use a D-SUB cable to display the video output using the same monitor as my desktop machine but this means switching between the two outputs when switching systems, which is a bit tiresome.
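Since remote login is the one bit of configuration worth spelling out, here is a minimal sketch of the sshd part, assuming the standard Debian openssh-server package is available on hurd-i386:

# Run as root; assumes openssh-server exists in the hurd-i386 archive:
apt-get install openssh-server
# The Debian package ships the usual sysvinit script:
/etc/init.d/ssh start
# Optional check that sshd is listening (requires the net-tools package):
netstat -tln | grep :22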

Copying the Disk Image

As explained in my previous article, I needed to copy the disk image onto two separate items of removable media. Fortunately, in my case, this was done using the same command since each device was mounted in turn and appeared as /dev/sdb. If you do this, make sure that the media you are installing the disk image on is actually this device and change the following command accordingly.

# Make sure that /dev/sdb is the media you want to install on:
dd if=hurd-install.qemu of=/dev/sdb bs=1M
eject sdb

It's important to cleanly eject removable devices. If anything, it prevents the annoying situation where the disk needs to be checked at boot time, requiring manual intervention.

Booting Up

Finally, I plugged the CompactFlash card into its adapter and the USB micro-SD adapter into one of the USB ports on my laptop and turned it on. This generally works, booting to a login prompt as it does with kvm, but sometimes the laptop isn't happy about the USB device. It's especially unhappy if I try to boot with a USB keyboard attached while it boots from the USB adapter. Still, once booted, it means I have a real GNU/Hurd machine to play with at last!

Where will the FSFE be next?

free software - Bits of Freedom | 13:00, Monday, 08 August 2016

Where will the FSFE be next?

On the 19th of July, the FSFE had its first ever Event Coordination Meeting. As a volunteer organisation, the FSFE participates in more conferences than even I know about. Just last year, we were at FOSDEM (Belgium), T-Dose (Netherlands), FiFFkon (Germany), RMLL/LSM (France), Veganmania (Austria), Euskal Encounter (Spain), 32C3 (Germany), Linuxtage (Austria), Chaos Cologne Conference (Germany), Linuxwochen (Austria), FrOSCon (Germany), DebConf (Germany) and OpenTech Summit (Germany).

I've probably missed a few, and there have also been people from the FSFE participating at events, even if the FSFE did not exhibit there. While most of the events where we participate are organised by our volunteers, and most of them decide for themselves which events they want to participate in, we've been wanting to get the people who organise events for us in better contact both with each other, and with our staff who support them (with flyers, merchandise, and so on).

We'll have these event coordination meetings regularly, and we meet virtually in a Jabber chatroom, so that as many people as possible can participate if they want to. Even if you don't have an event that you help us participate in, you may know of some event where the FSFE should participate, and the meetings are a good way to make this known (aside from adding it directly to our wiki).

The meeting we had on the 19th was pretty uneventful: it was our first one, and most of the time was spent reviewing our current list of events and making sure everyone was up to speed. Still, the general concept of such meetings seems to make sense, and we want to keep them, making sure that for future meetings we very explicitly invite the volunteers who we know are organising events in the near future.

But you're waiting for the events, right? Okay, here's what we came up with (you can find everything on our events page on our wiki, or on our web pages -- we will eventually need to combine the two):

  • FrOScon (St. Augustin, Germany, the 20-21st of August): Max, Polina and hopefully some Bonn fellows will coordinate and organise a booth for the FSFE there.
  • FSFE Summit (Berlin, Germany, the 2nd-4th of September): Organised by the FSFE (Erik is in charge with help from our intern Cellini)
  • Rotlintstraßenfest (Frankfurt, 10th of September): Our local Fellowship group in the area will participate to inform about free software.
  • Kieler Linuxtage (Kiel, the 16th-17th of September): A booth organised by Dominic Hopf
  • Wear Fair (Linz, Austria, the 23rd-24th of September): Our amazing Austrian team is putting up a booth for us
  • LinuxCon Europe (Berlin, the 4th-6th of October): Olga and Polina are organising a booth there, thanks to a generous offer from the Linux Foundation to find us a place at the event.
  • OpenRheinRuhr (Oberhausen, Germany, the 5th-6th of November): Max, Erik, Matthias and others helping to organise a booth.
  • Paris Open Source Summit (Paris, the 16-17th of November): Polina will hopefully be there to give a talk.
  • FOSDEM (Brussels, Belgium, the 4-5th of February 2017): Reinhard is organising our booth.

Aside from these, there are also Fellowship and volunteer meetings in Hamburg, Bamberg, Düsseldorf and Frankfurt. So if you're anywhere in the area of any of these events, drop by and say hi!

For our next coordination meeting, we'll try to cover larger parts of 2017 as well and discuss a bit more about how we can shape our Wiki information and content so we have the relevant information about each event and can make sure our staff help send out whatever merchandise and information material is needed well in time.

And if you think the FSFE should be attending an event you know about, just join the next meeting and let us know, or get in touch with me right away: jonas@fsfe.org.

Sunday, 07 August 2016

Another Laptop Project

David Boddie - Updates (Full Articles) | 13:21, Sunday, 07 August 2016

Inspired by the EOMA68 Crowd Supply campaign's ideas to reduce e-waste and shake up laptop design, I dug out my old Samsung V20 laptop to see if I could put it to work doing something useful.

Laptop Legacy

I bought the laptop at the end of 2002 when I was “between jobs,” as people like to say. I wasn't terribly happy with the quality of the sound output – it's not much fun listening to music with the headphones on when scrolling a window produces a faint high pitched whine – and neither the retailer (now defunct, unsurprisingly) nor Samsung's customer support company in Europe were very helpful or sympathetic. The laptop paid many visits to the support company over the years, where they failed to diagnose problems with it, accidentally left a test hard drive in it, and helpfully sent a new power supply to me when they presumably thought they'd mislaid mine. When I told them about the hard drive, the guy on the phone told me to stick an address label on it and post it back to them!

It's a peculiar beast. Produced in 2002, it has a 2 GHz Pentium 4 CPU, which some might argue was a bit of a strange choice for a laptop CPU, though I suppose it could be a Pentium 4-M. One unfortunate consequence of this is that you have a CPU that can run quite hot in an enclosed space. As a result, the designers decided to put in some impressive plumbing and two fans. One is clearly not enough!

From memory, the graphics chipset is an Intel 82845GV, which became supported under Linux after a while. One of the annoying things was that the internal modem was a so-called “winmodem”, which meant that there was initially no software to drive it under Linux. In the end, an individual who worked at the modem vendor released their own kernel module, which I seem to remember was something you built from source — how things have changed since then. Now, you might feel lucky to get the source code to a vendor's kernel module.

Rebooting the Laptop

The first thing I had to do was to unlock the BIOS. I had protected it with a password many years ago and forgotten what it was. Fortunately, I remembered finding a solution online and went looking for it again. The discussion I found may not have been the one I was looking for, but the suggestion about removing the battery connection to the CMOS RAM worked. Readers with long memories might find it interesting to see who it was that made the suggestion.

As with another old machine I have lying around, I decided to replace the hard drive with more modern removable media. Ideally, I would have used an IDE-SD card solution, but I didn't find any that would fit a laptop 2.5" hard drive bay from vendors I'd be happy to send money to. That might sound harsh, but one of the downsides of online commerce these days is the slew of “bargains” from price aggregators, auction sites and e-commerce marketplaces that fill search engine results. I'd like to be able to trace the original manufacturer of the part I'm using, if at all possible, and I'd like to have some confidence that I'm the first person who gets to use it in the wild.

I already have an IDE-SD adapter that I use in place of a desktop 3.5" hard drive, but it seems that only adapter cables for using 2.5" hard drives in 3.5" bays are being made. Disappointed, I ended up buying a 2.5" IDE-CompactFlash adapter and a fairly cheap CompactFlash card with a reasonable capacity. The adapter has a nice case that is about the same size as a 2.5" hard drive, though it doesn't fit as snugly as a real hard drive in the laptop's drive caddy, so it wouldn't fit in the case.

It was clearly time to open up the laptop again – a task which became increasingly necessary while I was using it as my main workstation, and one which I never looked forward to. This is where the design of the laptop in the EOMA68 campaign has the advantage over those of traditional laptops. Opening up my old laptop was the usual ordeal of prising apart snappy, sharp plastic pieces with screwdrivers and fingernails while trying to avoid having them pinch any nearby skin. Once open, I pulled the whole motherboard out so that I could access all the ports and connectors. I don't think I'll be putting it back into its case.

The CompactFlash card now sits in its adapter, attached to the IDE interface. Unfortunately, while it recognises that the card is there, the laptop's BIOS won't boot from it. This may be something to do with the way the card declares itself — apparently, they can be “removable” or “fixed”, and this one is removable. Rather unhelpfully, the vendors of CF cards have been less than transparent about which of their cards fall into each category, and SanDisk no longer provides a utility to reprogram removable cards to make them fixed ones. I doubt I'd have been able to run it on Debian, anyway.

While investigating this issue I found that I could perform the initial boot from a USB SD card adapter and switch to the CF card during initialisation. This is how I'll manage for the time being. If I pick up a suitable CF card in the future, I'll start using that. I may dedicate a low capacity SD card to the task of holding kernels I want to boot.

What Next?

As I mentioned, I'd like to have a choice of kernels to boot from. I used to run Linux on this laptop, but now I think it's time for it to experience something different. Once I've set up the first system I want to try out, I'll write something about my experiences. Hopefully, that won't take too long.


Updates

After looking in the laptop's manual, I found that the chipset used was the Intel 82845GV, not the 82845GL. These are listed in the relevant article on Wikipedia. For reference, the CPU used is one of those in Wikipedia's list of Pentium 4 processors.

Wednesday, 03 August 2016

ChaosKey

Bits from the Basement | 21:17, Wednesday, 03 August 2016

I'm pleased to announce that, at long last, the ChaosKey hardware random number generator described in talks at Debconf 14 in Portland and Debconf 16 in Cape Town is now available for purchase from Garbee and Garbee.

The Cat Model of Package Ownership

Elena ``of Valhalla'' | 18:02, Wednesday, 03 August 2016

The Cat Model of Package Ownership

Debian has been moving away from strong ownership of packages by package maintainers and towards encouraging group maintainership, for very good reasons: single maintainers have a bad bus factor and a number of other disadvantages.

When single maintainership is changed into maintainership by a small¹, open group of people who can easily communicate and sync with each other, everything is just better: there is an easy way to gradually replace people who want to leave, but there is also no duplication of efforts (because communication is easy), there are means to always have somebody available for emergency work and generally package quality can only gain from it.

Unfortunately, having such group of maintainers for every package would require more people than are available and willing to work on it, and while I think it's worth doing efforts to have big and important packages managed that way, it may not be so for the myriad of small ones that make up the long tail of a distribution.

Many of those packages may end up being maintained in a big team such as the language-based ones, which is probably better than remaining with a single maintainer, but can lead to some problems.

My experience with the old OpenEmbedded, back when it was still using monotone instead of git² and everybody was maintaining everything, however, leads me to think that this model has a big danger of turning into nobody maintains anything, because when something needs to be done everybody is thinking that somebody else will do it.

As a way to prevent that, I have been thinking in the general direction of a Cat Model of Package Ownership, which may or may not be a way to prevent some risks of both personal maintainership and big teams.

The basic idea is that the “my” in “my packages” is not the “my” in “my toys”, but the “my” in “my Cat, to whom I am a servant”.

As in the case of a cat, if my package needs a visit to the vet, it's my duty to do so. Other people may point me to the need of such a visit, e.g. by telling me that they have seen the cat leaving unhealthy stools, that there is a bug in the package, or even that upstream released a new version a week ago, did you notice?, but the actual putting the package in a cat carrier and bringing it to the vet falls on me.

Whether you're allowed to play with or pet the cat is her decision, not mine, and giving her food or making changes to the package is usually fine, but please ask first: a few cats have medical issues that require a special diet.

And like cats, sometimes the cat may decide that I'm not doing a good enough job of serving her, and move away to another maintainer; just remember that there is a difference between a lost cat who wants to go back to her old home and a cat that is looking for a new one. When in doubt, packages usually wear a collar with contact information; trying to ping those contacts is probably a good idea.

This is mostly a summer afternoon idea and will probably require some refinement, but I think that the basic idea can have some value. Comments are appreciated on the federated social networks where this post is being published, via email (valid addresses are on my website http://www.trueelena.org/computers/articles/the_cat_model_of_package_ownership.html and on my GPG key http://www.trueelena.org/about/gpg.html) or with a post on a blog that appears on planet debian http://planet.debian.org/.

¹ how small is small depends a lot on the size of the package, the amount of work it requires, how easy it is to parallelize it and how good are the people involved at communicating, so it would be quite hard to put a precise number here.

² I've moved away from it because the boards I was using could run plain Debian, but I've heard that after the move to git there have been a number of workflow changes (of which I've seen the start) and everything now works much better.

Thursday, 28 July 2016

kvm virtualization on a liberated X200, part 1

Elena ``of Valhalla'' | 16:08, Thursday, 28 July 2016

kvm virtualization on a liberated X200, part 1

As the libreboot website (https://libreboot.org/docs/hcl/x200.html) warns, there are issues with virtualization on the X200 without updated microcode.

Virtualization is something that I use, and I have a number of VMs on that laptop, managed with libvirt; since it has microcode version 1067a, I decided to try and see whether I was lucky and virtualization worked anyway.
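For reference, the microcode revision mentioned here can be checked from a running system with generic commands (nothing specific to the X200 or to libreboot):

# Microcode revision currently reported by the CPU:
grep -m1 microcode /proc/cpuinfo
# Kernel messages about microcode loading, if any:
dmesg | grep -i microcode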

The result is that the existing machines no longer start: the kernel loads, and then it crashes and reboots. I don't remember why, but I then tried to boot a Debian installer CD (ISO) I had around, and that one worked.

So, I decided to investigate a bit more: apparently a new installation done from that ISO (debian-8.3.0-amd64-i386-netinst.iso) boots and works with no problem, while my (older, I suspect) installations don't. I tried to boot one of the older VMs with that image in recovery mode, tried to chroot into the original root and got: failed to run command '/bin/bash': Exec format error.

Since that shell was lacking even the file command, I then tried to start a live image, and chose the lightweight debian-live-8.0.0-amd64-standard.iso: that one failed to start in the same way as the existing images.

Another try with debian-live-8.5.0-i386-lxde-desktop.iso confirmed that apparently Debian >= 8.3 works while Debian 8.0 doesn't (I don't have ISOs for versions 8.1 and 8.2 to properly bisect the issue).

I've skimmed the release notes for 8.3 (https://www.debian.org/News/2016/20160123) and noticed that there was an update to the intel-microcode package (https://packages.debian.org/jessie/intel-microcode), but AFAIK the installer doesn't include anything from non-free, and I'm sure that non-free wasn't enabled on the VMs.

My next attempt (thanks tosky on #debian-it for suggesting this obvious solution that I was missing :) ) was to run one of the VMs with plain qemu instead of kvm and bring it up to date: the upgrade was successful and included the packages in this screenshot, but on reboot it is still not working, just as before.

Image/photo: http://social.gl-como.it/photos/valhalla/image/ecb0e193b16fbd507d0148636177961b
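For the record, "plain qemu instead of kvm" just means pure software emulation rather than hardware virtualization; a rough sketch of what that looks like (the image path and domain name are made up for the example):

# Pure software emulation: slow, but it sidesteps the VMX/microcode problem.
qemu-system-x86_64 -m 1024 -drive file=/var/lib/libvirt/images/oldvm.img -net user -net nic
# The "kvm" command is essentially the same binary with -enable-kvm added.
# Under libvirt, the equivalent switch is changing <domain type='kvm'> to
# <domain type='qemu'> in the XML shown by: virsh edit oldvm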

Right now, I think I will just recreate the images I need from scratch, but when I have time I'd like to investigate the issue a bit more, so hopefully there will be a part 2 to this article.

Wednesday, 27 July 2016

Retroactively replacing git subtree with git submodule

emergency exit | 13:38, Wednesday, 27 July 2016

Combining multiple git repositories today is rather common, although the means of doing so are far from perfect. Usually people use git submodule or git subtree. If you have used neither or are happy with either method this post is completely irrelevant to you.

But maybe you decided to use git subtree and, like me, are rather unhappy with the choice. I will not discuss the upsides and downsides of either approach; I assume you want to change this, and you wish you could go back in time and make it right for all subsequent development. So this is what we are going to do :)

This was an exercise in git for me, so I am sure more experienced people know shortcuts, but since it took me quite some time to get right, I hope it is going to be helpful to others. It is, however, strongly recommended to read the shell scripts and possibly adapt them to your situation. I will not be held responsible for any problems!

It should be noted that by definition any changes in your git history will break current clones and forks of your repository so only do this if you are very sure what you are doing. And be nice and inform your users about this ASAP.

Pre conditions

  • you have a git subtree, e.g. of a library, inside your repo and you want to replace it with git submodule
  • going back to a commit in your history you should still get the same version of the library that you had previously included with git subtree
  • i.e. every subtree update should be replaced with a submodule update of the same contents and also timestamp (because this is timetravel, right?)
  • all tags should be preserved / rewritten to their corresponding new IDs
  • when updating your git subtree previously you used the “squash” feature (if you didn’t do this, it is going to be a lot harder)
  • you have a mostly straight branch history and are ok with losing merge commits; all branches other than master will be rebased on master

When I say “preserve timestamp” it is important to clarify what this means: git has two notions of “timestamp”, the “author date” and the “committer date”. The author date is the date when the commit was actually created; it is apparently entirely without relevance for git branches and their structure. The committer date, on the other hand, is the time when the commit was included in the branch. The two are often the same, but operations on a branch like cherry-picking or rebasing overwrite the committer date with the current date. Unfortunately git interfaces like GitHub don’t use either date consistently, so if you want a consistent appearance you need to also preserve the committer dates. Editing these, however, can have adverse effects, potentially breaking branches or tags, because the correct chronological order of committer dates is important. We will take extra precautions below, but if your branches somehow get mangled, or a commit appears out of place in gitg, then you should double-check the committer dates.
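If you want to keep an eye on both dates while working through the steps below, git log can print them side by side; these are standard format placeholders, nothing specific to this recipe:

# %ad = author date, %cd = committer date; after a rebase the two typically differ:
git log --date=iso --format='%h  author: %ad  committer: %cd  %s' -5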

Preparation

Make a local clone of the repository and after that remove all your “remotes” with git remote remove <name>. This ensures that you don’t accidentally push any changes…
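Spelled out, the preparation might look like this (the repository URL and local path are hypothetical):

git clone git@github.com:alice/myapp.git ~/devel/myapp   # hypothetical upstream
cd ~/devel/myapp
git remote remove origin     # repeat for every other remote you may have
git remote -v                # should now print nothing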

I assume that you have checked out the master branch and that you have exported the following environment variables:

CLONE="~/devel/myapp"                      # the directory of your clone
SUBDIR="include/mylib"                     # the subdir of the subtree/submodule relative to CLONE
SUBNAME="mylib"                            # name of the submodule (can be anything)
SUBREPO="git://github.com/mylib/mylib.git" # submodules' repo

ATTENTION: don’t screw up any of the above paths since we might be calling rm -rf on them.

If you are uncomfortable with doing search and replace operations on the command line, you can set your editor to something easy like kwrite:

export EDITOR=kwrite

Replacing the subtree and subtree updates

Since we are going to delete all the merge commits and also the commits that represent changes to the subtree, we need to remember at which places we later re-insert commits. To do this, run the following:

git log --format='%at:::%an:::%ae:::%s' --no-merges | awk -F ':::' '
(PRINT == 1) && !($4 ~ /^Squashed/) {
    PRINT=0
    printf    $1 ":::" $2    ":::" $3    ":::" $4   ":::"  # commit that we work on
    printf tTIME ":::" tNAME ":::" tMAIL ":::" tREF "\n"   # commit that we insert
};
(PRINT != 1) && ($4 ~ /^Squashed/) {
    PRINT=1
    tTIME=$1
    tNAME=$2
    tMAIL=$3
    tREF=substr($0, length($0) - 6, length($0)) # cut commit id from subj
}; ' > /tmp/refinserts

What it does is find, for every commit whose subject starts with “Squashed”, the subsequent commit’s subject, and associate with it the time of the subtree update and the subtree’s commit ID, separated with “:::“. We are pairing this information with the subject line and not the commit ID, because the commit IDs in our branch are going to change! It also collapses subsequent updates into one. NOTE that if you have other commits that start with “Squashed” in their subject line but don’t belong to the subtree, instead filter for Squashed '${SUBDIR}' (beware of the quotes!!).

Now create a little helper script, e.g. as /tmp/rebasehelper:

#!/bin/sh

reset()
{
    TIME_NAME_MAIL_SUBJ=$(git log --format='%at:::%an:::%ae:::%s:::' -1)
    # next commit time is last commit time if not overwritten
    export GIT_AUTHOR_DATE=$(git log --format='%at' -1)
}

cp /tmp/refinserts /tmp/refinserts.actual
reset

while $(grep -q -F "${TIME_NAME_MAIL_SUBJ}" /tmp/refinserts.actual); do

    LINE=$(grep -F "${TIME_NAME_MAIL_SUBJ}" /tmp/refinserts.actual)
    export  GIT_AUTHOR_DATE=$(echo $LINE | awk -F ':::' '{ print $5 }')
    export  GIT_AUTHOR_NAME=$(echo $LINE | awk -F ':::' '{ print $6 }')
    export GIT_AUTHOR_EMAIL=$(echo $LINE | awk -F ':::' '{ print $7 }')
                        REF=$(echo $LINE | awk -F ':::' '{ print $8 }')

    if [ ! -d "${SUBDIR}" ]; then
        echo "** First commit with submodule, initializing..."
        git submodule --quiet add --force --name ${SUBNAME} ${SUBREPO} ${SUBDIR} > /tmp/rebasehelper.log
        [ $? -ne 0 ] && echo "** failed:" && cat /tmp/rebasehelper.log && break

        echo "** done."
    fi

    echo "** Updating submodule..."
    cd "${SUBDIR}"
    git checkout --quiet $REF > /tmp/rebasehelper.log
    [ $? -ne 0 ] && echo "** failed:" && cat /tmp/rebasehelper.log && break

    echo "** Committing changes..."
    cd ${CLONE}
    git commit --quiet -am "[${SUBNAME}] update to $REF" > /tmp/rebasehelper.log
    [ $? -ne 0 ] && echo "** failed:" && cat /tmp/rebasehelper.log && break

    echo "** Continuing rebase."
    rm /tmp/rebasehelper.log
    grep -v -F "${TIME_NAME_MAIL_SUBJ}" /tmp/refinserts.actual > /tmp/refinserts.new
    mv /tmp/refinserts.new /tmp/refinserts.actual
    git rebase --continue
    reset
done

if [ -d "${CLONE}/.git/rebase-merge" ] || [ -d "${CLONE}/.git/rebase-apply" ]; then
    echo "The current rebase step is not related to subtree-submodule operation or needs manual resolution."
    echo "Try 'git mergetool', followed by 'git rebase --continue' or just the latter."
fi

We will call this later.

Filter-Tree

Now we actually remove references to the subtree from our history so that future operations create no conflicts:

git filter-branch --tree-filter 'rm -rf '"${SUBDIR}" HEAD

This may take some time. Other sources recommend --index-filter but that will not work because the file-references in the subtree are not relative to our repository, but to the SUBDIR. If this command doesn’t actually remove the directory, make sure to run rm -rf "${SUBDIR}".
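To convince yourself that the filter did what it should (a quick sanity check, not part of the original recipe): the rewritten history should no longer contain any commit touching the old path, and the directory should be gone from the work tree.

git log --oneline -- "${SUBDIR}"   # expect no output
test -d "${SUBDIR}" && echo "still present" || echo "gone"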

Rebase

Now we do the rebase:

git rebase --interactive $(git log --format='%H' | tail -n 1)

Which will open your commit history in the EDITOR that you configured. NOTE that this is in chronological order, not reverse chronological like the git log command.

In the editor you now want to remove all lines that contain “Squashed” and mark the previous commit for being edited. This is multiline-regex with substitution, but in kwrite this is very straightforward:

Search:  pick(.*\n).*Squashed.*\n
Replace: e \1

This will only miss subsequent “double” updates which can safely be removed:

Search:  .*Squashed.*\n
Replace:

Save and close the editor; you will now be at the first commit that you are editing. This is the original point in time where you added your subtree, so source the helper script with . /tmp/rebasehelper. If there were never any conflicts in your tree, the script should run through completely and you are done. It is important to source the script with a leading . because it sets some environment variables that are also needed for your manual commits.

However, if you did have conflicts, you will be interrupted to resolve these manually, usually with git mergetool and then git rebase --continue. Whenever the rebase tells you “Stopped at…”, just call . /tmp/rebasehelper again and keep repeating the last steps until the rebase is finished. If you are doing the whole thing on multiple branches you might want to use git rerere; it might save you some merge steps.
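Note that git rerere only records and reuses conflict resolutions when it is enabled; if you plan to rely on it across several branches, switch it on before starting the rebases (plain git configuration, independent of this recipe):

# Remember resolved conflicts and replay them automatically next time:
git config rerere.enabled true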

Unfortunately the rebase will set all commit dates to the current date (although the “author date” is preserved). There is an option that prevents exactly this, but it cannot be used together with interactivity, because the interactivity introduces commits with a newer author date (which is okay in itself, but committer dates need to be chronological for rebase). We write this little scriptlet to /tmp/redate to help us out:

#!/bin/sh
git filter-branch --force --env-filter '
    LAST_DATE=$(cat /tmp/redate_old);
    GIT_COMMITTER_DATE=$( (echo $LAST_DATE; echo $GIT_AUTHOR_DATE) | awk '"'"'substr($1, 2) > MAX { MAX = substr($1, 2); LINE=$0}; END { print LINE }'"'"');
    echo $GIT_COMMITTER_DATE > /tmp/redate_old;
    export GIT_COMMITTER_DATE' $@

What it does is set the COMMITTER_DATE to the AUTHOR_DATE unless the COMMITTER_DATE would be older than the last COMMITTER_DATE (which is illegal) — in that case it uses the same COMMITTER_DATE as the previous commit. This ensures correct chronology while still having sensible dates in all cases.

And then we call the script with some pre and post commands:

echo '@0 +0000' > /tmp/redate_old
chmod +x /tmp/redate
/tmp/redate
git log --format='%ad' --date=raw -1 | awk '{ print "@" $1+1 " " $2 }' > /tmp/redate_master

(initialization and saving master’s final timestamp+1 for later)
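Once the redate script has run, you can verify that the committer dates are back in chronological order and track the author dates again (just a verification step, not part of the original recipe):

# With --reverse, the committer-date column (%ci) should never decrease:
git log --reverse --format='%h  %ai  %ci  %s'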

Multiple Branches

For all other affected branches it is now much simpler. First checkout the branch. If your subtree/submodule is large, you might have to delete ${SUBDIR} before that with rm -rf.

Then filter the tree as described above; you might need to add -f (force) to overwrite some backups:

git filter-branch -f --tree-filter 'rm -rf '"${SUBDIR}" HEAD

Followed by a rebase on the already-fixed master branch:

git rebase --interactive master

You should now see all diverging commits, plus all the “Squashed*” commits. If you did make subtree updates on other branches and you want to retain them, then apply the same mechanism as for the master branch. If subtree changes in the other branch are not important, you can just remove all of them from the file.

Now the committer dates in the part of the branch that is identical to master are correct (because we fixed those earlier), but those in the new part have updated commit times. We can just use our previous scriptlet, but this time we initialize with the master’s last timestamp and we only operate on the commits that are actually new:

cp /tmp/redate_master /tmp/redate_old
/tmp/redate master..HEAD

voila, and repeat for the other branches!

Rewiring the tags

Currently all your tags should still be there and also still be valid. Print them to a tmpfile and open them:

git tag -l > /tmp/oldtags
$EDITOR /tmp/oldtags

Review the list of tags and remove all tags that belonged to the submodule or that you don’t want to keep in the new repository from the file.

Now checkout master again, then run this nifty script (read the note below first!):

#!/bin/sh

OLDIFS=$IFS
IFS='
'

TAGS=$(cat /tmp/oldtags)

for TAG_NAME in $TAGS; do
    IFS=$OLDIFS
    TAG_COMMIT=$(git for-each-ref --format="%(objectname)" refs/tags/${TAG_NAME})
    TAG_MESSAGE=$(git for-each-ref --format="%(contents)" refs/tags/${TAG_NAME})

    # the commit that is referenced by the annotated tag in the original branch
    ORIGINAL_COMMIT=$(git log --format='%H' --no-walk ${TAG_COMMIT})
    ORIGINAL_COMMIT_DATE=$(git log --format='%at' --no-walk ${ORIGINAL_COMMIT})
    ORIGINAL_COMMIT_SUBJECT=$(git log --format='%s' --no-walk ${ORIGINAL_COMMIT})
    # the same commit in our rewritten branch
    NEW_COMMIT=$(git log --format='%H:::%at:::%s' | awk -F ':::' '
    $2 == "'"$ORIGINAL_COMMIT_DATE"'" && $3 == "'"$ORIGINAL_COMMIT_SUBJECT"'" { print $1 }')

    # overwrite git environment variables directly:
    export GIT_COMMITTER_DATE=$(git for-each-ref --format="%(taggerdate:rfc)" refs/tags/${TAG_NAME})
    export GIT_COMMITTER_NAME=$(git for-each-ref --format="%(taggername)" refs/tags/${TAG_NAME})
    export GIT_COMMITTER_EMAIL=$(git for-each-ref --format="%(taggeremail)" refs/tags/${TAG_NAME})
    export GIT_AUTHOR_DATE=$GIT_COMMITTER_DATE
    export GIT_AUTHOR_NAME=$GIT_COMMITTER_NAME
    export GIT_AUTHOR_EMAIL=$GIT_COMMITTER_EMAIL

    # add the new tag
    git -c user.name="${GIT_COMMITTER_NAME}" -c user.email="${GIT_COMMITTER_EMAIL}" tag -f -a "${TAG_NAME}" -m "${TAG_MESSAGE}" $NEW_COMMIT

    IFS='
'
done

NOTE that for this to work, all tags must be contained in the master branch (or whichever branch you are currently on). If this is not the case, you need to create individual files with the tags of each branch and run this script repeatedly on the specific file while on the corresponding branch.

NOTE2: This does assume that the COMMITTER_DATE of the tag will fit in at the place it is being added. I am not sure if there are edge cases where one would have to double-check the committer date similarly to how we do it above…

Since your tags are all fixed, now would be a good time to check out some of them and verify that your software builds, passes its unit tests, et cetera. Remember that, in contrast to git subtree, you need to manually reset your submodule via git submodule update after you check out the tag if you actually want the submodule’s revision that belonged to the tag (which is the whole point of our exercise).
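For example (the tag name and build commands are placeholders for whatever your project uses):

git checkout v1.2.3              # hypothetical tag name
git submodule update --init      # reset the submodule to the revision recorded for this tag
make && make check               # or whatever builds and tests your project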

Cleanup

Ok, so the new branches and tags are in place, but the folder is even bigger than before :o Now we get to clean up.

First make sure that absolutely nothing references the old stuff, i.e. delete all tags that you did not change in the previous step with git tag -d. Also make sure that you have no remotes set. If in doubt, open a Git GUI like gitk or gitg with the --all parameter and confirm that only your new trees are listed.
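A quick way to double-check, sketched here (the tag names are placeholders):

git tag -d obsolete-tag-1 obsolete-tag-2   # drop the tags you decided not to keep
git remote remove origin                   # if a remote is still configured
git remote -v                              # should print nothing now
gitk --all &                               # visually confirm that only the new trees are referenced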

Then perform the actual clean-up:

git for-each-ref --format="%(refname)" refs/original/ | xargs -n 1 git update-ref -d
git reflog expire --expire=now --all
git gc --prune=now --aggressive

To double-check, call something like du -c . in your directory and look at the output. You should now see that the big directories are all related to the submodule.

Pushing the changes

Finally we will publish the changes. This is the part that is irreversible. You can take extra precautions by forking your upstream and pushing to the fork first, e.g. if your repository is at https://github.com/alice/myapp make a fork at https://github.com/alicia/myapp or something like that.

To work on alicia:

git remote add alicia git@github.com:alicia/myapp.git

In any case you need to delete, on the remote, all tags that are no longer in your local repository. If you decided to get rid of some branches locally, also remove them on the remote. You can do this via the command line or GitHub’s interface (or GitLab or whatever). The command line for removing a remote tag is:

git push alicia :tagname

Also backup your releases, i.e. save the release messages and any downloads you added somewhere (I don’t have an automatic way for that).

Then force push, including the updated tags:

git push --force --tags alicia master

Repeat the last step for every branch and look at your results! Do a fresh clone of the remote somewhere to verify that everything is right. GitHub should have rewired your releases to the updated tags, but if something went wrong, you can fix it through the web interface.
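A possible sanity check, using the placeholder fork from above:

git clone git@github.com:alicia/myapp.git /tmp/myapp-verify
cd /tmp/myapp-verify
git log --oneline --all | head     # spot-check the rewritten history
git tag -l                         # should match your cleaned-up tag list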

That was easy, right? :-)

If you have any ideas how to simplify this, please feel free to comment (FSFE account required) or reply on Twitter.

Sunday, 24 July 2016

One Liberated Laptop

Elena ``of Valhalla'' | 18:35, Sunday, 24 July 2016

One Liberated Laptop

Image/photo: http://social.gl-como.it/photos/valhalla/image/5a480cd2d5842101fc8975d927d030f3

After many days of failed attempts, yesterday @Diego Roversi finally managed to setup SPI on the BeagleBone White¹, and that means that today at our home it was Laptop Liberation Day!

We took the spare X200, opened it, found the point we had reached in the tutorial on installing libreboot on the X200 (https://libreboot.org/docs/install/x200_external.html), connected all of the proper cables to the clip³ and did some reading tests of the original BIOS.

Image/photo: http://social.gl-como.it/photos/valhalla/image/77e61745d9c43833b7c0a4a919d17222

While the tutorial mentioned a very conservative setting (512 kHz), just for fun we tried to read it at different speeds, and all results up to 16384 kHz were identical, with the first failure at 32784 kHz, so we settled on using 8192 kHz.
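The read tests would have looked roughly like this with flashrom's linux_spi programmer (a sketch: the spidev device path depends on how SPI is wired up on the BeagleBone, and spispeed is given in kHz):

flashrom -p linux_spi:dev=/dev/spidev1.0,spispeed=8192 -r dump1.rom
flashrom -p linux_spi:dev=/dev/spidev1.0,spispeed=8192 -r dump2.rom
sha512sum dump1.rom dump2.rom    # identical checksums mean the reads at this speed can be trusted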

Then it was time to customize our libreboot image with the right MAC address, and that's when we realized that the sheet of paper where we had written it down the last time had been put in a safe place… somewhere…

Luckily we had also taken a picture, and that was easier to find, so we checked the keyboard map², followed the instructions to customize the image (https://libreboot.org/docs/hcl/gm45_remove_me.html#ich9gen), flashed the chip, partially reassembled the laptop, started it up and… a black screen, some fan noise and nothing else.
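For reference, customizing the MAC address per the linked ich9gen instructions goes roughly like this (a sketch: the MAC is a placeholder, and which generated descriptor file you splice in depends on your flash chip size):

./ich9gen --macaddress 00:1f:16:aa:bb:cc
dd if=ich9fdgbe_8m.bin of=libreboot.rom bs=1 count=12k conv=notrunc    # write the descriptor+GbE region into the image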

We tried to reflash the chip (nothing changed), tried the US keyboard image in case it was the better-tested one (same results) and reflashed the original BIOS, just to check that the laptop was still working (it was).

It was lunchtime, so we stopped our attempts. As soon as we started eating, however, we realized that this laptop came with 3GB of RAM, which surely meant "no matching pairs of RAM", so just after lunch we reflashed the first image, removed one DIMM, rebooted and finally saw a gnu-hugging penguin!

We then tried booting some random live USB key (https://tails.boum.org/) we had around (it failed the first time, worked the second and subsequent times with no changes), and then proceeded to install Debian.

Running the installer required some attempts and a bit of duckduckgoing: parsing the isolinux / grub configurations from the libreboot menu didn't work, but in the end it was as easy as going to the command line and running:


linux (usb0)/install.amd/vmlinuz
initrd (usb0)/install.amd/initrd.gz
boot



From there on, it was the usual Debian installation in a well-known environment, and there were no surprises. I've noticed that grub-coreboot is not installed (grub-pc is) and I want to investigate a bit, but rebooting worked out of the box with no issue.
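A quick, Debian-specific way to check which GRUB packages ended up installed (just a sketch):

dpkg -l 'grub*' | grep '^ii'    # list the installed grub-* packages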

The next step will be liberating my own X200 laptop; and if you are around the @Gruppo Linux Como area and need a 16-pin clip, let us know and we may bring everything to one of the LUG meetings⁴

¹ yes, white, and most of the instructions on the interwebz talk about the black, which is extremely similar to the white… except where it isn't

² wait? there are keyboard maps? doesn't everybody just use the us one regardless of what is printed on the keys? Do I *live* with somebody who doesn't? :D

³ the breadboard in the picture is only there for the power supply, the chip on it is a cheap SPI flash used to test SPI on the bone without risking the laptop :)

⁴ disclaimer: it worked for us. it may not work on *your* laptop. it may brick it. it may invoke a tentacled monster, it may bind your firstborn son to a life of servitude to some supernatural being. Whatever happens, it's not our fault.

Saturday, 23 July 2016

A Change of Direction

David Boddie - Updates (Full Articles) | 15:04, Saturday, 23 July 2016

My focus has shifted away from my Android explorations, at least for the time being. I'll probably keep tinkering with that, just to keep reminding myself how to generate Android applications, but there are other projects that are perhaps more worthy of attention.

Embedded Open Modular Architecture

One of these is the Earth-friendly EOMA68 Computing Devices effort on Crowd Supply. This aims to bring Libre Computing to a new audience by using a modular, sustainable approach to the hardware design while committing to using software that respects the user's freedom and privacy. I'm supporting the campaign by pledging for two desktop systems since I believe that we will only get the systems we want if we are prepared to support efforts like this. I also know that Luke, the developer behind the project, will try his very best to make it succeed.


Images of the first CPU card and its housing from the EOMA68 Crowd Supply campaign page.

Another benefit of using EOMA as the basis for this effort is the possibility that the CPU card can be reused in different machines, and that those machines can have different form factors. A lot of excitement around this project is due to the 3D-printed laptop that serves as the flagship device, but it doesn't take much imagination to realise that a swappable CPU card could be useful for more mundane situations involving workstations and desktops. If you have desktop systems set up in different places and need to continue working when you move between them, you should be able to just take the CPU card with you. I know that some employers like the idea of hot-desking (or whatever it's called now) but that usually involves transporting a heavy laptop around and plugging it into a proprietary docking station in each location. The other advantage is that you could potentially use the card in a laptop when you really need computing on the move, and otherwise plug it into a desktop when you don't.

What Will It Run?

With new hardware comes exciting possibilities. It is interesting to consider what kind of software we might run on the first, ARM-based, EOMA68 CPU card included in the Crowd Supply campaign. There are currently two flavours of GNU/Linux planned for the Allwinner A20 CPU card, Debian and Parabola, but Fedora and Devuan are also in the works. There are many other kinds of operating systems in existence and some of those are available under Free Software licenses. Perhaps some of those could also run on that hardware.

Regardless of which operating systems are bundled with the CPU cards, hopefully others will follow, taking advantage of the standard configuration of the card to fine-tune the software and make the user experience as comfortable and hassle-free as possible. The possibilities of building systems around the EOMA standard may also attract makers and users of devices other than laptops and desktops, such as handheld gaming consoles and mobile phones.

All in all, it's an exciting project to follow. Crowd funding will continue until sometime in August, so there's still time to support the campaign. Even if it's not something you would want to use yourself, chances are that there are people you know who may be interested in one or more of its aims, so be sure to let them know about the campaign.

Thursday, 21 July 2016

Transform Public Links to Federated Shares

English – Björn Schießle's Weblog | 09:56, Thursday, 21 July 2016

Transform Public Links to Federated Shares

Transform a public link to a federated share

Creating public links and sending them to your friends is a widely used feature of Nextcloud. If the recipient of a public link also has a Nextcloud or ownCloud account, he can use the “Add to your Nextcloud” button to mount the content over WebDAV on his server. On a technical level all mounted public links use the same token, the one of the public link, to reference the shared file. This means that as soon as the owner removes the public link, all mounts will disappear as well. Additionally, the permissions for public links are limited compared to normal shares: public links can only be shared read-only or read-write. This was the first generation of federated sharing, which we introduced back in 2014.

A year later we introduced the possibility to create federated shares directly from the share dialog. This way the owner can control all federated shares individually and use the same permission set as for internal shares. Both from a user perspective and from a technical point of view, this led to two different ways to create and handle federated shares. With Nextcloud 10 we finally bring them together.

Improvements for the owner

Public Link Converted to a Federated Share

Public link converted to a federated share for bjoern@myNextcloud.net

From Nextcloud 10 on, every mounted link share will be converted to a federated share, as long as the recipient also runs Nextcloud 10 or newer. This means that the owner of the file will see all the users who mounted his public link. He can remove the share for individual users or adjust the permissions. For each share the whole set of permissions can be used, like “edit”, “re-share” and, in the case of folders, additionally “create” and “delete”. If the owner removes the original public link or if it expires, all federated shares created by the public link will still continue to work. For older installations of Nextcloud and for all ownCloud versions the server will fall back to the old behavior.

Improvements for the user who mounts a public link

After opening a public link the user can convert it to a federated share by adding his Federated Cloud ID or his Nextcloud URL

Users who receive a public link and want to mount it on their own Nextcloud have two options. They can use this feature as before and enter the URL of their Nextcloud in the “Add to your Nextcloud” field. In this case they will be redirected to their Nextcloud, have to log in and confirm the mount request. The owner's Nextcloud will then send the user a federated share which he has to accept. It can happen that the user needs to refresh his browser window to see the notification.
Additionally there is a new and faster way to add a public link to your Nextcloud. Instead of entering the URL in the “Add to your Nextcloud” field, you can directly enter your Federated Cloud ID. This way the owner's Nextcloud will send the federated share directly to you and redirect you to your server. You will see a notification about the new incoming share and can accept it. The user also benefits from the new possibilities on the owner's side: the owner can give him more fine-grained permissions and, even more important from the user's point of view, he will not lose his mount if the public link gets removed or expires.

Nextcloud 10 introduces another improvement in the federation area: If you re-share a federated share to a third server, a direct connection between the first and the third server will be created now so that the owner of the files can see and control the share. This also improves performance and the potential error rate significantly, avoiding having to go through multiple servers in between.

Wednesday, 20 July 2016

How many mobile phone accounts will be hijacked this summer?

DanielPocock.com - fsfe | 17:48, Wednesday, 20 July 2016

Summer vacations have been getting tougher in recent years. Airlines cut into your precious vacation time with their online check-in procedures and a dozen reminder messages, there is growing concern about airport security and Brexit has already put one large travel firm into liquidation leaving holidaymakers in limbo.

If that wasn't all bad enough, now there is a new threat: while you are relaxing in the sun, scammers fool your phone company into issuing a replacement SIM card or transferring your mobile number to a new provider and then proceed to use it to take over all your email, social media, Paypal and bank accounts. The same scam has been appearing around the globe, from Britain to Australia and everywhere in between. Many of these scams were predicted in my earlier blog SMS logins: an illusion of security (April 2014) but they are only starting to get publicity now as more aspects of our lives are at risk, scammers are ramping up their exploits and phone companies are floundering under the onslaught.

With the vast majority of Internet users struggling to keep their passwords out of the wrong hands, many organizations have started offering their customers the option of receiving two-factor authentication codes on their mobile phone during login. Rather than making people safer, this has simply given scammers an incentive to seize control of telephones, usually by tricking the phone company to issue a replacement SIM or port the number. It also provides a fresh incentive for criminals to steal phones while cybercriminals have been embedding code into many "free" apps to surreptitiously re-route the text messages and gather other data they need for an identity theft sting.

Sadly, telephone networks were never designed for secure transactions. Telecoms experts have made this clear numerous times. Some of the largest scams in the history of financial services exploited phone verification protocols as the weakest link in the chain, including a $150 million heist reminiscent of Ocean's 11.

For phone companies, SMS messaging came as a side-effect of digital communications for mobile handsets. It is less than one percent of their business. SMS authentication is less than one percent of that. Phone companies lose little or nothing when SMS messages are hijacked so there is little incentive for them to secure it. Nonetheless, like insects riding on an elephant, numerous companies have popped up with a business model that involves linking websites to the wholesale telephone network and dressing it up as a "security" solution. These companies are able to make eye-watering profits by "purchasing" text messages for $0.01 and selling them for $0.02 (one hundred percent gross profit), but they also have nothing to lose when SIM cards are hijacked and therefore minimal incentive to take any responsibility.

Companies like Google, Facebook and Twitter have thrown more fuel on the fire by encouraging and sometimes even demanding users provide mobile phone numbers to "prove they are human" or "protect" their accounts. Through these antics, these high profile companies have given a vast percentage of the population a false sense of confidence in codes delivered by mobile phone, yet the real motivation for these companies does not appear to be security at all: they have worked out that the mobile phone number is the holy grail in cross-referencing vast databases of users and customers from different sources for all sorts of creepy purposes. As most of their services don't involve any financial activity, they have little to lose if accounts are compromised and everything to gain by accurately gathering mobile phone numbers from as many users as possible.


Can you escape your mobile phone while on vacation?

Just how hard is it to get a replacement SIM card or transfer/port a user's phone number while they are on vacation? Many phone companies will accept instructions through a web form or a phone call. Scammers need little more than a user's full name, home address and date of birth: vast lists of these private details are circulating on the black market, sourced from social media, data breaches (99% of which are never detected or made public), marketing companies and even the web sites that encourage your friends to send you free online birthday cards.

Every time a company has asked me to use mobile phone authentication so far, I've opted out and I'll continue to do so. Even if somebody does hijack my phone account while I'm on vacation, the consequences for me are minimal as it will not give them access to any other account or service. Can you and your family members say the same thing?

What can be done?

  • Opt-out of mobile phone authentication schemes.
  • Never give your mobile phone number to web sites unless there is a real and pressing need for them to call you.
  • Tell firms you don't have a mobile phone or that you share your phone with your family and can't use it for private authentication.
  • If you need to use two-factor authentication, only use technical solutions such as smart cards or security tokens that have been engineered exclusively for computer security. Leave them in a locked drawer or safe while on vacation. Be wary of anybody who insists on SMS and doesn't offer these other options.
  • Rather than seeking to "protect" accounts, simply close some or all social media accounts to reduce your exposure and eliminate the effort of keeping them "secure" and updating "privacy" settings.
  • If your bank provides a relationship manager or other personal contact, this can also provide a higher level of security as they get to know you.

See my previous blogs on SMS messaging, security and two-factor authentication, including my earlier blog SMS logins: an illusion of security.

Friday, 15 July 2016

On carrots and sticks: 5G Manifesto

polina's blog | 12:16, Friday, 15 July 2016


In the beginning of May 2016, FSFE together with 72 organisations supported strong net neutrality rules in the joint letter addressed to the EU telecom regulators. The Body of European Regulators of Electronic Communication (BEREC) is currently negotiating the creation of guidelines to implement the recently adopted EU Regulation 2015/2120 on open internet access.

In the joint letter, we together with other civil society organisations urged BEREC and the national agencies to respect the Regulation’s goal to “ensure the continued functioning of the internet ecosystem as an engine of innovation”, respecting the Charter of Fundamental Rights of the EU.

However, on 7 July the European Commission endorsed and welcomed another point of view, presented by the 17 biggest EU Internet Service Providers (ISPs), who oppose the idea of strong net neutrality rules. In the so-called “5G Manifesto”, the coalition of ISPs states the following:

“we must highlight the danger of restrictive Net Neutrality rules, in the context of 5G technologies, business applications and beyond”

“The EU and Member States must reconcile the need for Open Internet with pragmatic rules that foster innovation. The current Net Neutrality guidelines, as put forward by BEREC, create significant uncertainties around 5G return on investment”

Stick

According to the coalition, the Net Neutrality guidelines are “too prescriptive” and as such do not meet the demands of the market and the rapid developments within it. The coalition is calling on the Commission to “take a positive stance on innovation and stick to it”, by allowing network discrimination under the term “network slicing”.

EDRi, one of the leading campaigners for Net Neutrality and a co-signer of the aforementioned letter to BEREC, has strongly criticised the “5G Manifesto”, stating that it includes an “absurd threat not to invest in profitable new technologies”.

The Commission clearly does not see the real implications of endorsing such policies for innovation, especially in the digital sector. Furthermore, the Manifesto effectively presents the EC with a threat: either net neutrality or fast connections, with no middle ground. The Manifesto argues for “network slicing”, justifying the discrimination by pointing to public safety services.

Existing rules on net neutrality do allow traffic management in ‘special cases’: Article 3(3) of the EU Regulation 2015/2120 does not preclude providers of internet access services from implementing reasonable traffic management measures that are transparent, non-discriminatory and proportionate, and based on objectively different technical quality of service requirements of specific categories of traffic. Article 3(5), meanwhile, governs so-called specialised services (i.e. services “other than internet access services”) that ISPs are free to offer. It is difficult to see how these provisions would exclude public safety considerations if they are “objectively” different from a technical quality perspective or need to be offered outside of the open internet. At the same time it is easy to see why ISPs would want to achieve that special status by trying to get this exception as broad as possible.

What BEREC is expected to do is fill the gaps in the legislation by clarifying the implementation of the law, not create new rules. What the Commission is expected to do is “stick” to its existing primary law, including the rules on open internet access and the protection of fundamental rights and freedoms. The latter include the freedom to conduct business, but they do not include the right to maximise profits at the expense of others.

Carrot

What do telcos promise in return? Telcos promise to invest in 5G. Such a promise might be alluring for the Commission, which calls 5G “the most critical building block of the digital society”. The argument that net neutrality slows down the internet is not a new one, and the 5G Manifesto might have hit the Commission’s tender spot. What needs to be acknowledged is that the internet has been operating on the basis of openness since its inception, and all the legislators need to do is safeguard that openness in order to, inter alia, finally achieve the desired 5G. The internet won’t stop evolving just because some service providers want to slice the cake according to their needs.

Net neutrality and the open internet are not a new formula created by legislators in Brussels: they are the basic, fundamental quality of the internet that needs to be preserved to secure further development and future innovations. In conclusion, the EU only needs one “stick” to deliver carrots to everyone: to stick to supporting the open internet for everyone.

The image is licensed under CC BY 3.0 US, Attribution: Luis Prado, from The Noun Project

Tuesday, 12 July 2016

Open Data and OASA

nikos.roussos - opensource | 01:51, Tuesday, 12 July 2016

OASA map

As a regular user of Athens public transport, I often visit the OASA website to look up the available information, especially when I want to use a line I am not familiar with, such as line 227. That link points to the Internet Archive's Wayback Machine, because this feature has since disappeared from the OASA site and has been replaced by a pointer to Google Transit.

Google Transit is a sub-project of Google Maps. The service may be offered free of charge, but it is still a commercial service of a for-profit company, with specific terms of use both for the service and for the data. Like all Google services, it operates as an advertising distribution platform.

For many years now I have consciously stopped using Google Maps and use OpenStreetMap (and applications based on it) instead, for many reasons. Mostly for the same reasons I prefer to read an article on Wikipedia rather than in Britannica. I therefore find it unacceptable that a (still) public organisation pushes me towards a commercial service in order to access data I have already paid to produce. Today I sent the following email to OASA:

Good evening,

Over the last few months, the information (stops, routes, maps) for all bus and trolleybus lines has disappeared from your website (oasa.gr). Instead, the relevant page refers visitors to a commercial service (Google Transit).

As a citizen I would like to know:

  1. How can I find this information through your website, without having to use commercial services (Google Maps, Here Maps, etc.)?

  2. The terms-of-use page states that commercial use of all data (maps, diagrams, lines, routes, etc.) is not permitted. Which data are you referring to, if they are not available through your website in the first place?

  3. The data are offered freely through geodata.gov.gr under a "Creative Commons: Attribution" licence, which allows commercial use. Which of the two actually applies?

  4. If commercial use of the data is indeed not permitted, where is your agreement with Google published, and what is the financial benefit for the organisation?

I don't know whether I will receive a substantive answer, or any answer at all, but the situation remains infuriating. Especially considering that OASA had such a service in operation since 2011 and chose to shut it down, while at the same time promoting a mobile phone application that largely implements the services missing from its website, ignoring citizens who do not own a smartphone.

My view is simple. Data and software produced and implemented with public money should also be public property. This means that citizens should not be forced to use commercial services in order to access public-service data, nor should they have to go through the app stores of specific companies to download a public service's application to their phone. For the same reasons, these applications should be offered as Free Software and their source code should be open, since public money was spent on them.

In this particular case OASA trampled on any notion of a "public" good by favouring one company (Google), offering it free data and advertising while at the same time prohibiting commercial exploitation of the data by its competitors.


Comments and reactions on Diaspora, Twitter, Facebook

Monday, 11 July 2016

EC: Free Software to enhance cybersecurity

polina's blog | 15:38, Monday, 11 July 2016

On 5 July, the European Commission signed a contractual arrangement on a public-private partnership (PPP) for cybersecurity industrial research and innovation between the European Union and the newly-established European Cyber Security Organisation (ECSO). The latter is supposed to represent a wide variety of stakeholders such as large companies, start-ups, research centres, universities, clusters and associations, as well as local, regional and national administrations of the European Member States. The partnership is supposed to trigger €1.8 billion of investment by 2020 under the Horizon 2020 initiative, to which the EU allocates a total budget of up to EUR 450 million.

In the accompanying communication, the Commission identifies the importance of collaborative efforts, transparency and information sharing in the area of cybersecurity. In regard to information sharing, the Commission acknowledged the difficulties businesses face in sharing information about cyberthreats with their peers or authorities, for fear of possible liability for the breach of confidentiality. In this regard the Commission intends to set up an anonymous information exchange in order to facilitate such intelligence sharing.

In addition, the Commission stressed “the lack of interoperable solutions (technical standards), practices (process standards) and EU-wide mechanisms of certification” that are affecting the single market in cybersecurity. There is no doubt that such concerns can be significantly reduced by using Free Software as much as possible. The security advantages of Free Software have also, among others, previously been recognised by the European Parliament in its own-initiative report on the Digital Single Market. Therefore, within the anticipated establishment of the PPP for cybersecurity (cPPP), the Commission highlights that:

In this context, the development of open source software and open standards can help foster trust, transparency and disruptive innovation, and should therefore also be a part of the investment made in this cPPP.

The newly established ECSO, whose role is to support the cPPP, is currently calling for members in different groups. It is currently unclear how the membership will be divided between these groups; however, the stakeholders’ platform is intended to be mostly industry-led.

We hope that the Commission will in practice uphold its plans to include Free Software communities into standardisation processes as has been indicated in several documents throughout the whole Digital Single Market initiative, including but not limited to the area of cybersecurity.
