Planet Fellowship (en)
Sunday, 23 October 2016
Paul Boddie's Free Software-related blog » English | 22:16, Sunday, 23 October 2016
In the context of a fairly recent discussion of Free Software licence enforcement on the Linux Kernel Summit mailing list, where Matthew Garrett defended the right of users to enjoy the four freedoms upheld by the GPL, but where Linus Torvalds and others insisted that upstream corporate contributions are more important and that it doesn’t matter if the users get to see the source code, Jonas Öberg made the remarkable claim that…
So, if we are to understand this correctly, a highly-privileged and famous software developer, whose position on the “tivoization” of hardware was that users shouldn’t expect to have any control over the software running on their purchases, is now seemingly echoing the sentiments of a billionaire monopolist who once said that users didn’t need to see the source code of the programs they use. That particular monopolist stated that the developers of his company’s software would take care of everything and that the users would “rely on us” because the mere notion of anybody else interacting with the source code was apparently “the opposite of what’s supposed to go on”.
Here, this famous software developer’s message is that corporations may in future enrich his and his colleagues’ work so that a device purchaser may, at some unspecified time in the future, get to enjoy a properly-maintained version of the Linux kernel running inside a purchase of theirs. All the purchaser needs to do is to stop agitating for their four freedom rights and instead effectively rely on them to look after everything. (Where “them” would be the upstream kernel development community incorporating supposedly-cooperative corporate representatives.)
Now, note once again that such kernel code would only appear in some future product, not in the now-obsolete or broken product that the purchaser currently has. So far, the purchaser may be without any proper insight into that product – apart from the dubious consolation of knowing that the vendor likes Linux enough to have embedded it in the product – and they may well be left without any control over what software the product actually ends up running. So much for relying on “them” to look after the pressing present-day needs of users.
And even with any mythical future product unboxed and powered by a more official form of Linux, the message from the vendor may very well be that at no point should the purchaser ever feel entitled to look inside the device at the code, to try and touch it, to modify it, improve or fix it, and they should absolutely not try and use that device as a way of learning about computing, just as the famous developer and his colleagues were able to do when they got their start in the industry. So much for relying on “them” to look after the future needs of users.
(And let us not even consider that a bunch of other code delivered in a product may end up violating other projects’ licences because those projects did not realise that they had to “make friends” with the same group of dysfunctional corporations.)
Somehow, I rather feel that Matthew Garrett is the one with more of an understanding of what it is like to be among the 99%: where you buy something that could potentially be insecure junk as soon as it is unboxed, where the vendor might even arrogantly declare that the licensing does not apply to them. And he certainly has an understanding of what the 99% actually want: to be able to do something about such things right now, rather than to be at the mercy of lazy, greedy and/or predatory corporate practices; to finally get the product with all the features you thought your money had managed to buy you in the first place.
All of this ground-level familiarity seems very much in contrast to that of some other people who presumably only “hear” via second- or third-hand accounts what the average user or purchaser supposedly experiences, whose privilege and connections will probably get “them” what they want or need without any trouble at all. Let us say that in this dispute Matthew Garrett is not the one suffering from what might be regarded as “benevolent dictator syndrome”.
The Misrepresentation of Others
And one thing Jonas managed to get taken in by was the despicable and continued misrepresentation of organisations like the Software Freedom Conservancy, their staff, and their activities. Despite the public record showing otherwise, certain participants in the discussion were only too happy to perpetuate the myth of such organisations being litigious, and to belittle those organisations’ work, in order to justify their own hostile and abusive tone towards decent, helpful and good people.
No-one has ever really been forced to choose between cooperation, encouragement, community-building and the pursuit of enforcement. Indeed, the organisations pursuing responsible enforcement strategies, in reminding people of their responsibilities, are actually encouraging companies to honour licences and to respect the people who chose such licences for their works. The aim is ultimately to integrate today’s licence violators into the community of tomorrow as honest, respectable and respectful participants.
Community-building can therefore occur even when pointing out to people what they have been doing wrong. But without any substance, licences would provide only limited powers in persuading companies to do the right thing. And the substance of licences is rooted in their legal standing, meaning that occasionally a licence-violating entity might need to be reminded that its behaviour may be scrutinised in a legal forum and that the company involved may experience negative financial and commercial effects as a result.
Reminding others that licences have substance and requiring others to observe such licences is not “force”, at least not the kind of illegitimate force that is insinuated by various factions who prefer the current situation of widespread licence violation and lip service to the Linux brand. It is the instrument through which the authors of Free Software works can be heard and respected when all other reasonable channels of redress have been shut down. And, crucially, it is the instrument through which the needs of the end-user, the purchaser, the people who do no development at all – indeed, all of the people who paid good money and who actually funded the product making use of the Free Software at risk, whose money should also be funding the development of that software – can be heard and respected, too.
I always thought that “the 1%” were the people who had “got theirs” already, the privileged, the people who promise the betterment of everybody else’s lives through things like trickle-down economics, the people who want everything to go through them so that they get to say who benefits or not. If pandering to well-financed entities for some hypothetical future pay-off while they conduct business as usual at everybody else’s expense is “for the benefit of the 99%”, then it seems to me that Jonas has “the 1%” and “the 99%” the wrong way round.
Saturday, 22 October 2016
Planet Fsfe on Iain R. Learmonth | 18:15, Saturday, 22 October 2016
As I posted yesterday, we released PATHspider 1.0.0. What I didn’t talk about in that post was an event that occurred only a few hours before the release.
Everything was going fine, proofreading of the documentation was in progress, git push with the documentation updates and… CI FAILED!?! Our CI doesn’t build the documentation, it only tests the core code. I’m planning to release real soon and something has broken.
Starting to panic.
    irl@orbiter# ./tests.sh
    ................
    ----------------------------------------------------------------------
    Ran 16 tests in 0.984s

    OK
This makes no sense. Maybe I forgot to add a dependency and it’s been broken for a while? I scrutinise the dependencies list and it all looks fine.
In fairness, probably the first thing I should have done is look at the build log in Jenkins, but I’ve never had a failure that I couldn’t reproduce locally before.
It was at this point that I realised there was something screwy going on. A sigh of relief as I realise that there’s not a catastrophic test failure but now it looks like maybe there’s a problem with the University research group network, which is arguably worse.
Being focussed on getting the release ready, I didn’t realise that the Internet itself was having problems. Unknown to me, a massive DDoS attack against Dyn, a major DNS host, was in progress. After a few attempts to debug the problem, I hardcoded a line into /etc/hosts, still believing it to be a localised issue.
I’ve just removed this line as the problem seems to have resolved itself for now. There are two main points I’ve taken away from this:
- CI failure doesn’t necessarily mean that your code is broken, it can also indicate that your CI infrastructure is broken.
- Decentralised internetwork routing is pretty worthless when the centralised name system goes down.
This afternoon I read a post by [tj] on the 57North Planet, and this is where I learnt what had really happened. He mentions multicast DNS and Namecoin as distributed name system alternatives. I’d like to add some more to that list:
Only the first of these is really a distributed solution.
My idea with ICMP Domain Name Messages is that you send an ICMP message towards a webserver. Somewhere along the path, you’ll hit either a surveillance or a censorship middlebox. These middleboxes could provide value by caching any DNS replies they see, so that an ICMP DNS request message would not be forwarded but would instead be answered from that cache. If the middlebox cannot generate a reply, it can still forward the request to other surveillance and censorship boxes.
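For fun, here is a toy sketch of what such a message might look like on the wire. Everything here is hypothetical: no ICMP domain name message type exists, so the sketch borrows type 253, which RFC 4727 reserves for experimentation, and just demonstrates building a checksummed ICMP-style packet carrying a name as its payload.

```python
import struct

def inet_checksum(data):
    """Standard Internet checksum (RFC 1071): ones'-complement sum
    of 16-bit big-endian words, with carries folded back in."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Entirely hypothetical "ICMP domain name query": type 253 is reserved
# for experimentation by RFC 4727. The header here is abbreviated to
# type, code and checksum; the payload carries the name being queried.
ICMP_TYPE_EXPERIMENT = 253
payload = b"pathspider.net"
header = struct.pack("!BBH", ICMP_TYPE_EXPERIMENT, 0, 0)
csum = inet_checksum(header + payload)
packet = struct.pack("!BBH", ICMP_TYPE_EXPERIMENT, 0, csum) + payload

# A packet whose checksum field is filled in correctly re-checksums
# to zero, which is how a middlebox would validate it.
print(inet_checksum(packet))  # 0
```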
I think this would be a great secondary use for the NSA and GCHQ boxen on the Internet: it clearly fits within the scope of “defending national security”, as the Internet is kinda dead if DNS is down, plus it’d make it nice and easy to find the boxes with PATHspider.
Friday, 21 October 2016
Planet Fsfe on Iain R. Learmonth | 23:46, Friday, 21 October 2016
In today’s Internet we see an increasing deployment of middleboxes. While middleboxes provide in-network functionality that is necessary to keep networks manageable and economically viable, any packet mangling — whether essential for the needed functionality or accidental as an unwanted side effect — makes it more and more difficult to deploy new protocols or extensions of existing protocols.
For the evolution of the protocol stack, it is important to know which network impairments exist and potentially need to be worked around. While classical network measurement tools are often focused on absolute performance values, PATHspider performs A/B testing between two different protocols or different protocol extensions to perform controlled experiments of protocol-dependent connectivity problems as well as differential treatment.
PATHspider is a framework for performing and analyzing these measurements, while the actual A/B test can be easily customized. Late on the 21st October, we released version 1.0.0 of PATHspider and it’s ready for “production” use (whatever that means for Internet research software).
Our first real release of PATHspider was version 0.9.0, which came just in time for the presentation of PATHspider at the 2016 Applied Networking Research Workshop, co-located with IETF 96 in Berlin earlier this year. Since then we have made a lot of changes and I’ll talk about some of the highlights here (in no particular order):
Switch from twisted.plugin to straight.plugin
While we anticipate that some plugins may wish to use features of Twisted, we didn’t want to have Twisted as a core dependency for PATHspider. We found that straight.plugin was not just a drop-in replacement: it also simplified the way in which 3rd-party plugins could be developed, and it was worth the effort for that alone.
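The pattern that straight.plugin supports can be illustrated with a stdlib-only sketch. The class and plugin names below are made up for illustration and are not PATHspider’s real API; in a real deployment the subclasses would live in separate modules under a namespace package and be discovered with straight.plugin’s load():

```python
class SpiderPlugin:
    """Hypothetical base class that third-party plugins subclass."""
    name = None

    def run(self):
        raise NotImplementedError

# In a real deployment these subclasses would each live in their own
# module; straight.plugin's load("myapp.plugins", subclasses=SpiderPlugin)
# would import the namespace and collect them. Defining them inline
# keeps this sketch self-contained.
class ECNPlugin(SpiderPlugin):
    name = "ecn"
    def run(self):
        return "testing ECN negotiation"

class DSCPPlugin(SpiderPlugin):
    name = "dscp"
    def run(self):
        return "testing DiffServ code points"

def discover_plugins(base):
    # Collect every direct subclass of the base, keyed by name.
    return {cls.name: cls for cls in base.__subclasses__()}

plugins = discover_plugins(SpiderPlugin)
print(sorted(plugins))          # ['dscp', 'ecn']
print(plugins["ecn"]().run())   # testing ECN negotiation
```

The appeal of this style is that a third-party package only has to place a subclass in the right namespace; no registration code is needed.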
Library functions for the Observer
PATHspider has an embedded flow-meter (think of something like NetFlow, but highly customisable). We found that, even with the small selection of plugins we had, we were duplicating code across plugins for these customisations of the flow-meter. In this release we now provide library functions for common needs, such as identifying TCP 3-way handshake completions or identifying ICMP Unreachable messages for flows.
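As an illustration of the kind of helper this describes (the function below is a toy, not PATHspider’s actual observer API), here is a minimal sketch of spotting a completed TCP three-way handshake in a stream of observed packets:

```python
# TCP flag bits as they appear in the TCP header.
SYN, ACK = 0x02, 0x10

def handshake_completed(packets):
    """Return True if the packet stream for one flow contains
    SYN, then SYN+ACK, then a bare ACK, in order.

    Each packet is (direction, flags): direction 0 = client to server,
    1 = server to client. This is an illustrative toy, not the real
    PATHspider observer interface.
    """
    state = 0  # 0: want SYN, 1: want SYN+ACK, 2: want ACK, 3: done
    for direction, flags in packets:
        if state == 0 and direction == 0 and flags & SYN and not flags & ACK:
            state = 1
        elif state == 1 and direction == 1 and flags & SYN and flags & ACK:
            state = 2
        elif state == 2 and direction == 0 and flags & ACK and not flags & SYN:
            state = 3
    return state == 3

flow = [(0, SYN), (1, SYN | ACK), (0, ACK), (0, ACK)]
print(handshake_completed(flow))        # True
print(handshake_completed([(0, SYN)]))  # False
```

Factoring logic like this into a library means every plugin that cares about connectivity gets the same, tested definition of “the connection succeeded”.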
New plugin: DSCP
We’ve added a new plugin for this release to detect breakage when using DiffServ code points to achieve differentiated services within a network.
Plugins are now subcommands
Using the subparsers feature of argparse, all plugins, including 3rd-party plugins, will now appear as subcommands of the PATHspider command. This makes every plugin a first-class citizen and makes PATHspider truly generalised.
An added benefit is that plugins can also ask for extra arguments that are specific to their needs; for example, the DSCP plugin allows the user to select which code point to send for the experimental test.
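The argparse pattern described above looks roughly like this; the plugin and option names are illustrative rather than PATHspider’s exact command line:

```python
import argparse

parser = argparse.ArgumentParser(prog="pathspider")
subparsers = parser.add_subparsers(dest="plugin", required=True)

# Each plugin registers itself as a subcommand and may declare
# arguments specific to its own needs.
ecn = subparsers.add_parser("ecn", help="test ECN negotiation")

dscp = subparsers.add_parser("dscp", help="test DiffServ code points")
dscp.add_argument("--codepoint", type=int, default=46,
                  help="DSCP value to send for the experimental test")

args = parser.parse_args(["dscp", "--codepoint", "46"])
print(args.plugin, args.codepoint)  # dscp 46
```

Because each subparser owns its options, a plugin-specific flag like --codepoint never clutters the help output of the other plugins.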
Test suite
PATHspider now has a test suite! As the size of the PATHspider code base grows, we need to be able to make changes and have confidence that we are not breaking code that another module relies on. We have so far only achieved 54% coverage of the codebase, but we hope to improve this for the next release. We have focussed on the critical portions of data collection to ensure that all the results collected by PATHspider during experiments are valid.
DNS Resolver Utility
Back when PATHspider was known as ECNSpider, it had a utility for resolving IP addresses from the Alexa top 1 million list. This utility has now been fully integrated into PATHspider and appears as a plugin, allowing for repeated experiments against the same IP addresses even if the DNS resolver would have returned a different address.
Documentation
Documentation is definitely not my favourite activity, but it has to be done. PATHspider 1.0.0 now ships with documentation covering command-line usage, input/output formats and development of new plugins.
If you’d like to check out PATHspider, you can find the website at https://pathspider.net/.
Debian packages will be appearing shortly and will find their way into stable-backports within the next 2 weeks (hopefully).
Current development of PATHspider is supported by the European Union’s Horizon 2020 project MAMI. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 688421. The opinions expressed and arguments employed reflect only the authors’ view. The European Commission is not responsible for any use that may be made of that information.
Thursday, 20 October 2016
DanielPocock.com - fsfe | 07:25, Thursday, 20 October 2016
Interns, and anybody who decides to start using the project (it is already functional for command line users) need to decide about purchasing various pieces of hardware, including a smart card, a smart card reader and a suitably secure computer to run the clean room image. It may also be desirable to purchase some additional accessories, such as a hardware random number generator.
If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.
Choice of smart card
For standard PGP use, the OpenPGP card provides a good choice.
For X.509 use cases, such as VPN access, there is a range of choices. I recently obtained one of the SmartCard HSM cards; Card Contact were kind enough to provide me with a free sample. An interesting feature of this card is its Elliptic Curve Cryptography (ECC) support. More potential cards are listed on the OpenSC page here.
Choice of card reader
The technical factors to consider are most easily explained with a table:
| | On disk | Smartcard reader without PIN-pad | Smartcard reader with PIN-pad |
|---|---|---|---|
| Software | Free/open | Mostly free/open; proprietary firmware in reader | Mostly free/open; proprietary firmware in reader |
| Key extraction | Possible | Not generally possible | Not generally possible |
| Passphrase compromise attack vectors | Hardware or software keyloggers, phishing, user error (unsophisticated attackers) | Hardware or software keyloggers, phishing, user error (unsophisticated attackers) | Exploiting firmware bugs over USB (only sophisticated attackers) |
| Other factors | No hardware | Small, USB key form-factor | Largest form factor |
Some are shortlisted on the GnuPG wiki and there has been recent discussion of that list on the GnuPG-users mailing list.
Choice of computer to run the clean room environment
There are a wide array of devices to choose from. Here are some principles that come to mind:
- Prefer devices without any built-in wireless communications interfaces, or where those interfaces can be removed
- Even better if there is no wired networking either
- Particularly concerned users may also want to avoid devices with opaque micro-code/firmware
- Small devices (laptops) that can be stored away easily in a locked cabinet or safe to prevent tampering
- No hard disks required
- Having built-in SD card readers or the ability to add them easily
SD cards and SD card readers
The SD cards are used to store the master private key, used to sign the certificates/keys on the smart cards. Multiple copies are kept.
It is a good idea to use SD cards from different vendors, preferably not manufactured in the same batch, to minimize the risk that they all fail at the same time.
For convenience, it would be desirable to use a multi-card reader, although the software experience will be much the same if lots of individual card readers or USB flash drives are used.
One additional idea that comes to mind is a hardware random number generator (TRNG), such as the FST-01.
Tuesday, 18 October 2016
André on Free Software » English | 19:19, Tuesday, 18 October 2016
From 2 to 4 September I was in Berlin to participate in the first FSFE Summit and in the 15th anniversary celebration. Thanks to the FSFE I met interesting people, discovered surprising technologies and heard inspiring talks from people of all walks of life. It was an honour to speak about translating for Free Software.
In the BCC I attended the following:
- Keynote 1: How social activists are using open data
- FSFE Opening
- Hitobito – a Free Software Community Solution for Everybody
- Strategic use of Free Software at Siemens
- How to support activists using Free Software
- The POWER is open. How Open POWER is changing the game
- Free and open source Software in Europe: policies and implementations
- Keynote 2: Cultivating Empathy
- FSFE’s edu-team: past, challenges and future
- How to write the perfect press release
- Group photo
- Free Software 1+1: Explaining typical misunderstandings
- Advocacy for Free Software – how do we do it?
- FSFE 15 years celebration
- FSFE Wiki Caretakers
- Freedomvote: Bring Ask Your Candidates to the web
- Plussy Display: An FSFE Booth Attractor
- Free Software and the Network Effect: fight it or ride it?
- SHOW EUROPE
- After the summit is before the summit?
- FSFE Community
- Keynote 3 – Software as a public service
Let’s enable people to control technology in their own language
Cryptie and I gave a talk about translating for Free Software, titled “Let’s enable people to control technology in their own language“.
FSFE Birthday party
FSFE had it’s 15th birthday party in c-base, which ensured the event to be future compatible. With other members of the movement I was declared a “FSFE local hero”, for which I’m very thankful to the FSFE.
With special thanks to Erik Albers and Cellini Bedi, who used their skills to organise a very positive, inspiring and memorable experience.
Sunday, 16 October 2016
David Boddie - Updates (Full Articles) | 21:14, Sunday, 16 October 2016
In my previous article about Inferno, I mentioned that I was planning to write some notes about installing and using Inferno in a Wiki attached to a fork/clone of the official Mercurial repository for Inferno. I got a bit distracted by other projects and felt that I couldn't write about this until I'd managed to get those projects out of the way. Now I've found the time to tidy up the scripts I wrote and to write one or two things about them. The aim has been to create a hard disk image containing a suitable kernel that will boot up successfully and set up a user environment, ideally graphical, though a shell is enough to begin with.
Creating a Bootable Kernel
I began with Brian Stuart's How to Build a Native Inferno Kernel for the PC, which describes how to build a kernel for booting from a floppy disk image. Really, I wanted to build a hard disk image, but this was a good place to start. I wrote some instructions based on the above guide, but using some helper scripts of my own that I put in the tools directory of the Wiki repository.
Building a kernel is quite straightforward once you realise that the process is done from within the os/pc subdirectory of the inferno-os distribution. Since Inferno comes from the branch of the Unix family tree where it is customary to build your own compilers as part of building your operating system, actually compiling a kernel is a fairly smooth process. Of course, we are relying on the GCC toolchain to bootstrap this process, which also works smoothly.
Installing a Bootable Kernel
Brian Stuart wrote another guide, Installing Native Inferno on a PC, which describes how to install a pre-built disk image from a CD-based installer. I considered building a bootable CD like this, but that looked like a long diversion involving tools I had no experience with. I wondered why an installer like this was needed: my plan was to try to do everything that it would do in the hosted environment that you can run under GNU/Linux. At least, I wanted to be able to create a disk image with partitions, format the file systems and copy a pre-built kernel into it. However, I found that the disk formatting and partitioning tools behave differently in the hosted environment with disk image files from the way they behave on a real system (or one running in qemu) with real disks to manipulate.
One potential solution seemed to be to partition and format a disk image under Linux and then mount the partitions in the hosted environment, but I failed to find a way to mount a file as a disk image using an offset into the file (the equivalent of mount with the offset option under Linux). In addition, while I could create a FAT partition under Linux and copy the kernel into it, a difference between the format mkfs.vfat generates and the one expected by the bootloader used for Plan 9 and Inferno meant that the kernel was never loaded. So, it seemed easiest to build the kernel under Linux, handle the disk partitioning and formatting under Inferno running in qemu, and somehow copy the files into the ready-made partitions.
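For what it’s worth, the byte offset that mount’s offset option expects can be read straight out of the MBR partition table. A small sketch, run here against a synthetic partition table rather than a real disk image:

```python
import struct

SECTOR = 512

def partition_offsets(mbr):
    """Return the byte offset of each non-empty primary partition
    from a 512-byte MBR. The partition table starts at offset 446;
    each 16-byte entry stores the starting LBA and sector count as
    little-endian 32-bit values at bytes 8-11 and 12-15."""
    offsets = []
    for i in range(4):
        entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
        lba_start, num_sectors = struct.unpack_from("<II", entry, 8)
        if num_sectors:
            offsets.append(lba_start * SECTOR)
    return offsets

# Build a synthetic MBR with one partition starting at sector 63.
mbr = bytearray(512)
struct.pack_into("<II", mbr, 446 + 8, 63, 20480)
mbr[510:512] = b"\x55\xaa"  # boot signature

print(partition_offsets(bytes(mbr)))  # [32256]
# ...which is the value you would pass under Linux as:
#   mount -o loop,offset=32256 disk.img /mnt
```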
I briefly experimented with creating a second hard disk image so that Inferno in qemu could just copy files from there into the installation image but that didn't go as planned – the supported file systems either couldn't handle large partitions (FAT16) or had other issues.
In the end, given the problems with mounting sections of disk images in hosted Inferno, the installation scripts do some chopping up and stitching together of disk images. I'm not very satisfied with this solution but it seems to work. The installation process now looks roughly like this:
- In Linux:
- Build a hosted environment. This gives us the Inferno compiler suite that we run under Linux and tools that we use in the hosted environment.
- Build a minimal kernel and running environment for the PC.
- Boot the kernel in qemu, supplying a hard disk image so that a script in the running environment can install a master boot record, partition the disk and format the FAT and kfs subpartitions. Exit qemu.
- Extract the FAT and kfs subpartitions from the disk image and copy them into the hosted environment where they can be manipulated.
- Build a kernel with more features for the final installation and copy it into the hosted environment.
- In the hosted Inferno environment:
- Mount the subpartition image files.
- Copy the fully-featured kernel into the FAT subpartition.
- Install all the standard files from an Inferno distribution into the kfs subpartition.
- In Linux:
- Stitch the subpartitions back together to make a final disk image.
- Boot the disk image in qemu and test it.
I've put some basic instructions for creating a fairly simple disk image in the Wiki mentioned above. In theory, it should build a 256 MB disk image containing a 32-bit x86 kernel and user space, ready to boot in qemu. I dug around in the configuration files and enabled support for the rtl8139 network card so that the installation can potentially do something useful, but I couldn't get VGA support working. That's something that will have to wait for a future revision of the tools.
I wandered away from Inferno for a while and came back to publish these scripts. I'm interested in seeing if it's possible to bring up a natively-running desktop environment if I can get some hints about how to initialise VGA support properly. Since Plan 9's desktop can be run on native hardware, it would be a shame if Inferno couldn't do the same. In the meantime there are always other projects to look into.
Category: Free Software
Norbert Tretkowski | 04:00, Sunday, 16 October 2016
Since upgrading my Nexus 9 tablet to the latest snapshot of CyanogenMod 13.0, rotating the lockscreen no longer worked. Rotating the homescreen worked fine. I found the solution on XDA Developers in the Nexus 6 forum:
Just add these lines to the
After a reboot, rotating the lockscreen works fine again.
Thursday, 13 October 2016
Hook’s Humble Homepage | 07:00, Thursday, 13 October 2016
After many years of simply ignoring and avoiding the option to learn how to drive, a few months ago I gave in and started taking driving lessons.
This morning I finally took my driving exam and passed.
Which means that I can now terrorise both land and sea! Mwahahaha …
capt. hook out → sipping Kusmi’s Tsarevna to catch up on not being able to during the dark, cold mornings I had to drive in the driving school
Tuesday, 11 October 2016
DanielPocock.com - fsfe | 18:54, Tuesday, 11 October 2016
If you are interested in helping as either an intern or mentor, please follow the instructions there to make contact.
Even if you can't participate, if you have the opportunity to promote the topic in a university or any other environment where potential interns will see it, please do so as this makes a big difference to the success of these programs.
DanielPocock.com - fsfe | 18:53, Tuesday, 11 October 2016
The project could involve anything related to SIP, XMPP, WebRTC or peer-to-peer real-time communication, as long as it emphasizes a specific feature or benefit for the Debian community. If other Outreachy organizations would also like to have a Free RTC project for their community, then this could also be jointly mentored.
Monday, 10 October 2016
DanielPocock.com - fsfe | 19:25, Monday, 10 October 2016
There is increasing interest in computer security these days and more and more people are using some form of PKI, whether it is signing Git tags, signing packages for a GNU/Linux distribution or just signing your emails.
Back in April, I started discussing the PGP Clean Room idea (debian-devel discussion and gnupg-users discussion), created a wiki page and started development of a script to build the clean room ISO using live-build on Debian.
Keeping the master keys completely offline and putting subkeys onto smart cards and other devices dramatically lowers the risk of mistakes and security breaches. Using a read-only DVD to operate the clean-room makes it convenient and harder to tamper with.
Trying it out in VirtualBox
It is fairly easy to clone the Git repository, run the script to create the ISO and boot it in VirtualBox to see what is inside:
At the moment, it contains a number of packages likely to be useful in a PKI clean room, including GnuPG, smartcard drivers, the lightweight pki utility from StrongSWAN and OpenSSL.
Ready to use today
More confident users will be able to build the ISO and use it immediately by operating all the utilities from the command line. For example, you should be able to fully configure PGP smart cards by following this blog from Simon Josefsson.
The ISO includes some useful scripts, for example, create-raid will quickly partition and RAID a set of SD cards to store your master key-pair offline.
To make PGP accessible to a wider user-base and more convenient for those who don't use GnuPG frequently enough to remember all the command line options, it would be interesting to create a GUI, possibly using python-newt to create a similar look-and-feel to popular text-based installer and system administration tools.
If you are keen on this project and would like to discuss it further, please come and join the new pki-clean-room mailing list and feel free to ask questions or share your thoughts about it.
Being Fellow #952 of FSFE » English | 15:12, Monday, 10 October 2016
Yeah, I know it is already more than a month but I wanted to get this out for future reference. Here are some notes from my FSFE summit experience:
I couldn’t come any earlier so I arrived in Berlin on Friday night at about 1:00 (Saturday morning), got some hours of sleep and found my way to the Congress Center about two hours before my own talk. This allowed me to see most of “Cultivating Empathy“, an important talk that did not provide me with too much news, but I left it with the intention to read more fiction again (which I actually did since then!).
The room where I gave my own talk, “FSFE’s edu-team - past, challenges and future”, had no recording equipment installed, but I can tell you that I had a very interested audience and that I talked to a lot of people about Free Software in education afterwards and on the next day.
Due to the many conversations after the talk, I missed ”FSFE Legal Team & Legal Network – Yesterday, Today and Tomorrow“.
What I did manage to see was:
- FSFE IT Infrastructure Presentation and Feedback BoF
- Advocacy for Free Software - how do we do it?
- Let’s enable people to control technology in their own language
- FSFE 15 YEARS CELEBRATION
- Free software and computer-aided research
- Software Heritage - the Universal Archive of Free Software
- Women in Open Source
- GCompris goes Qt Quick
And then I had to rush to catch my train back home already…
Anyway, it was a great event with lots of nice conversations. Can’t wait until the next summit!
Saturday, 08 October 2016
Norbert Tretkowski | 22:00, Saturday, 08 October 2016
More plugins will follow.
Tuesday, 04 October 2016
Planet Fsfe on Iain R. Learmonth | 16:30, Tuesday, 04 October 2016
Once a month I am involved in running an informal session, loosely affiliated with the Open Rights Group and the FSFE, called Cryptonoise. Cryptonoise explores methods for protecting your digital rights, with a particular focus on privacy, and provides a venue for like-minded people to meet up and discuss the state of the digital landscape and those who may try to infringe on the rights of digital citizens.
We’ve all made it easy for large enterprises and governments to collect masses of data about our online activities because we perform most of those activities in the same place. Facebook, Google and Twitter spring to mind as examples of companies that have grown to dangerous sizes with little competition. This is not paranoia. This is real. We make it a lot more difficult when we spread out.
Our meetups are held at 57North Hacklab and at the last meetup on the 29th September I set up a GNU Social instance for the members of 57North. GNU Social provides the same functionality as Twitter but as a decentralised federated network.
Federation is a feature that is found in protocols like E-Mail, XMPP and SIP. It doesn’t matter which server you’re using, you can still talk to all the other users on all the other servers. While I’m using social.57north.org.uk I can still follow FSF on status.fsf.org, for example, with no prior coordination with system administrators or anything complicated. It all just works.
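How does one server find an account on another without prior coordination? Federated networks like GNU Social typically bootstrap discovery with WebFinger (RFC 7033): given an address like user@host, a client asks the host’s well-known endpoint where the account’s profile and feeds live. A minimal sketch in Python (the account address is just an example, not a statement about any particular server):

```python
# Sketch of WebFinger discovery (RFC 7033), the mechanism federated
# services like GNU Social use to locate a remote account.
from urllib.parse import quote

def webfinger_url(acct):
    """Build the WebFinger lookup URL for an address like 'user@example.org'."""
    user, _, host = acct.partition("@")
    if not user or not host:
        raise ValueError("expected an address of the form user@host")
    return ("https://%s/.well-known/webfinger?resource=%s"
            % (host, quote("acct:" + acct, safe="")))

url = webfinger_url("fsf@status.fsf.org")
print(url)
# A real client would now fetch this URL and read the JRD document it
# returns to find the account's profile page and subscription endpoints.
```

This is why no system administrator has to be involved: the lookup is derived entirely from the address itself.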
People have pointed out that I’ve just introduced another point of centralisation but I don’t see it necessarily as a bad thing. I think too many users in a single service starts to look dangerous but as long as user counts don’t go too high I believe that the benefits of sharing the administrative workload (performing updates, monitoring, keeping the TLS cert current, etc.) far outweigh the effects of having a few extra users. I think 100 is probably about the maximum number I would be comfortable with, although I’ll admit I’ve not based this on anything and it’s chosen arbitrarily.
The jabber.ccc.de server is an example of a service that grew too large. It was set up by members of CCC and made available to all, but ended up becoming the de facto service for hackers. The jabber.ccc.de team have made appeals for others to set up their own servers, and you should. For a great overview and guide for setting up your own real-time communication servers, check out the RTC Quick Start Guide.
This work was enabled by Shell, who had previously set up a Central Authentication Service, which GNU Social already had a plugin to use as the authentication backend. No one likes to have a whole load of different passwords for different services, and integration with this allows for identities to be consistent across the 57North services. She has also set up a Matrix homeserver, another step towards decentralisation and an end of reliance on centralised giants.
If you have an account on a GNU Social instance, you can follow me here.
Monday, 03 October 2016
Being Fellow #952 of FSFE » English | 13:19, Monday, 03 October 2016
At the beginning of September, there was not only the first FSFE summit in Berlin: just three days later, our group met in the “Coworking Zentrale” to update the Free Software handouts we adapted to our needs last year from the group in Munich.
It was a very productive meeting and it had to be as we needed current flyers for the Rotlintstraßenfest on the upcoming Saturday.
The spot we were assigned to turned out to be less fortunate than last year’s, as we couldn’t manage to organize an electrical power supply to show the FSFE15 video in a loop, with subtitles in various languages, or to power our outdoor Freifunk router. Well, our mission was to talk to people anyway, so this wasn’t really a big deal.
At first, I thought there were not as many conversations with people as last year, but then I realized that we had more helpers at the booth, which was the main reason I was able to get something to eat and take short breaks, something that wasn’t possible last year. Thanks to everyone involved, from working on the handouts to providing hints and feedback to staffing the booth!
It started as a busy month and it continued to be one. Hence the late and very short report of our activities. A posting about the Summit is also still in the queue… The next meeting is already around the corner (next Wednesday) in Mainz.
Well, that’s it for now. See you next time in Mainz! Further details about our meetings can be found in the Wiki.
Sunday, 02 October 2016
free software - Bits of Freedom | 14:57, Sunday, 02 October 2016
This article will be about OTRS, a ticket system we're using at the FSFE for handling things like swag orders, internship applications and so on. But it could actually be about any software. OTRS just happened to be in the line of fire this time.
This will be an example of how to (not) manage user expectations. You may know the principle of least astonishment, and this will be a typical example of where it fails. The problem is in how a program communicates (or fails to communicate) to the user what it will do based on some input.
The design principle of least astonishment simply means you should aim to design your software in such a way that what the user expects to happen when performing a certain operation actually does happen. If something else happens, that’s bad design.
I got foiled twice today by bad design. The first situation was setting up notifications that were supposed to trigger when a ticket (such as a swag order) was updated with new information. The notification system allowed me to configure a Ticket Filter to limit the notification to only some tickets. So I could, for example, limit the notification to tickets with a priority of normal.
But wait, what if I have a situation where the ticket priority is changed from low to normal? Does the Ticket Filter consider the existing ticket or the changed ticket? It’s not obvious which it is, and even reading the on-page instructions or the manual failed to give any clear insight. It took an experiment to determine that it’s the latter.
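In other words, the filter is evaluated against the ticket as it looks after the update, not before. A small Python sketch of the semantics I observed (the field names are illustrative, not actual OTRS internals):

```python
def filter_matches(ticket_filter, ticket_after_update):
    """Return True if every filter field matches the *updated* ticket.

    The Ticket Filter appears to be evaluated against the ticket state
    after the change, so a ticket bumped from 'low' to 'normal' priority
    does match a filter requiring priority 'normal'.
    """
    return all(ticket_after_update.get(field) == value
               for field, value in ticket_filter.items())

# A ticket whose priority was just raised from 'low' to 'normal':
updated = {"priority": "normal", "queue": "Swag"}
print(filter_matches({"priority": "normal"}, updated))  # True
```

Had the filter been evaluated against the ticket before the update, the same call would have returned False, which is exactly the ambiguity the documentation should have resolved.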
The second situation occurred when setting up a notification which should send an email to the original user (called a customer in OTRS terminology). There's a neat setting in the notification configuration which says to send to Customer of the ticket, so naturally I choose this.
Imagine my surprise when those notifications started coming to me, instead of the customer. Turns out (and I actually needed to read the code to figure this out) that when it says Customer of the ticket, it doesn't actually mean the customer.
Wait, this is confusing. Yes, yes it is. Each ticket has a field called "Customer ID" which in our case is almost always the e-mail address of the original user, the person ordering swag or applying for an internship. So you might think that the Customer of the ticket is the e-mail address listed in the ticket.
Not so. The Customer of the ticket is actually the sender OR the recipient of the last e-mail in the ticket. Consider a ticket containing these three e-mails:
From: Jane User <firstname.lastname@example.org>
To: email@example.com
Subject: Application for internship

I’m applying, see my attachment for details!

From: Office <firstname.lastname@example.org>
To: Jane User <email@example.com>
Subject: Thank you for your application

Thank you for your application, we’ll review soon.

From: Office <firstname.lastname@example.org>
To: Joe Coworker <email@example.com>, My Assistant <firstname.lastname@example.org>
Subject: Re: Application for internship

Hey, I think this application looks really good, maybe we should accept Jane?
You might think that the Customer of the ticket is Jane, and the fact that the first notification (an automatic reply) goes to Jane is a clear indication of this. However, in some cases, the third email will cause the behavior to change. The system will look through the e-mails in the ticket, determine that the last e-mail is #3, and since this email is sent from the system address, the recipients of that email (Joe and My) must be the Customer.
So rather than sending a notification to Jane, any future notifications going to the Customer will be sent to Joe and My. Until someone sends an email to Jane, after which any future notifications will again go to Jane.
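The behaviour described above can be sketched roughly like this (my reconstruction of the observed logic, not OTRS’s actual code; the addresses are illustrative):

```python
def customer_of_ticket(emails, system_address):
    """Guess the 'Customer of the ticket' the way OTRS appears to.

    emails: list of (sender, recipients) tuples, oldest first.
    If the last e-mail in the ticket was sent *from* the system address,
    its recipients are treated as the customer; otherwise the sender is.
    """
    sender, recipients = emails[-1]
    if sender == system_address:
        return recipients
    return [sender]

thread = [
    ("jane@example.org", ["office@example.com"]),    # the application
    ("office@example.com", ["jane@example.org"]),    # automatic reply
    ("office@example.com",                           # internal discussion
     ["joe@example.com", "assistant@example.com"]),
]
# Because the last mail came from the system address, its recipients
# (the coworkers) are now treated as the Customer, not Jane.
print(customer_of_ticket(thread, "office@example.com"))
```

Note how the result flips back to Jane as soon as she sends another mail, which is exactly the surprising, state-dependent behaviour described above.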
Triggering this behavior required a few other things to go wrong too, but nothing I would consider strange or out of the ordinary. Settings which seem pretty logical to make, but which, in the end, caused a completely different result than what I was led to believe they would.
Follow the principle of least astonishment and your users will be happy.
Friday, 30 September 2016
Bits from the Basement | 04:23, Friday, 30 September 2016
At the end of August 2012, I announced my Early Retirement from HP. Two years later, my friend and former boss Martin Fink successfully recruited me to return to what later became Hewlett Packard Enterprise, as an HPE Fellow working on open source strategy in his Office of the CTO.
I'm proud of what I was able to accomplish in the 25 months since then, but recent efforts to "simplify" HPE actually made things complicated for me. Between the announcement in late June that Martin intended to retire himself, and the two major spin-merger announcements involving Enterprise Services and Software... well...
The bottom line is that today, 30 September 2016, is my last day at HPE.
My plan is to "return to retirement" and work on some fun projects with my wife now that we are "empty nesters". I do intend to remain involved in the Free Software and open hardware worlds, but whether that might eventually involve further employment is something I'm going to try and avoid thinking about for a while...
There is a rocket launch scheduled nearby this weekend, after all!
Saturday, 24 September 2016
Planet Fsfe on Iain R. Learmonth | 22:03, Saturday, 24 September 2016
Around a week ago, I started to play with programmatically controlling Azure. I needed to create and destroy a bunch of VMs over and over again, and this seemed like something I would want to automate once instead of doing manually and repeatedly. I started to look into the Python SDK for Azure and mentioned that I wanted to look into this. ardumont from Software Heritage noticed, and was planning to work on the same thing. We joined forces and started a packaging team for Azure-related software.
I spoke with the upstream developer of the azure-sdk-for-python and he pointed me towards azure-cli. It looked to me that this fit my use case better than the SDK alone, as it had the high level commands I was looking for.
Between me and ardumont, in the space of just under a week, we have now packaged: python-msrest (#838121), python-msrestazure (#838122), python-azure (#838101), python-azure-storage (#838135), python-adal (#838716), python-applicationinsights (#838717) and finally azure-cli (#838708). Some of these packages are still in the NEW queue at the time I’m writing this, but I don’t foresee any issues with these packages entering unstable.
azure-cli, as we have packaged it, is the new Python-based CLI for Azure. The Microsoft developers gave it the tagline of “our next generation multi-platform command line experience for Azure”. In the short time I’ve been using it I’ve been very impressed with it.
In order to set it up initially, you have to configure a couple of defaults with az configure. After that, you need to run az login, which again is an entirely painless process as long as you have a web browser handy in order to perform the login.
After those two steps, you’re only two commands away from deploying a Debian virtual machine:
az resource group create -n testgroup -l "West US"
az vm create -n testvm -g testgroup --image credativ:Debian:8:latest --authentication-type ssh
This will create a resource group, and then create a VM within that resource group with a user automatically created with your current username and with your SSH public key (~/.ssh/id_rsa.pub) automatically installed. Once it returns you the IP address, you can SSH in straight away.
Looking forward to some next steps for Debian on Azure, I’d like to get images built for Azure using vmdebootstrap, and I’ll be exploring this in the lead-up to, and at, the upcoming vmdebootstrap sprint in Cambridge, UK, later in the year (still being organised).
Friday, 23 September 2016
Seravo | 08:42, Friday, 23 September 2016
Btrfs is probably the most modern filesystem of all widely used filesystems on Linux. In this article we explain how to use Btrfs as the only filesystem on a server machine, and how that enables some sweet capabilities, like very resilient RAID-1, flexible adding or replacing of disk drives, using snapshots for quick backups and […]
Thursday, 15 September 2016
Florian Snows Blog » en | 06:13, Thursday, 15 September 2016
Just like at FOSDEM, a very important part for me was the social component at the summit. Meeting new people, talking to people I only know from mailing lists, seeing people again that I met at other events, and experiencing a sense of community that goes beyond what I can usually see at the local level, is a great experience and an amazing motivation to keep on going.
So when I arrived on Thursday, the first thing I did was join everyone from the FSFE office in Berlin for lunch. After that, I was allowed to work from the FSFE office a little bit to test a few more things for the blog migration. This is where I remembered that I had forgotten to sign up as a volunteer, but Cellini was kind enough to still let me do that and explain the basics of what to do.
Of course, I also saw some great talks at the summit. Cryptie and André gave a good overview of the new translation system that Julien set up and that is currently going through extensive testing by some translation teams.
There was a very interesting talk about the anthropology of Free Software. When I find the time, I study philosophy (usually just by myself) and it was really nice to be able to ask a question about transcendentalism (Kant and Fichte) and get the impression that not only the speaker understands the question, but also large parts of the audience. Not that a question needs an audience, but it is nice to know other people may have thought about similar ideas because that also strengthens the sense of community.
Christian, also a coordinator of the local group in Franconia, gave a talk about the Plussy booth attractor he designed and built and he had a chance to make one for the group in Frankfurt right after the summit.
The Plussy booth attractor was successful at the FSFE booth at the summit as well. Christian decided it would be nice to use it there and a lot of people came by to ask about it. Helping out at the booth was a really great experience as well. While I am not as good a salesman as some people with more experience (Cryptie), I immensely enjoyed talking to people and explaining Free Software and its terminology.
In between the talks, the BCC served really amazing food that not only looked awesome, but was also incredibly delicious. That is why a small group of us sat together over lunch after the talk about women in Free Software and we discussed how our communities can be more inclusive and how some communities are already doing a good job at that, but others are not as successful. I found it very interesting that in some countries, women are in the majority in IT jobs and the reason behind that appears to be a combination of economic pressure and remnants of the communist idea that women are workers as well and need to be just as productive as men.
On Saturday, the FSFE celebrated its 15th anniversary. There was an event at the BCC in which past and present presidents gave an overview of their work and shared personal experiences from the last 15 years. Jonas, the executive director of the FSFE, also shared some interesting stories of his involvement. After that, we went on to C-Base for the actual celebration with food and drinks. Unfortunately, some people were missing from that celebration. I would have liked it very much if we could all have gotten together for this anniversary and celebrated our achievements. It would have been nice to talk to some of the people who insisted on having a local group in Franconia and speak about current developments in that regard. Sadly, that did not happen, but perhaps we can meet again at another Free Software related event.

Also at the party, there was an award ceremony [careful, link to non-free video site] in which so-called local heroes were honored for their contributions. It was quite an illustrious crowd: Guido Arnold for his hard work on a local group and in the education team, André Ockers for translating huge chunks of the FSFE website into Dutch, Simon Wächter for his work in Switzerland in general and his specific involvement in the Freedomvote project, Cryptie for her translations, volunteering and numerous booths, and several more people who do important work for the FSFE. I was also part of that group and I felt very honored, but also a bit out of place: even though I know this was a selection of volunteers, because not everyone can be honored, I always feel like I should do more for Free Software than I can get around to. Sometimes there are even weeks where I don’t do anything, and even when I am active, there is always someone else involved; there are very few instances where someone completes a task without help from others.
That’s what’s nice about being part of a larger community and so I see this award as a thank you to the whole Free Software community and its many volunteers.
Speaking of volunteering: That was a great experience as well. On Friday, we received a short introduction on how to present guests and how to host a room. The instructions were helpful and from my perspective as a volunteer, everything went smoothly. Of course I know that this was just my impression and for example Cellini might tell you a different story because at any given event of a certain size, there will be some last-minute changes or someone who does not show up or a number of things like that. However, everything was handled perfectly and especially Cellini did a great job of handling it all with a smile. That is also what Erik said about organizing the summit together with her. I am sure the two of them must have been incredibly stressed at times, but they never showed it and their hard work made the summit a great success.
On Sunday night, after we had packed up the booth, we went out for dinner and drinks again, we complained about the terrible user interface of the ordering terminal at our table, and we generally had a lot of fun hanging out and sharing stories in a small group until the bar closed. Overall, I had a great time and as Cryptie put it, recharged my FSFE batteries. Now I am a bit sad that the party is over, but I guess that just makes me look forward more to a potential next summit. If we follow the advice of our communications officer Paul Brown, the next summit could be in a town such as Eberswalde which is close enough to a bigger city to make travel easy for everyone, but small enough that nothing ever happens there and so we would not compete with other events for press coverage. So with this idea in mind, let me say goodbye and see you next time, perhaps at an Eberswalde near you.
Wednesday, 14 September 2016
polina's blog | 15:56, Wednesday, 14 September 2016
In the last week of August 2016, two major leaks of the EU copyright reform became public: draft Impact Assessment, and draft Proposal for a Directive on copyright in the digital single market. FSFE has previously followed the reform on several occasions and provided the comments on Parliamentary own-initiative report, and within the general comments on Digital Single Market strategy.
In our assessments we asked for making clear that no exception to copyright should be ever limited by Digital Restrictions Management (DRM), to provide for a fully harmonised set of exceptions, including for currently uncertain situation with Text and Data Mining (TDM), and to strengthen the principle of technological neutrality.
The draft Proposal, together with the draft Impact Assessment, is however far from actually tackling the existing problems with outdated copyright protection. Furthermore, they read like an outcry from threatened businesses trying to secure their place in the sun at the expense of others. Instead of dealing with the problems that are actually hampering the EU from achieving the desired digital single market, the proposed reform, conveniently “backed up” by a contradictory Impact Assessment, ignores the existing problems, disregards fundamental rights and leans towards reinforcing the same issues in a larger and more “harmonised” way.
Text and Data Mining – new field for DRM and beyond?
The purpose of the ongoing copyright reform is inter alia to address existing disparities between different member states and to bring more legal clarity to copyright in the digital sphere. FSFE supported that reasoning and asked the EC to uphold these plans for uniform rules across the EU for the interpretation of exceptions and limitations. This is most likely achievable by introducing mandatory exceptions, as this will not leave any room to manoeuvre within the member states, allowing the necessary level of harmonisation across the EU.
In particular, we argued for an explicit right to extract data from lawfully accessed protected work. The Proposal includes a mandatory exception for uses of text and data mining technologies in the fields of scientific research, illustration for teaching, and preservation of cultural heritage. The draft Proposal does include a reference to the fact that licences, including open licences, do not address the question of text and data mining with the necessary clarity, as they often just do not have any reference to TDM.
The mandatory exception for TDM is therefore a welcome approach. The downside of the exception as proposed by the EC is, however, the fact that it is only granted to ‘research organisations’: a university, a research institute or any other non-profit organisation of public interest whose primary goal is conducting scientific research and providing educational services. The scope of the mandatory exception is hence limited, excluding everyone else with lawful access to protected works (citizens, SMEs etc).
An exception for TDM directed at everyone who has lawfully accessed the work is, according to the Commission, unfavourable simply because “this option would have a strong negative effect on publisher’s TDM licensing market”. Ignoring the benefits to innovation, and the fact that such an exception would open opportunities for more businesses, the EC is evidently in favour of preserving the existing situation in which publishers take advantage of the current legal unclarity.
In addition, the TDM exception includes a reference to the right of rightholders to apply technical safeguards to their works, as, according to the draft Proposal, the new TDM exception has the potential to inflict a “high number of access requests to and downloads of their works”. This is a reference to so-called Digital Restrictions Management (DRM), which is widely used by rightholders to arbitrarily restrict users’ freedom to access and use lawfully acquired works and devices within the limits of copyright law. A slight hope of restricting this arbitrary practice, at least in the TDM exception, is contained in the second part of the proposed provision, which requires that “such measures shall not go beyond what is necessary” to achieve the objective of ensuring the security and integrity of the networks and databases where the works are hosted. In addition, these measures should not undermine the effective application of the exception.
Whilst it is definitely a better approach to address the lawfulness of DRM and include a safeguard for the effective application of the exception, it is nevertheless a worrisome direction to see such a requirement in copyright regulation. It is evident that rightholders would need to ensure the technical security and integrity of their databases and networks in any case, irrespective of whether users access their works under a legitimate TDM exception or any other use. This vague provision sounds as if rightholders can receive a wide-reaching right, in the name of copyright, to apply any technical measure that is “necessary” to safeguard their rights. The provision lacks a requirement of proportionality in addition to necessity: the measure should not only be necessary to stop unlawful access to the database, but also proportionate to the alleged aim and purpose of such a measure.
Link Tax – EC says ‘no’ but means ‘yes’
The tensions between hyperlinks and copyright have been on the agenda of the European Court of Justice on several occasions: when can a link to copyrighted material constitute a copyright infringement, and why? The reform of the existing European copyright rules can be seen as an opportunity to bring some clarity to the question and secure the fundamental principle of the internet: linking, i.e. merely referencing or quoting *existing* content online, shall be considered a lawful use of protected work per se.
However, the EC decided to go after hyperlinks from a different direction: in the name of *holy* news publishers who are losing their revenues because of online uses, to “ensure the sustainability of the publishing industry”. In a nutshell, news publishers are granted so-called “neighbouring rights” for the reproduction and making available to the public of publications in respect of online uses. This means that news publishers get exclusive rights to prohibit any reproduction or “communication to the public” of their stories, including snippets of text or hyperlinks. According to EU case-law, a text extract of as few as 11 words can be considered a “literary work” protected by copyright laws. Publishers enjoying such a broad and widespread right without any counterbalance from the other side is a serious threat to the existing online environment and to the internet as we know it, not to mention the implications for freedom of expression and the diversity of media.
According to the Impact Assessment, publishers are currently in the most disadvantageous situation, as they “rely on authors’ copyright that is transferred to them”. When did copyright become the right to maximise the revenues of struggling business models making their money off the creativity of other people? Furthermore, the Impact Assessment acknowledges the fact that so-called “ancillary rights” for publishers, already introduced in Spain (i.e. compensation to publishers from online service providers) and Germany (i.e. an exclusive right covering specifically the making available of press products to the public), have not proven effective in addressing publishers’ problems so far, in particular as they have not resulted in increased revenues for publishers from the major online service providers. Yet the EC is convinced that the best solution would be to just combine two failed solutions and impose them on the rest of the member states.
The leaked documents indicate the worrisome direction taken by the EC in order to bring the EU to a digital single market. Unfortunately, the EC is disregarding everything that could help the EU to enhance its digital environment. Instead of acknowledging the change the internet has brought to the use and distribution of copyrighted material, the EC is frantically trying to secure the interests and revenues of fading businesses first, rather than those of authors.
Tuesday, 13 September 2016
English Planet – Dreierlei | 12:41, Tuesday, 13 September 2016
This post is for those who are allowed to vote for the “Abgeordnetenhaus” next Sunday, September 18, but do not speak German and therefore have no clear idea about the different political parties and what they stand for. If you are interested in Internet policy issues, I have translated the content of a press release by the “Koalition Freies Wissen” (the coalition of free knowledge) for you, to help you choose your favorite party.
The “Koalition Freies Wissen” is a coalition of Bündnis Freie Bildung, Digitale Gesellschaft e.V., Freifunk, Free Software Foundation Europe, Open Knowledge Foundation Deutschland and Wikimedia Deutschland. Together, they have been asking the political parties participating in the elections about current matters regarding Free Software, Open Data, Free Knowledge, digital education, and access to and fundamental rights in the digital hemisphere. Responses were given by:
- Bündnis 90/Die Grünen (Greens)
- CDU (Conservatives)
- die Linke (left-wing)
- Piraten (Pirates)
- SPD (Social democrats / Labor party)
From these responses, here is a short overview of their political preferences (own translation and compilation based on the press release):
The answers given by the two parties currently in power – CDU and SPD – are unsatisfying. Both present themselves as disproportionately pro-surveillance. And both of them have a backlog regarding Free Software. The CDU even seems to be afraid of using Free Software. The CDU also sees no problem in breaking net neutrality by offering special data transfer agreements to those companies who pay more. Regarding the questions about the public domain and publishing in Open Access, the CDU sees the responsibility and possibilities solely in the hands of the original author, even when authors are paid with public money, while the SPD at least sees the need to freely license the results of publicly funded science. Finally, the CDU is the only political party in Berlin that sees no need for a transparency law, while the SPD at least pays lip service here.
The current opposition – Grüne, Linke and Piraten – are much more open to Internet policy issues, still with some differences between them. First, all of them take a critical position towards surveillance, especially towards overflowing mass surveillance. Regarding Free Software, the Piraten failed to answer properly, while the Grüne and Linke are very much in favor of an increased use of Free Software in public administration. The Grüne would even like to stipulate that publicly financed software shall be published as Free Software. All parties of the current opposition are against zero-rating and pro net neutrality, and they support the establishment of a transparency law as we have recently seen in Hamburg. Regarding the Public Domain and Open Access, Grüne, Linke and Piraten are in favor of publishing publicly funded content and results under free licenses, with some very concrete proposals on how to achieve this from the Linke and the Grüne.
All in all, the Koalition Freies Wissen was happy to see that basically all political parties nowadays see the potential and importance of Open Educational Resources – although they differ in concrete goals and methods, with the most concrete proposals coming from the Grünen.
In sum: If you like your Internet and knowledge to be free, then do not vote for the ruling parties but choose one of the opposing parties – Grüne, Linke or Piraten.
And, if you are in favor of the opposition, please do vote! As Berlin is not just a city but also one of the 16 German federal states, the Abgeordnetenhaus is also a state parliament. Its competences, powers and budget allocations are much greater than those of a usual city council. In addition, Berlin is the capital of the Federal Republic of Germany, which means that local politics and policies in and for Berlin have some influence on the rest of the republic.
Monday, 12 September 2016
English Planet – Dreierlei | 06:55, Monday, 12 September 2016
Today, 10 years ago, I made my first substantial edits to Wikipedia, about documentary photography (“Dokumentarfotografie”). It was also the first day I started an article from scratch, about the Large Passion (“Die große Passion”) by Albrecht Dürer. Looking into the version history of these articles today is interesting from a biographical point of view but also from a technological one.
September 12, 2006, was one day before my final examination in my minor study subject, Art & Media Science. I decided not to spend the last day looking into my subject matter again, but to write the knowledge I had just gathered into Wikipedia.
Looking at these articles today, two things mainly fascinate me. First, the possibility inside Wikipedia to look back on every single version since then, and therewith the possibility to reconstruct the history of each article. For example, I can see for my article about documentary photography that it took four more years until the next user (KissmeKate) added substantial information to the article.
And, fun fact: it took until the first of August 2015, almost nine years after the initial paragraph about documentary photography, for the article to actually show a documentary photo (thanks to user Lotje).
The second thing that fascinates me is the stability of such a specific article as the one I started about Albrecht Dürer's Large Passion: the text has basically not changed since my first write-up 10 years ago, except for some typos that have been fixed.
But there is one fundamental difference from back then: on April 17, 2012, user Fagairolles 34 added digital scans of the prints, making it easy for everyone in the world to look at the amazing art of Albrecht Dürer as printed on paper in his Large Passion. And that is very different from 2006!
These pictures, visible in Wikipedia, make me realise how fundamentally technology and access to knowledge have changed in the last 10 years. When I was studying for my exam, access to copies of prints by Albrecht Dürer was analogue only (yes, in 2006). Some of them, so-called facsimiles, were only accessible in a specially secured room of our university's library, where no one was allowed to borrow or copy them. And in 2006 there were no good pictures of Albrecht Dürer's prints on the Internet either.
Today, however, I can see the whole Large Passion in Wikipedia. More than that, I can even browse an overwhelming collection of Albrecht Dürer's beautiful art on Wikimedia Commons. In this way, Albrecht Dürer is for me one more example of how Wikipedia and its sister projects facilitate the gathering and sharing of knowledge on a global scale. Keep it running.
Friday, 09 September 2016
English Planet – Dreierlei | 08:45, Friday, 09 September 2016
Last weekend, September 2nd to 4th, we organised the first ever FSFE summit to bring together our pan-European community and Fellows for a whole weekend: a conference as a gathering, with the potential to build bridges and band together.
The FSFE summit was part of the QtCon, an event where people from different communities – Qt, KDAB, KDE, VLC – and our friends and community from the FSFE came together under one roof to get in contact with each other, to share skills and knowledge. All of this in a welcoming environment that offered a lot of space for all of us.
Looking back, one of the greatest things was to hear, multiple times, about people who came for the FSFE summit and then went to a technical talk about Qt or KDE once in a while. And about Qt developers who said it was great to have the chance to hear a political talk, and who joined the FSFE summit from time to time. Mixing our different communities and sharing expertise has rarely seemed so easy. Two of FSFE's local heroes turned out to be KDE contributors, just like one of our current community representatives, Mirko Böhm. Many VLC contributors joined the FSFE 15-years party, and there are so many more stories to be told.
Our initial plan, to bring our communities together at the same event and under one roof, turned out to be happily accepted by the communities and the visitors of QtCon. Thanks to everyone who made this event possible, the countless volunteers and the participants. Seeing all of you bring this event to life was fantastic.
For those who could not be with us, be patient: we would like to set up a page bringing together the recordings of the talks, the slides and pictures. Until that is done, I can already highlight the recording of our keynote speaker Julia Reda, Member of the European Parliament, who explained how “Proprietary Software threatens Democracy”.
And there is so much more to come. I am especially looking forward to the recordings of our community tracks. I saw a lot of participation live, and I am sure there is more interest out there from people who could not make it to the FSFE summit.
Your help needed
If you have been to the FSFE summit, please help me evaluate and archive the summit for the future. You can do so by sharing links, pictures and videos, or by leaving your feedback. The links and pictures will be used to set up a page to share the summit with those who could not be here, and as an archive for yourself. Your feedback will be used for our evaluation and as input if we plan a second edition of the summit.
Please upload your pictures/videos here:
Please be aware that we would like to publish some of them on our page under CC-BY 4.0 or CC-BY-SA 4.0. If you have a preference, please let us know. Also, when you upload your pictures, do not forget to let me know who the author is! The easiest way to do so is to put your pictures in one container with your name as the container name.
And please also give us your feedback! You can either send an email to email@example.com or, if you prefer to stay anonymous, you can write/copy your feedback into this etherpad:
I know that writing into a pad is not truly anonymous, but it is a good step in that direction compared to sending it via email.
Last but not least, if you have any links to share about the summit – from your personal blog, from your favorite news magazine, from your social media buddy … – please help us to collect them in this pad:
Finally, thanks to QtCon and its organisers for hosting the first FSFE summit. It was a pleasure working with you!
And as long as we do not yet have the FSFE summit page to look back on, find some of my pictures below to get a first impression:
These pictures are licensed CC0 (Public Domain) by Erik Albers
Tuesday, 06 September 2016
Elena ``of Valhalla'' | 18:46, Tuesday, 06 September 2016
Candy from Strangers
A few days ago I gave a talk at ESC (https://www.endsummercamp.org/) about some of the reasons why I think that using software, and especially libraries, from the packages of a community-managed distribution is important and much better than alternatives such as PyPI, npm etc. This article is a translation of what I planned to say, before forgetting bits of it and luckily adding them back as an answer to a question :)
When I was young, my parents taught me not to accept candy from strangers, unless they were present and approved of it, because there was a small risk of very bad things happening. It was of course a simplistic rule, but it had to be easy enough to follow for somebody who wasn't proficient (yet) in the subtleties of social interactions.
One of the reasons why it worked well was that following it wasn't a big burden: at home candy was plenty and actual offers were rare: I only remember missing one piece of candy because of it, and while it may have been a great one, the ones I could have at home were also good.
Contrary to candy, offers of gratis software from random strangers are quite common: from suspicious looking websites to legit and professional looking ones, to platforms that are explicitly designed to allow developers to publish their own software with little or no checks.
Just as with candy, there is also a source of trusted software: the Linux distributions, especially those led by a community. I mention mostly Debian because it's the one I know best, but the same principles apply to Fedora and, to some extent, to most of the other distributions. Like good parents, distributions can be wrong, and they do leave room for older children (and proficient users) to make their own choices, but they still provide a safe default.
Among the unsafe sources there are many different cases, and while they share some of the risks, they have different targets with different issues; for brevity, the scope of this article is limited to the ones that mostly concern software developers: language-specific package managers and software distribution platforms such as PyPI, npm, rubygems etc.
These platforms are extremely convenient both for the writers of libraries, who can publish their work with minimal hassle, and for the people who use those libraries, because they provide an easy way to install and use a huge amount of code. They are of course also an excellent place for distributions to find new libraries to package and distribute, and this, I agree, is a good thing.
What I however believe is that getting code from such sources and using it without carefully checking it is even more risky than accepting candy from a random stranger on the street in an unfamiliar neighbourhood.
The risks aren't trivial: while you probably won't be taken hostage for ransom, your data could be, or your devices and the ones that run your programs could be used in some criminal act, causing at least some monetary damage both to yourself and to society at large.
If you're writing code that should be maintained over time, there are also other risks even when no malice is involved, because each package on these platforms has a different policy with regard to updates, their backwards compatibility, and what can be expected in case an old version is found to have security issues.
The very fact that everybody can publish anything on such platforms is both their biggest strength and their main source of vulnerability: while most of the people who publish their libraries do so with good intentions, attacks have been described and publicly tested, such as the fun typo-squatting one (http://incolumitas.com/2016/06/08/typosquatting-package-managers/, archived at http://web.archive.org/web/20160801161807/http://incolumitas.com/2016/06/08/typosquatting-package-managers/) that published harmless proof-of-concept code under common typos of famous library names.
Contrast this with Debian, where everybody can contribute, but before being allowed full unsupervised access to the archive they have to establish a relationship with the rest of the community, which includes meeting other developers in real life, at the very least to get their GPG keys signed.
This doesn't prevent malicious people from introducing software, but it raises significantly the effort required to do so, and once caught, people can be prevented from repeating the offence far more effectively than a simple ban on an online-only account would allow.
It is true that not every Debian maintainer does a full code review of everything they allow into the archive, and in some cases it would be unreasonable to expect it, but in most cases they are familiar enough with the code to do at least bug triage, and most importantly they are in an excellent position to establish a relationship of mutual trust with the upstream authors.
Additionally, package maintainers don't work in isolation: a growing number of packages are maintained by a team of people, and most importantly there are aspects that potentially involve the whole community, from the fact that new packages entering the distribution are publicly announced on a mailing list to the various distribution-wide QA efforts.
Going back to the language-specific distribution platforms, sometimes even the people who manage the platform itself can't be fully trusted to do the right thing: I believe everybody in the field remembers the npm fiasco (https://lwn.net/Articles/681410/), where a lawyer's letter requesting the removal of a package started a series of events that resulted in potentially breaking a huge number of automated build systems.
Here, some of the problems were caused by technical policies that made the whole ecosystem especially vulnerable, but one big issue was the fact that the managers of the npm platform are a private entity with no oversight from the user community.
Not all distributions are equal here, but contrast this with Debian, where the distribution is managed by a community that is based on a social contract (https://www.debian.org/social_contract) and is governed via democratic procedures established in its constitution (https://www.debian.org/devel/constitution).
Additionally, the long history of the distribution model means that many issues have already been encountered, the mistakes have already been made, and there are established technical procedures to deal with them in a better way.
So, does this mean we shouldn't use language-specific distribution platforms at all? No! As developers we aren't children; we are adults who have the skills to distinguish between safe and unsafe libraries just as well as the average distribution maintainer can. What I believe we should do is stop treating them as a safe source that can be used blindly, reserve that status for actually trusted sources like Debian, and fall back to the language-specific platforms only when strictly needed. In that case, we should:
actually check carefully what we are using, both by reading the code and by analysing the development and community practices of the authors;
if possible, share that work by ourselves becoming maintainers of that library in our favourite distribution, to prevent duplication of effort and to give back to the community whose work we benefit from.
Friday, 02 September 2016
free software - Bits of Freedom | 10:29, Friday, 02 September 2016
If you're interested in applying for an internship in the FSFE, now is not the time to apply! :-) We've just introduced some changes to our internship program, to clarify it and ensure it remains attractive for students and young professionals wanting to learn more about free software, especially when it comes to policy, legal issues, and public awareness work.
About a year ago, we introduced some changes to our internship program which split it into two parts: internships (for students) and traineeships (for everyone else). What we've seen in this period is that applications for student internships have dropped significantly: most people apply for traineeships, and it's been confusing for applicants (and us!) to have two separate programs.
The changes we've now introduced bring these two programs together under one program again: internships. However, this new program will be open to students and others alike. Occasionally we may decide to announce internship openings specifically to students who are doing this as part of their education, but in either case, it will all be under the same program.
Another change is that we will (generally) no longer accept unsolicited applications. They take a lot of time for us to process, and in 99% of the cases we cannot successfully place the applicant. Rather, we will be announcing, several times per year, specific internship openings with fixed deadlines to apply.
Each internship we announce will include more explicit information about which work area of the FSFE this internship will touch upon, and what background we would like to see in a successful applicant.
Planet Fsfe on Iain R. Learmonth | 10:20, Friday, 02 September 2016
Ana and I travelled to Cambridge last weekend for the Debian UK BBQ. We travelled by train and it was a rather scenic journey. In the past, on long journeys, I've used APRS-IS to beacon my location and plot my route, but I recently obtained the GPS module for my Yaesu VX-8DE and thought I'd give some real RF APRS a go this time.
While the APRS IGate coverage in the UK is a little disappointing, as is evidenced by the map, a few cool things did happen. I received a simplex APRS message from a radio amateur, 2M0RRT, with the text “test test IO86ML” (but unfortunately didn't notice until we'd long passed by, sorry for not replying!) and quite a few of my packets, sent from a 5 watt handheld in Cambridge, were heard by the station M0BPQ-1 in North London (digipeated by MB7UM).
My APRS position reports for the Debian UK BBQ 2016
Some of you will know that since my trip to the IETF in Berlin, I’ve been without a working laptop (it got a bit smashed up on the plane). This also caused me to miss the reminder to renew the expiry on my old GPG key, which I have now retired. My new GPG key is not yet in the Debian keyring but can be found in the Tor Project keyring already. A request has already been filed to replace the key in the Debian keyring, and thanks to attendees at the BBQ, I have some nice shiny signatures on my new key. (I’ll get to returning those signatures as soon as I can.)
We’ve been making a lot of progress with Debian Live and the BBQ saw live- tasks being uploaded to the archive. This source package builds a number of metapackages, each of which configures a system to be used an a live CD for a different desktop environment. I would like for as much of the configuration that can be done within the image as possible to be done within the image, as this will help with reproducibility. A new upload for live-wrapper should be coming next week, and this version will allow these live-task-* packages to be used to build images for testing. I hope to have weekly builds for the live images running very soon.
As I’ve been without a computer for a while, I’m just getting back into things now, so expect that I’ll be slow to respond to communication for a while but I’ll also be making commits and uploads and trying to clear this backlog as quickly as I can (including my Tor Project backlog).
DanielPocock.com - fsfe | 08:46, Friday, 02 September 2016
The FSFE Summit and QtCon 2016 are getting under way at bcc, Berlin. The event comprises a range of communities, including KDE and VideoLAN and there are also a wide range of people present who are active in other projects, including Debian, Mozilla, GSoC and many more.
Today, some time between 17:30 and 18:30 I'll be giving a lightning talk about Postbooks, a Qt and PostgreSQL based free software solution for accounting and ERP. For more details about how free, open source software can make your life easier by helping keep track of your money, see my comparison of free, open source accounting software.
Saturday, at 15:00 I'll give a talk about Free Communications with Free Software. We'll look at some exciting new developments in this area and once again, contemplate the question can we hope to use completely free and private software to communicate with our friends and families this Christmas? (apologies to those who don't celebrate Christmas, the security of your communications is just as important too).
A note about the entrance fee...
There is an entry fee for the QtCon event, however, people attending the FSFE Summit are invited to attend by making a donation. Contact FSFE for more details and consider joining the FSFE Fellowship.
Thursday, 01 September 2016
English — mina86.com | 22:07, Thursday, 01 September 2016
Python! My old nemesis, we meet again. Actually, we meet all the time, but despite that there are always things which I cannot quite remember how to do and need to look up. To help with the searching, here they are, collected in one post:
- Converting a date into a timestamp
- Re-raising a Python exception preserving the back-trace
- Flattening a list in Python
Converting a date into a timestamp
datetime.datetime has no method turning it into a timestamp, i.e. seconds since the Unix epoch. Programmers might be tempted to use strftime('%s') or time.mktime, but that may result in a disaster:
>>> import datetime, time
>>> now, ts = datetime.datetime.utcnow(), time.time()
>>> ts - int(now.strftime('%s'))
7200.1702790260315
>>> ts - time.mktime(now.timetuple())
7200.1702790260315
In both cases the expected value is around zero (since it measures the time between the two calls), but in practice it is close to the local timezone offset (UTC+2 in the example). Both methods take the local timezone into account, which is why the offset is added.
As Olive Teepee pointed out, Python 3.3 addresses this with the datetime.timestamp method, but if one is stuck with older releases the proper, alas somewhat more convoluted, solution is to use:
import calendar, datetime, time

now, ts = datetime.datetime.utcnow(), time.time()
print ts - calendar.timegm(now.timetuple())
# prints: 0.308976888657
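As a quick Python 3 sanity check (the snippets above are Python 2), the portable calendar.timegm approach and the newer timestamp method should agree:

```python
import calendar
import datetime

# Portable pre-3.3 approach: interpret the struct_time as UTC.
now = datetime.datetime.utcnow()
ts_portable = calendar.timegm(now.timetuple())

# Python 3.3+ approach: make the datetime timezone-aware first,
# then ask it for its timestamp directly.
aware = now.replace(tzinfo=datetime.timezone.utc)
ts_modern = int(aware.timestamp())

print(ts_modern - ts_portable)  # 0
```

Note that calling timestamp() on a naive datetime would assume local time, reintroducing the very offset problem described above.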
Re-raising a Python exception preserving the back-trace
import sys

exc_info = []

def fail(): assert False

def run():
    try: fail()
    except: exc_info[:] = sys.exc_info()

def throw(): raise exc_info[0], exc_info[1], exc_info[2]

def call_throw(): throw()

if not run(): call_throw()
throw raises the exception again; the back-trace will contain all the frames that led up to the exception being caught:
$ python exc.py
Traceback (most recent call last):
  File "exc.py", line 15, in <module>
    if not run(): call_throw()
  File "exc.py", line 13, in call_throw
    def call_throw(): throw()
  File "exc.py", line 8, in run
    try: fail()
  File "exc.py", line 5, in fail
    def fail(): assert False
AssertionError
This is a bit like a bare raise statement, but it performs the re-raising at an arbitrary later time.
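The three-argument raise statement is Python 2 only. In Python 3 the same deferred re-raise can be sketched with with_traceback; this is a rewrite of the idea rather than code from the original post, with a check at the end confirming that the original fail frame survives:

```python
import sys

exc_info = []

def fail():
    assert False

def run():
    try:
        fail()
    except AssertionError:
        # Save (type, value, traceback) for a later re-raise.
        exc_info[:] = sys.exc_info()

def throw():
    # Python 3 spelling of `raise type, value, traceback`:
    # attach the saved traceback to the exception object.
    raise exc_info[1].with_traceback(exc_info[2])

run()
frames = []
try:
    throw()
except AssertionError:
    # Walk the re-raised traceback and collect frame names.
    tb = sys.exc_info()[2]
    while tb is not None:
        frames.append(tb.tb_frame.f_code.co_name)
        tb = tb.tb_next

print(frames)  # the original 'fail' frame is still at the end
```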
Flattening a list in Python
To turn a list of lists (or, in more generic terms, an iterable of iterables) into a single iterable, use one of:
import itertools

def flatten(sequences):
    return itertools.chain.from_iterable(sequences)

def flatten(sequences):
    return (x for lst in sequences for x in lst)
(If you feel confused by nested comprehensions, don't feel bad: the syntax is genuinely confusing. The thing to remember is that you write a normal nested for-if-for-if… sequence, but put the final expression at the beginning of the line instead of at the end.)
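To illustrate that ordering rule, here is a small made-up example: the nested loop and the comprehension below are equivalent (Python 3 syntax):

```python
data = [[1, 2, 3], [4, 5], [6]]

# The plain nested loop...
result_loop = []
for lst in data:
    for x in lst:
        if x % 2 == 0:
            result_loop.append(x)

# ...and the equivalent comprehension: the same for/if sequence,
# with the produced expression moved to the front.
result_comp = [x for lst in data for x in lst if x % 2 == 0]

print(result_loop)  # [2, 4, 6]
print(result_loop == result_comp)  # True
```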
If all elements are known to be lists or tuples, sum may be considered easier:

def flatten(lists):
    return sum(lists, [])

def flatten(tuples):
    return sum(tuples, ())
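One caveat worth adding: sum concatenates by building a new list for every element of the outer list, so it is quadratic in the total number of items, whereas chain.from_iterable stays linear. A quick check that the two agree (Python 3 syntax):

```python
import itertools

data = [[1, 2], [3], [4, 5]]

flat_sum = sum(data, [])                                # fine for small inputs
flat_chain = list(itertools.chain.from_iterable(data))  # scales linearly

print(flat_sum)                # [1, 2, 3, 4, 5]
print(flat_sum == flat_chain)  # True
```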
David Boddie - Updates (Full Articles) | 21:23, Thursday, 01 September 2016
Continuing my exploration of alternative operating systems, which was inspired by the successfully funded EOMA68 Crowd Supply campaign, I have been looking at Inferno in more detail than I've managed to do so before. My aim has been to install the operating system on real hardware rather than relying on emulators since the eventual goal is to install it and try it out on my old laptop.
What is Inferno?
Inferno is an operating system in the Unix lineage that gets less attention than its predecessor, Plan 9 (site unavailable at the time of writing), and has never really enjoyed the success of other systems of a similar vintage. It's difficult to know if this is due to the initial choice of license, the architectural and conceptual choices, the current license, or perhaps it just isn't regarded as "cool" software.
Both Plan 9 and Inferno started as proprietary operating systems with license models that were common for the time, involving different licenses for different uses, the need to pay up front for the system, and limited rights for the users. By the time the third edition of Inferno was released, a license subscription would cost $300. The fourth edition, however, was released under the GNU General Public License (version 2), making it an interesting candidate for exploration.
As an aside, Plan 9 was available under the GPL-incompatible Lucent Public License until being re-released under the GNU GPL v2. This followed a previous license change that appeared to cause some issues for the OpenBSD community at the time.
In technical terms, Inferno differs from Plan 9 in one obvious way: software for the operating system is written in Limbo and compiled to bytecode for the Dis virtual machine rather than executed as native code. However, the virtual machine does have a Just In Time (JIT) compiler, so performance may be better than with some other virtual machines. Still, if you want the power and flexibility of being able to compile and run native code within the OS, perhaps Inferno isn't for you.
Learning about Inferno
Rather than write notes about installation here, I forked/cloned the official Mercurial repository for Inferno on Bitbucket and enabled a Wiki for the repository. The Wiki is not a clone of the official repository's Wiki since that doesn't really contain much content. My plan is to write about installing and using "native" Inferno, intended for use on real hardware, rather than "hosted" Inferno which is run as an environment on another operating system.
There are plenty of resources already available about Inferno, of course, and I don't want to duplicate what others have already written. The documentation is a good starting point.
Another document worth looking at is Mechiel Lukkien's Getting Started with Inferno which summarises the key concepts behind the operating system, gives an overview of the root directory layout, describes installation of a hosted environment, and covers a few other topics related to Inferno. It also includes links to other online resources. Those familiar with GNU/Hurd's concept of translators may find the examples of Styx servers familiar.
It seems that people discover Inferno, perhaps via Plan 9, and go on to write a few articles about it.
- Amongst other things, Chris Double has written a series of articles about using hosted Inferno on Linux and Android, sharing resources between the host and hosted systems, and bundling Inferno applications for standalone deployment on Linux.
- Alex Efros (Powerman) wrote a collection of articles in English and Russian covering aspects of Inferno and Limbo.
- Pete Elmore wrote a series of articles describing his experiences with Inferno, culminating in a public VNC server exporting the Inferno environment for casual users to play with.
- The Inferno Programmer's Notebook is a collection of articles about using Inferno and Limbo.
Finally, a lot of historical information about Inferno is held at Cat-v.org.