Planet Fellowship (en)
Tuesday, 25 April 2017
DanielPocock.com - fsfe | 12:57, Tuesday, 25 April 2017
The Free Software Foundation of Europe has just completed the process of electing a new fellowship representative to the General Assembly (GA) and I was surprised to find that out of seven very deserving candidates, members of the fellowship have selected me to represent them on the GA.
I'd like to thank all those who voted, the other candidates and Erik Albers for his efforts to administer this annual process.
Please consider becoming an FSFE fellow or donor
The FSFE runs on the support of both volunteers and financial donors, including organizations and individual members of the fellowship program. The fellowship program is not about money alone: it is an opportunity to become more aware of, and involved in, the debate about technology's impact on society, for better or worse. Developers, users and any other supporters of the organization's mission are welcome to join; here is the form. You don't need to be a fellow or pay any money to be an active part of the free software community, and FSFE events generally don't exclude non-members. Nonetheless, becoming a fellow gives you a stronger voice in processes such as this annual election.
Attending OSCAL'17, Tirana
During the election period, I promised to keep on doing the things I already do: volunteering, public speaking, mentoring, blogging and developing innovative new code. During May I hope to attend several events, including OSCAL'17 in Tirana, Albania on 13-14 May. I'll be running a workshop there on the Debian Hams blend and Software Defined Radio. Please come along and encourage other people you know in the region to consider attending.
What is your view on the Fellowship and FSFE structure?
Several candidates made comments about the Fellowship program and the way individual members and volunteers are involved in FSFE governance. This is not a new topic. Debate about this topic is very welcome and I would be particularly interested to hear any concerns or ideas for improvement that people may contribute. One of the best places to share these ideas would be through the FSFE's discussion list.
In any case, the fellowship representative cannot single-handedly overhaul the organization. I hope to be a constructive part of the team and that whenever my term comes to an end, the organization and the free software community in general will be stronger and happier in some way.
free software - Bits of Freedom | 07:51, Tuesday, 25 April 2017
At the last Open Source Leadership Summit in February, I had the chance to develop some of my thoughts around OpenChain™ and its applicability for the free and open source software community at large. To recap, OpenChain is a Linux Foundation project currently headed up by Shane Coughlan (who used to work with the FSFE and is still part of our legal team). It's vision is a software supply chain for free and open source software which is trusted and with consistent license information. The OpenChain Conformance Specification1 is a "set of requirements and best practice for open source organizations to follow in an attempt to encourage an ecosystem of open source software compliance."
One way to look at this is as a checklist of things an organisation must do or make available in order to have a reasonable chance of living up to its compliance commitments in free software. This includes educating software staff about free software, having a process to track the free software used so that the corresponding source code can be made available, and including offers for source code with the software (where required). All in all, these are reasonable expectations of what an organisation should do. It doesn't guarantee compliance, but it encourages it.
However, as the saying goes, no chain is stronger than its weakest link. The vision of OpenChain speaks of the software supply chain, and the supply chain starts with a free and open source software project, like the Linux kernel or the GNU C Library. If we expect those using free and open source software to comply with the license terms, we must consider the project itself as part of the supply chain, and find ways to make it a trusted part of that chain.
When I was young, I worked summers at Ericsson Telecom AB as a tester of passive components. The way it worked was that Ericsson would buy components for its products, and when the components arrived, a few from each batch would be sent for testing, where they were meticulously tested to make sure they conformed to the specification.
After my second year there, we started getting fewer components to test. During my third year, most of the tests had been eliminated and the components went straight into storage. What replaced the tests? Trust. By working with its suppliers, Ericsson set up systems that encouraged them to take greater responsibility for the components they shipped. As the trust between Ericsson and its suppliers increased, and as the suppliers took more responsibility for what they shipped, the need for Ericsson to do the testing on its side was drastically reduced.
The free and open source sector right now is in a state where validation and due diligence of software licenses must be done at almost every link in the chain. OpenChain is one of the ways in which the trust can be increased, but we need to move this back to the project level and encourage reasonable compliance practices for every major free and open source software project.
We can actually look at OpenChain as providing some of the answers to what those practices would be. We need to think more broadly, but a lot of the practices mandated by OpenChain for organisations make a lot of sense for projects as well.
Have a documented FOSS policy
Every project should have its license choice documented, both as it relates to the actual source code and to any ancillary parts (where the license choice may be different). The rationale for a certain license may be included, as well as information about how future license changes would be considered by the project.
This policy should also consider dependencies on external projects and their licenses.
Make FOSS training available
Developers, even in volunteer projects, need some basic knowledge about free software compliance in order to make informed decisions. This should be included as part of any request to sign copyright assignments, and developers should be encouraged to occasionally review the project's recommended training materials about compliance.
Identify a FOSS liaison
There should be a publicly communicated e-mail address or similar contact point to which people can submit questions or concerns regarding compliance. In its simplest form, this can be an email address that creates a ticket in a ticket system accessible to the core developers (or another identified group). As legal information may be submitted through it, though, this channel should generally not be immediately publicly accessible.
Arrange an external source of legal expertise
If we have a FOSS liaison function for the project which is composed of the core developers or another identified group, we need to resolve the question of escalation: if a compliance question cannot be answered at that level, where does it escalate to?
Every project should identify an external source of legal expertise, which functions both as a source of expertise as needed and as an escalation path to ensure that reported matters are dealt with accurately.
Take care when distributing
To the extent that the licenses used require it (as is usually the case when distributing binaries, which some projects optionally do in addition to their source code), the project also needs to consider the obligations placed on it when making releases.
The policy described earlier should ideally cover these cases too, but the actual work that needs to be done can include:
- Maintain archival copies of dependencies' source code to be able to satisfy requests for the complete corresponding source code,
- Information about the build process,
- Preparing and including SPDX metadata,
- Including relevant credits and license information,
- Including an offer for source code.
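As a concrete illustration of the SPDX metadata item above, per-file license tags can be checked with a few lines of code. This is only a minimal sketch (the function name and heuristic are my own, not part of any standard tooling); the SPDX tools and the FSFE's REUSE helper do this far more thoroughly:

```python
def has_spdx_tag(source_text, max_lines=5):
    """Return True if an SPDX-License-Identifier tag appears near
    the top of the given source file contents."""
    for line in source_text.splitlines()[:max_lines]:
        if "SPDX-License-Identifier:" in line:
            return True
    return False

# A file carrying per-file license metadata in the SPDX short form:
tagged = "/* SPDX-License-Identifier: GPL-3.0-or-later */\nint main(void) { return 0; }\n"
print(has_spdx_tag(tagged))                            # -> True
print(has_spdx_tag("int main(void) { return 0; }\n"))  # -> False
```

A check like this can run in a project's CI to keep license information consistent across contributions.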
If you look at the current OpenChain Conformance Specification, some of the practices above are reflected in that specification. This makes sense: if free and open source software projects follow similar practices and processes as the organisations using the software, we increase the likelihood of a consistent, compliant, supply chain.
Friday, 21 April 2017
free software - Bits of Freedom | 06:59, Friday, 21 April 2017
Some years ago, by my reckoning, we passed peak GNU. It went by quicker than some of us noticed, but it was anticipated. By peak GNU, I refer to the impact of the GNU project on free and open source software, both in terms of technology and in terms of license choices.
The GNU project, more than any other, has pioneered the idea of copyleft licenses through the GNU GPL. It has also been a strong advocate for quality. When people were hired to work for the GNU project in the mid 1980s, their code was scrutinized in excruciating detail for conformance to the coding standards.
When I worked as a webmaster for the GNU project at the end of the 1990s, I experienced the same attention to detail when it came to writing. In my interviews with former GNU project staff, one developer noted how he had never, ever, before or since in his working career in the industry, seen the same obsession with quality as that which he saw when he joined the GNU project.
But no program lasts forever. Several of the packages which I would consider the core of the GNU project increasingly find themselves sidelined: Emacs doesn't have the same appeal it once did, bash is losing its status as the default shell, gcc faces increased competition, and the entire tool chain looks different in many projects. What was once GNU isn't new any more.
While the evaluation guidelines of the GNU project talk about the need for programs adopted by the GNU project to work together, and how GNU is not just a collection of useful programs, I believe the reality is different. Looking at the collection of GNU programs, some of them are clearly developer-focused and serve the GNU tool chain (automake, gcc, etc.).
Another part consists of general programs which you would expect to be part of a normal operating system distribution, which the GNU project has always sought to build: dia, gnome, gpaint, and so on.
Then there are the outliers: gneuralnetwork, melting, health, and so on. All worthy programs in their own right, but neither developer tools nor something you expect as part of a standard operating system distribution.
The GNU project has a lot of potential and a lot of goodwill. There's a need for projects driving copyleft adoption and showing how copyleft licensing can be a positive force for free and open source software. But there seems to be some ambivalence in how the GNU project presents itself, in what programs it accepts as part of the GNU family, and in how it is organised.
If nothing else, perhaps a clearer differentiation between the GNU tool chain, the GNU operating system, and additional GNU tools, would be in order.
What are your thoughts?
Thursday, 20 April 2017
TSDgeos' blog | 21:08, Thursday, 20 April 2017
Today KDE Applications 17.04 was released.
It includes Okular 1.1, which contains a nice set of features:
* Add annotation resize functionality
* Allow to rotate the page view using two-finger pinches on a touchscreen
* Change pages in presentation mode by swiping on touch screen
* Added support for Links that change the Optional Content visibility status
* Allow to disable automatic search while typing
* Allow to create bookmarks from the Table Of Contents
This release was brought to you by Albert Astals Cid, Oliver Sander, Luigi Toscano, Martin T. H. Sandsmark, Tobias Deiminger, Antonio Rojas, Burkhard Lück, Christoph Feck, Elvis Angelaccio, Gilbert Assaf, Heiko Becker, Hrvoje Senjan, Marco Scarpetta, Miklós Máté, Pino Toscano, Yuri Chornoivan.
Blog – Think. Innovation. | 15:16, Thursday, 20 April 2017
In the past months I have started exploring and getting myself involved in Humanitarian Aid organizations (NGOs), from an innovation perspective.
What I find striking is how many times I read and hear about the goal to “End Poverty”. Besides the problematic term ‘poor’, which can mean to have no money for a prolonged time, but can also mean ‘of low quality’ (low quality people? Do you see the problem here?), I believe that thinking about it in this way already limits the solution space tremendously. For one, money is just a proxy for something else: nobody dies of a lack of money; people die of a lack of food, shelter, safety, healthcare, etcetera. So focusing on solving the money problem discards opportunities to find solutions in other ways.
And second, the big contemporary global problems are not problems of scarcity; they are organizational problems. We have enough food and stuff, but we have organized things in such a way that some people have plenty, more than they can ever consume, while other people have almost nothing. And the way our money and our financial system works is one of the fundamental reasons why we have organized things in such a poor (= of low quality) way. So by getting into the box of ‘Solving Poverty’, you are no longer aware of the box, oblivious to the system you are in that created these problems in the first place.
Related to this observation, yesterday I watched this TED video by Peter Singer on ‘Effective Altruism’. I really like this concept, looking at altruism in a more ‘business’ (as he calls it) way, going for proven effectiveness and transparency. However, I also find his logic and argumentation flawed in several ways. He argues by example that a person who wants to ‘do good’ should pursue a career as an investment banker, if he/she has the talent and ambition for that.
This person will earn a lot of money and can then donate a portion of it to effective charities, being much more effective than going to work for a charity directly. The problem is this: as our current financial system is one of the main reasons for the poor (= low quality) condition of so many people, who is to say that this person did not do more harm than good? Maybe (s)he helped weapons companies, polluting fossil-fuel or chemical corporations, or otherwise non-beneficial companies become more successful and hence more harmful.
Maybe it would have been better if this person had not pursued that career and not donated anything to charity, but just sat on the couch. This is, by the way, not a hypothetical problem, but an actual problem that large funds like the Bill and Melinda Gates Foundation have. The topic has been researched and these funds are aware of this investment paradox.
So to sum it up, we need people to think out of the box, to explore and experiment with radically new ways of organizing things, to create systemic change. Bart Romijn, director of Partos, wrote a really interesting column recently on a topic that relates to my point. He makes the observation that the time that people just donate money and NGOs do the rest is gone: NGOs should involve and co-create with their stakeholders, an observation I also made several years ago in one of my posts. I call it Volunteering 3.0.
In the column Bart links to an article in HBR, a long read, but a really interesting elaboration on what the authors call ‘old power’ and ‘new power’, relating two well-known but contrasting forms of techno-socio-utopianism. I find it striking how the authors of the HBR piece, and seemingly also Bart Romijn, are oblivious to the way ‘old power’ has in fact hijacked the ‘new power’ language, claiming the new concepts as its own.
Companies like Airbnb, Uber, Lyft and Google are cheered on as ‘new power’ initiatives, the poster children of peer-to-peer, collaboration and open source. It amazes me that the authors do not realize that these behemoths are just as ‘old power’ as Shell and Monsanto, while the true ‘new power’ initiatives, which by definition are not big multinational world-famous corporations, are pretty much ignored.
Or am I seriously mistaken here and did I get things mixed up? If I begin to think that everybody is crazy, there can be only one conclusion…
Image from StockSnap.io
Henri Bergius | 00:00, Thursday, 20 April 2017
As mentioned in my Working on Android post, I’ve been using a mechanical keyboard for a couple of years now. Now that I work on Flowhub from home, it was a good time to re-evaluate the whole work setup. As far as regular keyboards go, the MiniLa was nice, but I wanted something more compact and ergonomic.
The Atreus keyboard
The Atreus is a small mechanical keyboard that is based around the shape of the human hand. It combines the comfort of a split ergonomic keyboard with the crisp key action of mechanical switches, all while fitting into a tiny profile.
My use case was also quite travel-oriented. I wanted a small keyboard I could also work with on the road. There are many other small-ish DIY keyboard designs, like the Planck and Gherkin, but the Atreus had the advantage of better ergonomics. I really liked the design of the Ergodox keyboard, and the Atreus essentially is that made mobile:
I found the split halves and relatively large size (which are fantastic for stationary use at a desk) make me reluctant to use it on the lap, at a coffee shop, or on the couch, so that’s the primary use case I’ve targeted with the Atreus. It still has most of the other characteristics that make the Ergodox stand out, like mechanical Cherry switches, staggered columns instead of rows, heavy usage of the thumbs, and a hackable microcontroller with flexible firmware, but it’s dramatically smaller and lighter
I had the opportunity to try a kit-built Atreus in the Berlin Mechanical Keyboard meetup, and it felt nice. It was time to start the project.
Sourcing the parts
When building an Atreus the first decision is whether to go with the kit or hand-wire it yourself. Building from a kit is certainly easier, but since I’m a member of a hackerspace, doing a hand-wired build seemed like the way to go.
To build a custom keyboard, you need:
- Switches: in my case 37 Cherry MX blues and 5 Cherry MX blacks
- Diodes: one 1N4148 per switch
- Microcontroller: an Arduino Pro Micro in my keyboard
- Keycaps: started with recycled ones and later upgraded to DSA blanks
- Case: got a set of laser-cut steel plates
Even though Cherry — the maker of the most common mechanical key switches — is a German company, it is quite difficult to get switches in retail here. Luckily a fellow hackerspace member had just dismantled some old mechanical keyboards, and so I was able to get the switches I needed via barter.
The Cherry MX blues are tactile clicky switches that feel super-nice to type on, but are quite loud. For modifiers I went with Cherry MX blacks that are linear. This way there is quite a clear difference in feel between keys you typically hold down compared to the ones you just press.
The diodes and the microcontroller I ordered from Amazon for about 20€ total.
At first I used a set of old keycaps that I got with the switches, but once the keyboard was up and running I upgraded to a very nice set of blank DSA-profile keycaps that I ordered from AliExpress for 30€. That set came with enough keycaps that I’ll have myself covered if I ever build a second Atreus.
All put together, I think the parts ended up costing me around 100€ total.
When I received all the parts, some preparation steps were needed. Since the key switches were second-hand, I had to start by dismantling them and removing the old diodes that had been left inside some of them.
The keycaps I had gotten with the switches were super grimy, and so I ended up sending them to the washing machine. After that you could see that they were not new, but at least they were clean.
With the steel mounting plate there had been a slight misunderstanding: the plates I received were a few millimeters thicker than needed, so the switches wouldn’t “click” into place. While this could have been worked around with hot glue, we ended up filing the mounting holes down to the right thickness.
Wiring the keyboard
Once the mounting plate was in the right shape, I clicked the switches in and it was time to solder.
Hand-wiring keyboards is not that tricky. You attach a diode to each keyswitch, and then connect the switches in each row together via the diodes.
The two thumb keys are wired to be on the same column, but different rows.
Then each column is connected together via the other pin on the switches.
This is what the matrix looks like:
After these are done, connect a wire from each column and each row to an I/O pin on the microcontroller.
If you haven’t done it earlier, this is a good stage to test all connections with a multimeter!
Don’t mind the key labels in the picture above. These are the second-hand keycaps I started with. Since then I’ve switched to blank ones.
The default Atreus design has the USB cable connected directly to the microcontroller, meaning that you’ll have to open the case to change the cable. To mitigate that I wanted to add a USB breakout board to the project, and this being 2017, it felt right to go with USB-C.
I found some cheap USB-C breakout boards from AliExpress. Once they arrived, it was time to figure out how the spec works. Since USB-C is quite new, there are very few resources available on how to use it with microcontrollers. These tutorials were quite helpful:
Here is how we ended up wiring the breakout board. After this, you only have four wires to connect to the microcontroller: ground, power, and the positive and negative data pins.
This Atreus build log was useful for figuring out where to connect the USB wires on the Pro Micro. Once all was done, I had a custom, USB-C keyboard!
Now I have the Atreus working nicely on my new standing desk. Learning Colemak is a bit painful, but the keyboard itself feels super nice!
However, I’d still like to CNC mill a proper wooden case for the keyboard. I may update this post once that happens.
I’m also considering ordering an Atreus kit so I’d have a second keyboard, always packed for travel. The kit comes with a PCB, which might work better at airport security checks than the hand-wired build.
Another thing that is quite tempting is to make a custom firmware with MicroFlo. I have no complaints about how QMK works, but it’d be super cool to use our visual programming tool to tweak the keyboard live.
Big thanks to Technomancy for the Atreus design, and to XenGi for all the help during the build!
Saturday, 15 April 2017
Jens Lechtenbörger » English | 14:26, Saturday, 15 April 2017
The EFF is “collecting stories from people about the moment digital privacy first started mattering in their lives”. They also ask to share the stories on Twitter using the hashtag #privacystory. Here is my story, even if I’m not on Twitter. Feel free to share anyway, as usual under CC BY-SA 4.0.
I insist that my wife and I have got a right to privacy for our communication. Not only do I defend that right when we are in the same room, but also when we communicate over the Internet. Actually, I even dare to suggest that all people have got that right. (This point of view probably sounds radical to some; once upon a time such thoughts were worthy enough to be embedded into the Universal Declaration of Human Rights—see Article 12 there.)
For a long time I took that right for granted and believed that we in Germany and Europe did not need to fight for it. That belief was shattered in 2006 when the European parliament passed the Data Retention Directive: Throughout Europe, so-called metadata about the electronic communication (e-mails, phone calls, texts) of every law-abiding citizen was to be stored for 6 months, without any probable cause.
In response, I became interested in cryptography and anonymity networks to defend my human rights against unconstitutional violations. I have been using and advertising tools for digital self-defense since then. At the Internet Archive’s Wayback Machine you can still access an early version (August 2006) of my German website to spread knowledge about such tools.
Briefly, I recommend that you (1) educate yourself about and then (2) use free (not necessarily gratis) software to protect your online privacy, namely the Tor Browser for Web surfing, GnuPG to encrypt your e-mail, and messengers based on XMPP such as Conversations (or alternatively Signal).
Thursday, 13 April 2017
Henri Bergius | 00:00, Thursday, 13 April 2017
- Independent development and release lifecycle for each microservice
- Ensuring clear API boundaries between systems
- Ability to use technologies most applicable for each area of a system
In an ideal world, microservices are a realization of the Unix philosophy as applied to building internet services: writing programs that do one thing, and do it well; writing programs that work together.
Just use a message queue
However, when most people think about microservices, they think of systems that communicate with each other using HTTP APIs. I think this is quite limiting, and something that makes microservices a lot more fragile than they could be. Message queues provide a much better solution. This is mentioned in Martin Fowler’s Microservices article:
The second approach in common use is messaging over a lightweight message bus. The infrastructure chosen is typically dumb (dumb as in acts as a message router only) - simple implementations such as RabbitMQ or ZeroMQ don’t do much more than provide a reliable asynchronous fabric - the smarts still live in the end points that are producing and consuming messages; in the services.
When we were building The Grid, we went with an architecture heavily reliant on microservices communicating using message queues. This gave us several useful capabilities:
- Asynchronous processing
If you have heavy back-end operations or peak loads, a microservice might be swamped with work. If your web server only needs to send work to a queue and not wait for the result immediately, you have a lot more freedom in how to organize the work. Furthermore, you can split your service along different scalability characteristics: some systems may be network-bound, others CPU-bound, etc.
- Autoscaling
Since the work to be performed by your microservices is kept in a message queue, you can use the combination of queue length and typical processing times to automatically determine how many instances of each service you need. Ideally you can use this to ensure that your processing times stay consistent regardless of how many users are on your system at a given time
- Dead lettering
It is possible to configure RabbitMQ to place any failed operations into a dead letter queue. This gives you a full record of any failing operations, making it possible to inspect them manually, replay later, or produce new tests based on real-world failures
- Rate limiting
Sometimes you’re dealing with external HTTP APIs that are rate limited. Once a microservice starts hitting rate limits, you can keep new requests in a queue until the limit lifts again
- In-flight updates
A message queue can be configured so that non-acknowledged messages go back into the queue. This means that if one of your services crashes, or you deploy a new version of it, no work gets lost. When the service is back up again, it can pick up right where the previous instance left off
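To make the in-flight behaviour concrete, here is a toy model of a broker queue with manual acknowledgements. It is not the RabbitMQ API (the class and method names are made up for illustration); it only shows why unacknowledged messages survive a consumer crash:

```python
import collections

class SimpleQueue:
    """Toy model of a broker queue with manual acknowledgements."""

    def __init__(self):
        self.ready = collections.deque()  # messages waiting for a consumer
        self.unacked = {}                 # delivered but not yet acknowledged
        self._tag = 0

    def publish(self, msg):
        self.ready.append(msg)

    def deliver(self):
        msg = self.ready.popleft()
        self._tag += 1
        self.unacked[self._tag] = msg     # track until acknowledged
        return self._tag, msg

    def ack(self, tag):
        del self.unacked[tag]             # work is done, forget the message

    def requeue_unacked(self):
        # What a broker does when a consumer connection drops:
        for msg in self.unacked.values():
            self.ready.appendleft(msg)
        self.unacked.clear()

q = SimpleQueue()
q.publish("render-page-1")
q.publish("render-page-2")
tag, msg = q.deliver()   # a worker picks up "render-page-1"
q.requeue_unacked()      # the worker crashes before acking
print(len(q.ready))      # -> 2: no work was lost
```

The next worker instance picks up exactly where the crashed one left off.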
HTTP is something all backend developers are familiar with. With message queues you have to deal with some new concepts, and hence new tools are also needed.
Client libraries for talking to common message queues exist for pretty much every language. However, this still means you’ll have to handle things like message pre-fetch limits, acknowledging handled messages, and setting up queues yourself. And of course, keeping track of which service talks to which can become a burden.
We developed the MsgFlo tool to solve these problems. MsgFlo provides open source client libraries for a bunch of different programming languages, offering a simple model for handling message-related workflows.
To give you an overview of the dataflow between services, MsgFlo also provides a way to define the whole system as a flow-based programming graph. This means you’ll see the whole system visually, and can change connections between services directly from graphical tools like Flowhub.
As mentioned above, queue-based microservices make autoscaling quite easy. If you’re on Heroku, then our GuvScale can be used to automate scaling operations for all of your background dynos.
I wrote more about GuvScale in a recent blog post.
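The queue-length-based scaling idea can be sketched as a simple calculation. This is a hypothetical policy of my own for illustration, not how GuvScale is actually configured:

```python
import math

def workers_needed(queue_length, avg_seconds_per_job, drain_target_seconds, minimum=1):
    """Estimate how many workers are needed so the current backlog
    drains within the target time."""
    total_work = queue_length * avg_seconds_per_job   # seconds of pending work
    return max(minimum, math.ceil(total_work / drain_target_seconds))

# 600 queued jobs at 2 s each, to be drained within a minute:
print(workers_needed(600, 2.0, 60))  # -> 20
```

An autoscaler would re-evaluate this periodically from live queue metrics and adjust the number of running instances accordingly.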
If you’d like to explore using message queues for your services a bit more, here are a couple of good articles:
There are also some MsgFlo example applications available:
Wednesday, 12 April 2017
Jens Lechtenbörger » English | 14:23, Wednesday, 12 April 2017
A few weeks ago I installed Qubes OS on my PC at work. The project’s self-description is as follows:
Qubes is a security-oriented, free and open-source operating system for personal computers that allows you to securely compartmentalize your digital life.
Essentially, under Qubes you run different virtual machines (VMs), which are more or less isolated from each other, for different purposes. For example, you can use a so-called vault VM (that has no network connection) with Split GPG to keep your GnuPG keys in a safer place than would usually be possible on a single OS (you do encrypt your e-mails, don’t you?). Qubes also includes Whonix, a desktop OS that itself is based on virtualization to provide an environment from which all network traffic is automatically routed through the Tor anonymization network. In case you do not know Tor yet, I recommend that you invest some time to learn about that project and its role for digital self-defense.
VMs in Qubes are started from so-called templates that cannot be modified from inside the VM. So if you install software inside a VM (or some malware does so), those changes will be reverted when you close the VM.
A major feature of Qubes is the so-called disposable VM (dispVM for short) mechanism. A dispVM can be started quickly from a fresh template to host a single, potentially dangerous application such as a media player or an office tool; once the application exits, the dispVM (including any changes relative to the template) is destroyed. The dispVM functionality also includes services that convert untrusted PDF or image files to a trusted format which can be viewed safely in other VMs. Finally, from inside your “normal” VMs you can also start a dispVM application on a designated file of the “normal” VM; if you change the file’s contents inside the dispVM, the changed file version replaces the original version in the “normal” VM when the dispVM is destroyed. For example, you can open and edit .doc files saved from e-mail attachments (which are potentially dangerous) in LibreOffice inside a dispVM.
All of the above is pretty cool, and I use these features on a daily basis. By default, however, they are integrated into applications that I do not use, such as Thunderbird for e-mail or Nautilus as a file manager. For my favorite work environment, namely GNU Emacs, some configuration is necessary.
For GnuPG in Emacs, you should really use EasyPG, which has been the default for some years. To make use of Split GPG in Qubes, you need to configure epg-gpg-program to invoke a wrapper program that communicates with the vault VM:
(customize-set-variable 'epg-gpg-program "/usr/bin/qubes-gpg-client-wrapper")
The above configuration is sufficient if you compiled Emacs from the Git repository (March 2017, after bug#25947 was fixed). Otherwise, you need this:
(require 'epg-config)
(customize-set-variable 'epg-gpg-program "/usr/bin/qubes-gpg-client-wrapper")
(push (cons 'OpenPGP (epg-config--make-gpg-configuration epg-gpg-program))
      epg--configurations)
If you rely on signatures for Emacs’ package mechanism and if your Emacs is recent enough to have the variable package-gnupghome-dir (April 2017), you need to customize that to nil:
(setq package-gnupghome-dir nil)
Otherwise, as a temporary fix you may want to modify the script qubes-gpg-client-wrapper to ignore the unsupported option --homedir (in the template VM, similarly to how keyserver-options are removed with a comment on Torbirdy compatibility).
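The idea of stripping --homedir can be sketched as a tiny shell filter. This is a minimal illustrative sketch, not the actual qubes-gpg-client-wrapper script: the function name filter_homedir is made up for this example, and a real wrapper would rebuild the positional parameters (with set --) and exec the Qubes GPG client rather than echoing the result.

```shell
#!/bin/sh
# Hypothetical sketch: drop the unsupported --homedir option and its
# value from an argument list, keeping everything else.
filter_homedir() {
    skip=0
    out=""
    for arg in "$@"; do
        if [ "$skip" -eq 1 ]; then
            skip=0            # this was the value of --homedir; drop it
            continue
        fi
        if [ "$arg" = "--homedir" ]; then
            skip=1            # drop the option and remember to drop its value
            continue
        fi
        out="${out:+$out }$arg"
    done
    echo "$out"
}

# Example: the --homedir pair is removed, everything else passes through.
# prints: gpg --batch --verify msg.sig
filter_homedir gpg --batch --homedir /home/user/.emacs.d/elpa/gnupg --verify msg.sig
```

Note that joining the arguments into a single string loses quoting; it is good enough to show the filtering logic, but a production wrapper should keep the arguments as a list.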
For an integration of dispVM functionality into Gnus and Dired, you may want to take a look at qubes.el. Briefly, that library provides functionality to browse URLs and open or convert files and e-mail attachments in various VMs, depending on user customization.
Here is the relevant snippet from my configuration:
(require 'qubes)
(setq browse-url-browser-function 'qubes-browse)
;; Also allow to open PDF files in Disposable VMs.
;; Add the following line to ~/.mailcap:
;; application/*; qvm-open-in-dvm %s
(require 'mailcap)
(mailcap-parse-mailcaps)
;; Define key bindings to work on files in VMs.
(add-hook 'dired-mode-hook
          (lambda ()
            (define-key dired-mode-map "ö" 'jl-dired-copy-to-qvm)
            (define-key dired-mode-map "ä" 'jl-dired-open-in-dvm)
            (define-key dired-mode-map "ü" 'jl-dired-qvm-convert)))
(add-hook 'gnus-article-mode-hook
          (lambda ()
            (define-key gnus-article-mode-map "ä" 'jl-gnus-article-view-part-in-dvm)
            (define-key gnus-summary-mode-map "ä" 'jl-gnus-article-view-part-in-dvm)
            (define-key gnus-mime-button-map "ü" 'jl-gnus-mime-qvm-convert-and-display)
            (define-key gnus-article-mode-map "ü" 'jl-gnus-article-view-trusted-part-via-qubes)
            (define-key gnus-summary-mode-map "ü" 'jl-gnus-article-view-trusted-part-via-qubes)))
I chose umlauts for key bindings as Gnus seems to have assigned bindings to most keys already. Feel free to adapt.
DanielPocock.com - fsfe | 06:43, Wednesday, 12 April 2017
Jonas Öberg has recently blogged about Using Proprietary Software for Freedom. He argues that it can be acceptable to use proprietary software to further free and open source software ambitions if that is indeed the purpose. Jonas' blog suggests that each time proprietary software is used, the relative risk and reward should be considered and there may be situations where the reward is big enough and the risk low enough that proprietary software can be used.
A question of leadership
Many of the free software users and developers I've spoken to express frustration about how difficult it is to communicate to their family and friends about the risks of proprietary software. A typical example is explaining to family members why you would never install Skype.
Imagine a doctor who gives a talk to school children about the dangers of smoking and is then spotted having a fag at the bus stop. After a month, if you ask the children what they remember about that doctor, is it more likely to be what he said or what he did?
When contemplating Jonas' words, it is important to consider this leadership factor as a significant risk every time proprietary software or services are used. Getting busted with just one piece of proprietary software undermines your own credibility and posture now and well into the future.
Research (the widely quoted studies of Albert Mehrabian) suggests that when communicating with people, what they see and how you communicate makes up ninety-three percent of the impression you make; what you actually say to them is only seven percent. When giving a talk at a conference or a demo to a client, or communicating with family members in our everyday lives, using a proprietary application or a product or service that is obviously proprietary like an iPhone or Facebook will have far more impact than the words you say.
It is not only a question of what you are seen doing in public: somebody who lives happily and comfortably without using proprietary software sounds a lot more credible than somebody who tries to explain freedom without living it.
The many faces of proprietary software
One of the first things to consider is that even for those developers who have a completely free operating system, there may well be some proprietary code lurking in their BIOS or other parts of their hardware. Their mobile phone, their car, their oven and even their alarm clock are all likely to contain some proprietary code too. The risks associated with these technologies may well be quite minimal, at least until that alarm clock becomes part of the Internet of Things and can be hacked by the bored teenager next door. Accessing most web sites these days inevitably involves some interaction with proprietary software, even if it is not running on your own computer.
There is no need to give up
Some people may consider this state of affairs and simply give up, using whatever appears to be the easiest solution for each problem at hand without thinking too much about whether it is proprietary or not.
I don't think Jonas' blog intended to sanction this level of complacency. Every time you come across a piece of software, it is worth considering whether a free alternative exists and whether the software is really needed at all.
An orderly migration to free software
In our professional context, most software developers come across proprietary software every day in the networks operated by our employers and their clients. Sometimes we have the opportunity to influence the future of these systems. There are many cases where telling the client to go cold-turkey on their proprietary software would simply lead to the client choosing to get advice from somebody else. The free software engineer who looks at the situation strategically may find that it is possible to continue using the proprietary software as part of a staged migration, gradually helping the user to reduce their exposure over a period of months or even a few years. This may be one of the scenarios where Jonas is sanctioning the use of proprietary software.
On a technical level, it may be possible to show the client that we are concerned about the dangers but that we also want to ensure the continuity of their business. We may propose a solution that involves sandboxing the proprietary software in a virtual machine or a DMZ to prevent it from compromising other systems or "calling home" to the vendor.
As well as technical concerns about a sudden migration, promoters of free software frequently encounter political issues as well. For example, the IT manager in a company may be five years from retirement and not concerned about his employer's long-term ability to extricate itself from a web of Microsoft licenses after he or she has the freedom to go fishing every day. The free software professional may need to invest significant time winning the trust of senior management before being able to work around a belligerent IT manager like this.
No deal is better than a bad deal
People in the UK have probably encountered the expression "No deal is better than a bad deal" many times already in the last few weeks. Please excuse me for borrowing it. If there is no free software alternative to a particular piece of proprietary software, maybe it is better to simply do without it. Facebook is a great example of this principle: life without social media is great and rather than trying to find or create a free alternative, why not just do something in the real world, like riding motorcycles, reading books or getting a cat or dog?
Burning bridges behind you
For those who are keen to be the visionaries and leaders in a world where free software is the dominant paradigm, would you really feel satisfied if you got there on the back of proprietary solutions? Or are you concerned that taking such shortcuts is only going to put that vision further out of reach?
Each time you solve a problem with free software, whether it is small or large, in your personal life or in your business, the process you went through strengthens you to solve bigger problems the same way. Each time you solve a problem using a proprietary solution, not only do you miss out on that process of discovery but you also risk conditioning yourself to be dependent in future.
For those who hope to build a successful startup company or be part of one, how would you feel if you reach your goal and then the rug is pulled out from under you when a proprietary software vendor or cloud service you depend on changes the rules?
Personally, in my own life, I prefer to avoid and weed out proprietary solutions wherever I can and force myself to either make free solutions work or do without them. Using proprietary software and services is living your life like a rat in a maze, where the oligarchs in Silicon Valley can move the walls around as they see fit.
Monday, 10 April 2017
DanielPocock.com - fsfe | 20:01, Monday, 10 April 2017
Alan Turing's name and his work are well known to anybody with a theoretical grounding in computer science. Turing developed his theories well before anybody invented file sharing, overclocking or mass surveillance. In fact, Turing was largely working in the absence of any computers at all: the transistor was only invented in 1947 and the microchip, the critical innovation that has made computing both affordable and portable, only came in 1960, six years after Turing's death. To this day, the Turing Test remains a well known challenge in the field of Artificial Intelligence. The most prestigious prize in computing, the A.M. Turing Award from the ACM, equivalent to the Nobel Prize in other fields of endeavour, is named in Turing's honour. (This year's award went to another British scientist, Sir Tim Berners-Lee, inventor of the World Wide Web.)
Potentially far more people know of Alan Turing for his groundbreaking work at Bletchley Park and the impact it had on cracking the Nazis' Enigma machines during World War 2, giving the Allies an advantage against Hitler.
While in his lifetime, Turing exposed the secret communications of the Nazis, in his death, he exposed something manifestly repugnant about his own society. Turing's challenges with his sexuality (or Britain's challenge with it) are just as well documented as his greatest scientific achievements. The 2014 movie The Imitation Game tells Turing's story, bringing together the themes from his professional and personal life.
Had Turing chosen to flee British persecution by going abroad, he would be a refugee in the same sense as any person who crossed the seas to reach Europe today to avoid persecution elsewhere.
Please prove me wrong
In March, I blogged about the problem of racism that plagues Britain today. While some may have felt the tone of the blog was quite strong, I was in no way pleased to find my position affirmed by the events that occurred in the two days after the blog appeared.
Two days and two more human beings (both immigrants and both refugees) subjected to abhorrent and unnecessary acts of abuse in Great Britain. Both cases appear to be fuelled directly by the evil that has been oozing out of number 10 Downing Street since they decided to have a referendum on "Brexit".
What stands out about these latest crimes is not that they occurred (this type of thing has been going on for months now) but certain contrasts between their circumstances and, to a lesser extent, the fact that they occurred immediately after Theresa May formalized Britain's departure from the EU. One of the victims was almost beaten to death by a street gang, while the other was abused by men wearing uniforms. One was only a child, while the other is a mature adult who has been in the UK almost three decades, completely assimilated into British life, working and paying taxes. Both were doing nothing out of the ordinary at the time the abuse occurred: one had engaged in a conversation at a bus stop, the other was on a routine visit to a Government office. There is no evidence that either of them had done anything to provoke or invite the abhorrent treatment meted out to them by the followers of Theresa May and Nigel Farage.
The first victim, on 30 March, was Stojan Jankovic, a refugee from Yugoslavia who has been in the UK for 26 years. He had a routine meeting at an immigration department office where he was ambushed, thrown in the back of a van and sent to rot in a prison cell by Theresa May's gestapo. On Friday, 31 March, it was Reker Ahmed, a 17 year old Kurdish-Iranian beaten to the brink of death by a crowd in south London.
One of the more remarkable facts to emerge about these two cases is that while Stojan Jankovic was basically locked up for no reason at all, the street thugs who the police apprehended for the assault on Ahmed were kept in a cell for less than 48 hours and released again on bail. While the harmless and innocent Jankovic was eventually released after a massive public outcry, he spent more time locked up than that gang of violent criminals who beat Reker Ahmed.
In other words, Theresa May and Nigel Farage's Britain has more concern for the liberty of violent criminals than somebody like Jankovic who has been working and paying taxes in the UK since before any of those street thugs were born.
A deeper insight into Turing's fate
With gay marriage having been legal in the UK for a number of years now, the rainbow flag flying at the Tate and Sir Elton John achieving a knighthood, it becomes difficult for people to relate to the world in which Turing and many other victims were collectively classified by their sexuality, systematically persecuted by the state and ultimately died far sooner than they should have. (Turing was only 41 when he died).
In fact, the cruel and brutal forces that ripped Turing apart (and countless other victims too) haven't dissipated at all, they have simply shifted their target. The slanderous comments insinuating that immigrants "steal" jobs or that Islam is about terrorism are eerily reminiscent of suggestions that gay men abduct young boys or work as Soviet spies. None of these lies has any basis in fact, but repeat them often enough in certain types of newspaper and these ideas spread like weeds.
In an ironic twist, Turing's groundbreaking work at Bletchley Park was founded on the contributions of Polish mathematicians; their own country having been the first casualty of Hitler's war, they were also both immigrants and refugees in Britain. Today, under the Theresa May/Nigel Farage leadership, Polish citizens have been subjected to regular vilification by the media and some have even been killed in the street.
It is said that a picture is worth a thousand words. When you compare these two pieces of propaganda: a 1963 article in the Sunday Mirror advising people "How to spot a possible homo" and a UK Government billboard encouraging people to be on the lookout for people who look different, could you imagine the same type of small-minded and power-hungry tyrants crafting them, singling out a minority so as to keep the public's attention in the wrong place?
Many people have noticed that these latest UK Government posters portray foreigners, Muslims and basically anybody who is not white using a range of characteristics found in anti-Semitic propaganda from the Third Reich:
Do the people who create such propaganda appear to have any concern whatsoever for the people they hurt? How would Alan Turing have felt when he encountered propaganda like that from the Sunday Mirror? Do posters like these encourage us to judge people by their gifts in science, the arts or sporting prowess or do they encourage us to lump them all together based on their physical appearance?
It is a basic expectation of scientific methodology that when you repeat the same experiment, you should get the same result. What type of experiment are Theresa May and Nigel Farage conducting and what type of result would you expect?
Playing ping-pong with children
If anybody has any doubt that this evil comes from the top, take a moment to contemplate the 3,000 children who were baited with the promise of resettlement from the Calais "jungle" camp into the UK under the Dubs amendment.
When French authorities closed the "jungle" in 2016, the children were lured out of the camp and left with nowhere to go as Theresa May and French authorities played ping-pong with them. Given that the UK parliament had already agreed they should be accepted, was there any reason for Theresa May to dig her heels in and make these children suffer? Or was she just trying to prove her credentials as somebody who can bastardize migrants just the way Nigel Farage would do it?
How do British politicians really view migrants?
Parliamentarian Keith Vaz, former chair of the Home Affairs Select Committee (responsible for security, crime, prostitution and similar things), was exposed with young men from eastern Europe, encouraging them to take drugs before he ordered them: "Take your shirt off. I'm going to attack you." How many British MPs see foreigners this way? Next time you are groped at an airport security checkpoint, remember it was people like Keith Vaz and his committee who oversee those abuses, writing among other things that "The wider introduction of full-body scanners is a welcome development". No need to "take your shirt off" when these machines can look through it as easily as they can look through your children's underwear.
According to the World Health Organization, HIV/AIDS kills as many people as the September 11 attacks every single day. Keith Vaz apparently had no concern for the possibility he might spread this disease any further: the media reported he doesn't use any protection in his extra-marital relationships.
While Britain's new management continue to round up foreigners like Stojan Jankovic who have done nothing wrong, they chose not to prosecute Keith Vaz for his antics with drugs and prostitution.
Who is Britain's next Alan Turing?
Britain's next Alan Turing may not be a homosexual. He or she may be a child who was turned away by Theresa May's spat with the French at Calais, a migrant bundled into a deportation van by the gestapo (who are just following orders), or perhaps somebody of Muslim appearance set upon in the street by thugs energized by Nigel Farage. If you still have any uncertainty about what Brexit really means, this is it: a country that denies itself the opportunity to be great by subjecting itself to rule under the "divide and conquer" mantra of the colonial era.
Throughout the centuries, Britain has produced some of the most brilliant scientists of their time. Newton, Darwin and Hawking are just some of those who are even more prominent than Turing, household names around the world. One can only wonder what the history books will have to say about Theresa May and Nigel Farage however.
Next time you see a British policeman accosting a Muslim, whether it is at an airport, in a shopping centre, keeping Manchester United souvenirs or simply taking a photograph, spare a thought for Alan Turing and the era when homosexuals were their target of choice.
Friday, 07 April 2017
free software - Bits of Freedom | 08:12, Friday, 07 April 2017
The non-profit sector has come a long way over the past decades. A 1999 study [1] showed that in a few selected countries, Germany and France included, the non-profit sector employed about 5% of the total workforce. Taking this as the starting point, Helmut Anheier (Yale University) developed his thoughts about non-profit management in his paper Managing non-profit organisations: Towards a new approach [2] (2000).
Nonprofit organisations are subject to both centralising and decentralising tendencies. For example, environmental organisations are often caught between the centralising tendencies of a national federation that emphasises the need to “speak with one voice” in policy debates, and the decentralising efforts of local groups that focus on local needs and demands. - Helmut Anheier [2]
Reading this, it immediately struck me how similar the FSFE is in this regard, and it prompted me to reflect upon some of the tensions which Anheier has identified and to look at where the FSFE could be heading in the future with its management model.
According to Anheier, organisations in need of management principles are prone to copy-cat behaviour: non-profits that are no longer trivial but carry political and economic weight discover the need for management, look outside the organisation at models they believe to be successful, and mimic the management practices of those models.
So which models do non-profits copy? Traditionally, non-profits tended to copy the model from public agencies, turning non-profit organisations into quasi-public institutions. That's in part why larger, traditional, non-profit organisations have a governance structure that mimic democratic societies: local governance, elections, representative assemblies, and so on.
In the 1990s, governments were perceived to be weak, and non-profits therefore took inspiration for their governance and management structures from what was then seen as successful: for-profit businesses. Anheier believes this is in part why especially US non-profits are run with largely the same management principles as for-profit businesses. But there is a problem, of course: what non-profits learn from businesses is financial management, which in the business world is aimed at profit maximisation, not exactly compatible with non-profits' ideological pursuits.
Financial management is first and foremost formal management, not management of purpose and mission, i.e., those very aspects that are the raison d’être of non-profit organisations. The copying of business models and practices into the world of non-profit management has – for better or worse – made inroads via the financial route primarily, and less so on other, equally legitimate avenues. - Helmut Anheier [2]
Financial management in for-profits makes perfect sense as it forms the link between several of the stakeholders involved in the transactions: between sellers and buyers (product costs), between employers and employees (wages), between shareholders and management (dividends), and between the organisation and the public (taxes). Where for-profits have a largely financial bottom line, non-profits not only have a financial bottom line, they have a multitude of other bottom lines.
A non-profit organisation has several bottom lines because no price mechanisms are in place that can aggregate the interests of clients, staff, volunteers and other stakeholders that can match costs to profits, supply to demand, and goals to actual achievements. - Helmut Anheier [2]
Cutting through the organisation, Anheier demonstrates how this is reflected in the many layers of a non-profit. To start with, the board of a non-profit, just like the FSFE's members, largely focuses on the mission of the organisation, while management and financial matters are vested in the executive management.
Non-profits also have a highly complex interplay between staff, volunteers and stakeholders, both in terms of their engagement and their motivation. We can see this clearly in the FSFE as well: the organisational environment is complex, and there are different expectations and motivations between our local supporters (largely volunteers) and our international work (largely professional staff).
Not surprisingly, these different bottom lines are also often where conflicts in non-profit organisations appear. One management style cannot by itself satisfy the different bottom lines; various management styles can not only be envisioned, but are actually needed within each non-profit.
Rather than focusing on one of these bottom lines, management in non-profits becomes a matter of orienting and steering the organisation to position it within the management landscape, and of continuously adjusting that position according to needs. Anheier has identified four dimensions relevant to consider for non-profit management:
Tent or palace?
A palace organisation values predictability over improvisation, dwells on constraints rather than opportunities, borrows solutions rather than inventing them, defends past actions rather than devising new ones, and favours accounting over goal flexibility.
Typical palace organisations are service-providing non-profits, think-tanks and larger foundations. Tent organisations, in contrast, are focused on creativity, immediacy and initiative. They shy away from authority, escape permanence and focus on the here and now. These are typically civic action groups, citizen engagement groups, theatre troupes, and so on.
Technocratic culture or social culture?
The technocratic organisation "emphasise functional performance criteria, task achievement, set procedures and operate under the assumption that organisations are problem-solving machines." Social organisations emphasise "families" rather than machines. Religious and political organisations are traditionally more aligned with social-culture organisations, whereas hospitals and schools are more technocratic.
Hierarchy or network?
In a hierarchy, decision making is centralised and top-down. In a network organisation, decision making is bottom-up. When we talk about teams, clusters, horizontal relationships, and so on, we are often talking about different types of network organisations.
Outer-directed or inner-directed?
Inner-directed organisations tend to have a narrow view of their environment. They look inside the organisation and focus on their own objectives and world-view. Outer-directed organisations, by contrast, react strongly and immediately to the broader environment: they look at other organisations and the world outside and seek solutions there.
What does this mean for us?
The challenge of non-profit management, then, is to balance the different, often contradictory elements that are the component parts of non-profit organisations. How can this be done? In a first step, management has to locate and position the organisations in the complex push-and-pull of divergent models and underlying dilemmas and choices. Following such a position analysis, management can ask: “Is this where we want to be? Are we too much like a palace, too hierarchical, too technocratic and too outer-directed? Should we be more tent-like, more organised as networks, with a socio-cultural emphasis and our own resources and capabilities?” In this sense, we can easily see that non-profit management becomes more than just cost-cutting and more than just the exercise of financial control. Management becomes concerned with more than just one or two of the numerous bottom lines non-profit organisations have. In other words, management becomes not the controlling but the creative, enabling arm of non-profit organisations. - Helmut Anheier [2]
Looking at the FSFE today, I would argue that we are, at the moment, somewhere at the crossroads between a tent and a palace: neither here nor there. We are also somewhere in between a technocratic and a social organisation, perhaps somewhat more technocratic than social. And we are neither a network nor a hierarchy: our local groups are largely autonomous, but what is publicly communicated through our web pages and news about the work we do feels largely driven by a hierarchical model.
And finally, we are rather inner-directed. Not strictly, but we do tend to emphasise our own objectives and world-view, and if I were to pinpoint something we need to change at this point, it would be to become more outer-directed. Here is what I think this chart looks like in terms of the difference between the FSFE and our sister organisation, the FSF.
In my view, both organisations are rather inner-directed and technocratic, but the FSFE is still more of a network than a hierarchy, and more of a tent than a palace. More interestingly, looking only at the FSFE, where would I want to see it placed in the future? This is what I am aiming for at the moment:
At the moment, I'm quite comfortable with the FSFE being neither a tent nor a palace. We have both views in the organisation and act in different ways at different times. I think that's fine. I do think, however, that we need to develop in a more outer-directed way: to look more at the environment around us and let that be our guide in determining our actions, rather than focusing too much inward.
I also believe we can become more social and, most importantly, more like a network and less like a hierarchy. But moving in this direction is not a straight path, and this is perhaps where I would add something to the model Anheier presents: I believe network organisations work best when there is an underlying structure which provides the foundation for the network.
But that's also why I don't see the organisation becoming much less technocratic: the functional performance, set procedures and task achievements, which are second nature to technocratic organisations, are part of the foundation which makes a network organisation work. It's a network organisation where local and thematic groups not only have autonomy, but know they have that autonomy because the procedures of the organisation give it to them.
Monday, 03 April 2017
TSDgeos' blog | 22:08, Monday, 03 April 2017
The Akademy 2017 Call for Papers ends April 10th at 23:59:59 CEST.
Surely you have interesting stuff to share with the community, so go to https://akademy.kde.org/2017/cfp and submit a proposal!
free software - Bits of Freedom | 07:29, Monday, 03 April 2017
This past weekend, I participated in a training for "Skogsmulle" leaders and it gave me a first-hand view of why focusing on leadership of free and open projects is exactly the right thing to do. To give you some context before I talk about why this is relevant for free and open source projects -- and indeed many other projects and organisations as well -- I will need to share the Swedish outdoors with you. "Friluftsfrämjandet" is the largest outdoor association in Sweden, present in almost every nook and cranny of the country.
One of the activities they arrange, for which they're most famous, is "Skogsmulle" and "Skogsknytte", groups of children aged about 5-6 and 3-4 years old respectively. For about two hours during 6-10 occasions, children get to experience the magic of the outdoors. They learn about nature, the right of public access, and what it means in terms of showing respect and care for the outdoors, for other people and animals under the motto of "don't disturb - don't destroy."
One favorite activity has always been to take a number of different objects (plastic, organic, metal, etc), nail them to a board and place it in a known location in the forest. Visiting this board every other occasion, the children are invited to talk and reflect about what happens to the different objects when we leave them in the forest.
What I believe makes this whole machinery work, and why the association has been so successful, is its attention to educating its leaders. In order to be eligible to organise a group of "Skogsmulle", I was asked to participate in a three day course and to listen to a screencast of the association's fundamental values.
The course consisted of everything you need to know to carry out an activity with children: we learned the songs to sing, the rhymes to.. rhyme, the games to play, cooking on portable stoves, erecting wind shelters, and more. When I look back on these three days I believe I can easily count about 19 different games we played outside. I'm told it was quite a sight to see a group of 18 adult men and women play games designed for 3-6 year olds!
But the course wasn't only about what you need to do with the children. It also contained sessions about how to behave as a leader, what the values of the association are that you represent, how to plan your activities, the policies around the right of public access to nature, how the association works politically to affect change and what those policy goals are, how you evaluate and develop your own leadership, how the participants are insured, safety and security for participants, policies about the right to privacy (photographs, social media, etc), and what to do and how to act when accidents happen.
Participating in this, and reflecting on the fact that a lot of other organisations I've been part of also had similar leadership programs, makes me wonder: why don't we? Why don't we as free and open source projects, and why don't we in the Free Software Foundation Europe offer this to our participants?
Reflecting on this, it seems almost absurd that we ask someone to coordinate a local group without sharing with them the values of the organisation we're asking them to represent. That we don't pass on to them the activities we've seen work in other local groups, and that we don't give them the tools to improve themselves in their role as coordinator.
On this, I believe we have some work to do, and I believe the organisation will only be stronger once we do. Not only will we pass on the knowledge and experience needed to lead our volunteer activities, we'll also help our leaders associate with the organisation and learn more about its values, its beliefs and its policies, so they in turn can be more effective in spreading our message to the broader public.
Saturday, 01 April 2017
Matthias Kirschner's Web log - fsfe | 06:09, Saturday, 01 April 2017
This week I watched an old Tatort ("crime scene") episode from 2015. For those unfamiliar with it, Tatort is a German police procedural television series running on public TV since November 1970.
I am always very interested in what is running on computers in movies and TV shows; most of the time I am hoping for nice Free Software window managers, or programs like compilers, htop, or IRC sessions running in the background which make no sense at all in the context. Maybe some friends are a bit annoyed by that, as I sometimes stop the movie and rewind it so I can have a closer look. (At this point, sorry to them for my habit.)
In this Tatort the detective Ms Brandt is into computers, while her colleague Mr Borowski is absolutely not. After he hands over the headless victim's mobile phone with the note "that's something for you Ms Brandt, no head but a mobile phone", she connects it to her computer to analyse it.
That is when I got interested. She had a wallpaper with Edward Snowden, and the text "American Idol". But I could not identify the other icons on the screen. Afterwards I found out that people had also noticed that in 2015, but apparently nobody shared a screenshot. So I made one:
Screenshot from Tatort "Borowski und der Himmel über Kiel" at 1:17:24
If you have Free Software related screenshots of movies, please share them with me.
Friday, 31 March 2017
free software - Bits of Freedom | 06:34, Friday, 31 March 2017
Two weeks ago, I wrote about how free and open source software is failing its users. In the discussion afterwards, @kofish provided an insightful thought:
I don't see free software as for end-users. I prefer a free software community of people creating software that they want to use and pushing limits. It's knowledge sharing that sometimes produces end-user friendly software. - @kofish
This, of course, goes somewhat contrary to what the FSF and Richard Stallman consider free software: "Free software means software that respects users' freedom and community. Roughly, it means that the users have the freedom to run, copy, distribute, study, change and improve the software." (my emphasis) 1
I believe it's the interaction between users and developers which explains where free and open source software excels and where it fails. And it fails, emphatically, exactly in the areas where most observers ask: why aren't there any free and open source solutions in this area?
Any software project is in need of two things: developers and users. There's a bit more to the mix than that of course, but you do need at least these two bits, and a working relation between them. For some software, such as infrastructure tools, the user is the developer, and as such, there is no conflict between them.
It is in areas like this that free and open source software projects excel. When the user is able to participate in the project as a developer, it helps forge a bond of deep user engagement. For some users, this isn't possible, and there needs to be other ways of engagement.
Paul Boddie hinted at this in a response to some of the thoughts I outlined in my previous post.
Users do need to be able to engage with Free Software projects within the conventions of those projects, of course. But they also need the option of saying, “I think this could be better in this regard, but I am not the one to improve it.” And that may need the accompanying option: “Here is some money to pay someone to do it.” - Paul Boddie
Hiring someone to do the work for you isn't always easy. Even with money in hand, finding someone to actually do the job isn't trivial. There's often no one to call or email. Sometimes there's not even a mailing list to join. Even if there is: sending your request for (possibly trivial) help to a group of people you don't know is daunting to say the least.
When users aren't able to get the support they need in ways they are comfortable with, even if they're willing to put up the money for it or engage with the project to support it, the relation between developers and users breaks down, to the detriment of both.
Let's look at three examples of where free and open source software excels and consider what the user relation looks like in each.
Infrastructure: We've already mentioned this, but I'll repeat it. In infrastructure software, to a large extent, the user is the developer, and is able to interact and relate to the developer community in a way which is comfortable for both.
Automotive: I'd be hard pressed to name any new car which doesn't contain some free and open source software. But there's no expectation that you, as the driver, should interact with the community that developed the software used in the car. You'd turn to the manufacturer of the car for this. So from a user perspective, you know who to call when something doesn't work.
The manufacturer may or may not be able to help you, but regardless, there's an established protocol for this relation and on average, it tends to work. When a manufacturer accepts there's a problem which needs fixing, it becomes their problem to fix it. And regardless of whether the underlying cause is in a free and open source software component or not, there's a support structure in-house to fix the problem and deliver an updated version.
Software as a Service: Think Gmail, Github, all of these service offerings that are sprawling across the Internet today. Here again, I'd say that the vast majority of these are powered by free and open source software. As there's usually no requirement for the provider to offer the source code to the service to the user, there's also no expectation of, as a user, being able to fix a problem you have with the service by patching the source code.
That option just doesn't exist, and your only ability to influence the service you're getting is typically to buy the "professional" version of the service, which may or may not include some support option. Here again though, we end up in the same situation as with automotive: user relations become customer relations, and once problems are identified and agreed to be fixed, the in-house support structures of the service provider take care of fixing the problem regardless of which software component it exists in.
In neither of the latter two cases, though, does the user have any significant freedom. In most cases, the user is simply taken out of the equation and turned into a customer. And any company or organisation delivering software knows how to manage customers.
I'm making the claim that where free and open source software fails, is where there's an expectation of the user contributing to the software, but where there's no ability for the user to do just that.
There are very few accounting software packages available as free and open source software, not because there's no need for them, but simply because our current model necessitates the user being the developer, and there just aren't that many software-developer accountants out there. The packages which do exist tend to cater exactly to the group of people who are or employ developers. Free and open source software on the desktop is similarly languishing because most users of desktops can not contribute to the development.
Our reaction to this can be one of two.
We can accept that most users of the software developed will be customers and set up our projects to manage customers, not users. Doing this means providing the source code in a Git repository isn't enough. We'd need to actually deliver software, and design the interaction with the users as interactions with customers. And we'd need to accept that for most users, getting software under a free and open source license isn't going to be terribly important.
Or, if we do want users of a software to be able to enjoy the freedoms offered by the licenses we love, we need to change the development model to enable users to really and truly contribute to software development. This is certainly possible, but it's an arduous task and my gut feeling is that even if we did, most of our users would actually want to be customers anyway.
The two aren't necessarily mutually exclusive: we can (and should) make it easier for users to contribute to software. And we should keep offering software under free and open source licenses. Those who can and want to engage in the development can get the software under an open license and participate in the development community. For those who cannot or do not want to, we need to provide the option of being a customer.
Henri Bergius | 00:00, Friday, 31 March 2017
As I’m preparing for a NoFlo talk in Bulgaria Web Summit next week, I went through some older videos of my conference talks. Sadly a lot of the older ones are not online, but the ones I found I compiled in playlists:
Decoupling Content Management with Create and PHPCR from SymfonyLive Paris 2012
If you find any I’ve missed, please let me know!
Thursday, 30 March 2017
polina's blog | 14:46, Thursday, 30 March 2017
Copyright reform is in full force. After the European Commission (EC) revealed its notorious plans to modernise copyright to address the new 'dying businesses' age back in September 2016, the proposal has slowly moved into the hands of the European Parliament (EP).
With Malta currently in charge of the EU Council and copyright reform being its top priority, Maltese MEP Therese Comodini Cachia is leading EP efforts on copyright on behalf of the Committee on Legal Affairs (JURI). Her efforts are noteworthy: from the task of bringing the difficult, over-lobbied copyright reform into a more balanced form, to the attempt to do it in a fully transparent way (her Draft Report includes an annex listing the numerous lobby and advocacy groups who advised Comodini Cachia along the way). A 58-page report is a lot to handle, especially for the weak-nerved. However, no less noteworthy, and I would say even more remarkable, is the Draft Opinion of the Scottish MEP Catherine Stihler of the Committee on the Internal Market and Consumer Protection (IMCO). Only slightly shorter than Comodini Cachia’s draft report (in length, but not in content), the 43-pager includes several points and proposals that the JURI report unfortunately falls short on.
Text and data mining
One of the slightly more progressive points in the EC proposal is the proposed new mandatory exception for text and data mining (TDM) (Article 3). Well, at least the EC tried to give TDM a mandatory exception, but the effort was completely neutralised in the same article by granting rightholders an extremely broad right to apply any necessary technical protection measure (TPM) in order to “secure” their works. In addition to that, the scope of the new exception is extremely limited: the mandatory exception covers only research organisations with lawful access to the copyrighted works, and excludes everyone else with the same lawful access to copyrighted material.
The Draft Opinion rightfully states that “a more challenging approach” in regards to TDM exception should have been taken by the European Commission:
“[...]the Rapporteur believes that limiting the proposed EU exception to a narrow definition of research organisations is counterproductive”
The rapporteur then goes even further by attempting to limit the scope of dangerously far-reaching technical protection measures and proposes:
“a simple rule, which does not discriminate between users or purposes and ensures a strictly limited and transparent usage of technological protection measures where appropriate[emphasis added]“
As such, the suggested amendments to the EC proposal by the rapporteur Stihler expand the notion of TDM exception beneficiaries to include everybody, as in “any individual or entity, public or private, with lawful access to mine content”.
Concerning TPM, the rapporteur strictly limits their far-reaching scope in the amendment 32 by stating that:
“any contractual provision or technical protection contrary to the exception[...] shall be unenforceable.”
This is a significant improvement to the existing TPM and other digital ‘restrictions’ management (DRM) system established by the InfoSoc Directive (2001/29/EC); and to the currently proposed text in the EC copyright proposal.
The limited scope of TPM is slightly watered down in Amendment 33, where the rapporteur proposes to allow rightholders to apply TPM strictly in cases where these are needed to ensure the security and integrity of the networks and databases where the works are hosted. The proposed amendment shifts the focus of the EC's paragraph to a more favourable situation where, first, TPM cannot be used to interfere with the legitimate exercise of TDM, and second, where TPM are used at all, it is only in exceptional circumstances for network and database security.
The Draft Report by Comodini Cachia proposes to delete the mention of TPM in the TDM exception. This is a semi-favourable approach: merely avoiding TPM does not address its shortcomings in the existing framework established by the InfoSoc Directive, which does not protect any exceptions from far-reaching TPM. Hence, the explicit restriction on digital restrictions proposed by the Draft Opinion is a much stronger stance in the right direction.
Publishers’ neighbouring right
The EC initially proposed a completely arbitrary quasi-copyright for press publishers (a “neighbouring right”) for the digital uses of works authored by journalists, justifying it with the need to “ensure the sustainability of the press publishing sector” at the expense of online services (Article 11). This is a clear indication of copyright gone too far, where being an author is no longer enough to reap the benefits of one's intellectual creation. In fact, authors are not mentioned even once in the EC proposal. The focus of copyright in the digital age, according to the EC, has shifted to industries and ‘intermediaries’ in a broader sense.
The rapporteur Stihler is, however, not convinced by the EC reasoning, and believes that “introduction of a press publishers right under Article 11 lacks sufficient justification.”
The Rapporteur believes that there is no need to create a new right as publishers have the full right to opt-out of the ecosystem any time using simple technical means.[...] There are potentially more effective ways of promoting high-quality journalism and publishing via tax incentives instead of adding an additional layer of copyright legislation.
The proposed changes to the EC text are, thus, deletion of Article 11 that introduces new ancillary right to press publishers.
The Draft Report, unfortunately, is less ambitious in this regard. The rapporteur Comodini Cachia proposes to replace the neighbouring right for press publishers with a right to bring proceedings in their own name before tribunals against infringers of the rights held by the authors of the works contained in their press publication, and to be presumed to have representation over the works contributed. This essentially gives publishers the right to use other people's copyright to go after everybody they consider to be an ‘infringer’.
The current framework for Internet Service Providers' (ISP) liability for the actions of third parties who use their services to infringe copyright is set in the e-Commerce Directive (2000/31/EC). According to the e-Commerce rules, ISPs enjoy a certain amount of ‘safe harbour’ when it comes to the actions performed by their users. In addition to the ‘safe harbour’, there is no general monitoring obligation to actively seek out copyright infringements on their services, something prohibited inter alia by the European Court of Justice. However, ISPs need to act as soon as they obtain knowledge (or should have obtained such knowledge) of a potential copyright infringement on their services (the so-called notice-and-take-down procedure).
With the copyright reform, the EC promised not to touch e-Commerce and the rules of intermediary liability; however, the proposal indicates the opposite. The EC imposed on *any* ISP that *stores and provides to the public access to large amounts of works or other subject matter uploaded by their users* an obligation to take [technical] measures to ensure the “functioning of agreements concluded with rightholders for the use of their works”. The EC goes even further by explicitly mentioning the technology that might be compliant with that article: Content ID, used mostly by YouTube (aka Google). It is noteworthy that Google itself has stated that technology is not the answer when it comes to such sensitive matters as copyright exceptions and users' rights. Furthermore, technology cannot be built to do lawyers' jobs and verify whether the claimed content actually belongs to the rightholder.
The aforementioned provision has received a substantial backlash, mostly because it goes against and beyond the e-Commerce Directive and all existing jurisprudence concerning the prohibited monitoring obligation, making the proposal highly questionable in a legal sense.
The rapporteur Stihler has duly recognised the imbalance and the inherent incompatibility of Article 13 and existing legal framework:
The Rapporteur firmly supports the notion that the value gap has to be addressed and emphasises that creators and rights holders are to receive a fair and balanced compensation for the exploitation of their works from online service providers. However, this should be achieved without negative impacts on the digital economy or internet freedoms of consumers. The current wording of Article 13 fails to achieve this.
The proposed measures are also very technologically specific, according to the rapporteur Stihler:
The use of filtering potentially harms the interests of users, as there are many legitimate uses of copyright content that filtering technologies are often not advanced enough to accommodate.
The proposed Article 13 is, hence, modified by the Amendment 63 that recognises the ‘safe harbour’ provisions and the prohibition of monitoring obligation under the e-Commerce Directive.
The Draft Report also addresses Article 13 in a somewhat similar manner; however, it does not address technological neutrality in the same way as the Draft Opinion. The ‘measures’ the amended Article 13 refers to are not balanced with a prohibition of the monitoring obligation as in the Draft Opinion. In addition, the focus of the article in the IMCO Draft Opinion is completely shifted to balanced agreements between service providers and rightholders, while the JURI Draft Report follows the rationale of the EC proposal: appropriate and proportionate measures to ensure the functioning of agreements concluded with rightholders for the use of their works. While the difference seems minimal, it is actually crucial to shift the discourse in a more technologically neutral direction by emphasising the conclusion of ‘balanced agreements’, rather than an obligation to impose merely technical ‘measures’.
User generated content
In addition, the rapporteur Stihler proposes a completely new exception in Article 13, specifically designed for ‘user generated content’ (UGC; Amendment 66), which is missing from both the EC proposal and the JURI Draft Report:
“[...]in order to allow natural persons to use an existing work or other subject matter in the creation of a new work or other subject- matter, and use the new work or other subject matter, provided that:
(a) the work or other subject-matter has already been lawfully made available to the public;
(b) the use of the new work is done solely for non-commercial purposes;
(c) the source -including, if available, the name of the author, performer, producer, or broadcaster -is indicated;
(d) there is a certain level of creativity in the new work which substantially differentiates it from the original work”
UGC is a pure creation of the online environment and its recognition is one step towards a reasonable, modernised copyright framework that adapts to the changing digital environment and realities, instead of frantically trying to punish us for moving forward in our creative thinking online.
The IMCO Draft Opinion has addressed several shortcomings of the EC copyright proposal; however, it is only one of the voices within the Parliament. Consequently, it is still up for a vote in the committee after another round of amendments from its members. On the other hand, the aforementioned Comodini Cachia draft report – the main Parliamentary effort – is less ambitious in its wording, although it does attempt to address the main shortcomings of the EC proposal. The copyright saga is, hence, far from over.
Wednesday, 29 March 2017
Brexit: If it looks like racism, if it smells like racism and if it feels like racism, who else but a politician could argue it isn't?
DanielPocock.com - fsfe | 05:33, Wednesday, 29 March 2017
Since the EU referendum got under way in the UK, it has become an almost everyday occurrence to turn on the TV and hear some politician explaining "I don't mean to sound racist, but..." (example)
Of course, if you didn't mean to sound racist, you wouldn't sound racist in the first place, now would you?
The reality is, whether you like politics or not, political leaders have a significant impact on society and the massive rise in UK hate crimes, including deaths of Polish workers, is a direct reflection of the leadership (or profound lack of it) coming down from Westminster. Maybe you don't mean to sound racist, but if this is the impact your words are having, maybe it's time to shut up?
Choosing your referendum
Why choose to have a referendum on immigration issues and not on any number of other significant topics? Why not have a referendum on nuking Mr Putin to punish him for what looks like an act of terrorism against the Malaysian Airlines flight MH17? Why not have a referendum on cutting taxes or raising speed limits, turning British motorways into freeways or an autobahn? Why choose to keep those issues in the hands of the Government, but invite the man-in-a-white-van from middle England to regurgitate Nigel Farage's fears and anxieties about migrants onto a ballot paper?
Even if David Cameron sincerely hoped and believed that the referendum would turn out otherwise, surely he must have contemplated that he was playing Russian Roulette with the future of millions of innocent people?
Let's start at the top
For those who are fortunate enough to live in parts of the world where the press provides little exposure to the antics of British royalty, an interesting fact you may have missed is that the Queen's husband, Prince Philip, Duke of Edinburgh is actually a foreigner. He was born in Greece and has Danish and German ancestry. Migration (in both directions) is right at the heart of the UK's identity.
Home office minister Amber Rudd recently suggested British firms should publish details about how many foreign people they employ and in which positions. She argued this is necessary to help boost funding for training local people.
If that is such a brilliant idea, why hasn't it worked for the Premier League? It is a matter of public knowledge how many foreigners play football in England's most prestigious division, so why hasn't this caused local clubs to boost training budgets for local recruits? After all, when you consider that England hasn't won a World Cup since 1966, what have they got to lose?
All this racism, it's just not cricket. Or is it? One of the most remarkable cricketers to play for England in recent times, Kevin Pietersen, dubbed "the most complete batsman in cricket" by The Times and "England's greatest modern batsman" by the Guardian, was born in South Africa. In the five years he was contracted to the Hampshire county team, he only played one match, because he was too busy representing England abroad. His highest position was nothing less than becoming England's team captain.
Are the British superior to every other European citizen?
One of the implications of the rhetoric coming out of London these days is that the British are superior to their neighbours, entitled to have their cake and eat it too, making foreigners queue up at Paris' Gare du Nord to board the Eurostar while British travelers should be able to walk or drive into European countries unchallenged.
This superiority complex is not uniquely British, you can observe similar delusions are rampant in many of the places where I've lived, including Australia, Switzerland and France. America's Donald Trump has taken this style of politics to a new level.
Look in the mirror Theresa May: after British 10-year old schoolboys Robert Thompson and Jon Venables abducted, tortured, murdered and mutilated 2 year old James Bulger in 1993, why not have all British schoolchildren fingerprinted and added to the police DNA database? Why should "security" only apply based on the country where people are born, their religion or skin colour?
In fact, after Brexit, people like Venables and Thompson will remain in Britain while a Dutch woman, educated at Cambridge and with two British children will not. If that isn't racism, what is?
Running foreigners off the roads
Theresa May has only been Prime Minister for less than a year but she has a history of bullying and abusing foreigners in her previous role in the Home Office. One example of this was a policy of removing driving licenses from foreigners, which has caused administrative chaos and even taken away the licenses of many people who technically should not have been subject to these regulations anyway.
Shouldn't the DVLA (Britain's office for driving licenses) simply focus on the competence of somebody to drive a vehicle? Bringing all these other factors into licensing creates a hostile environment full of mistakes and inconvenience at best and opportunities for low-level officials to engage in arbitrary acts of racism and discrimination.
Of course, when you are taking your country on the road to nowhere, who needs a driving license anyway?
What does "maximum control" over other human beings mean to you?
The new British PM has said she wants "maximum control" over immigrants. What exactly does "maximum control" mean? Donald Trump appears to be promising "maximum control" over Muslims, Hitler sought "maximum control" over the Jews, hasn't the whole point of the EU been to avoid similar situations from ever arising again?
This talk of "maximum control" in British politics has grown like a weed out of the UKIP. One of their senior figures has been linked to kidnappings and extortion, which reveals a lot about the character of the people who want to devise and administer these policies. Similar people in Australia aspire to jobs in the immigration department where they can extort money out of people for getting them pushed up the queue. It is no surprise that the first member of Australia's parliament ever sent to jail was put there for obtaining bribes and sexual favours from immigrants. When Nigel Farage talks about copying the Australian immigration system, he is talking about creating jobs like these for his mates.
Even if "maximum control" is important, who really believes that a bunch of bullies in Westminster should have the power to exercise that control? Is May saying that British bosses are no longer competent to make their own decisions about who to employ or that British citizens are not reliable enough to make their own decisions about who they marry and they need a helping hand from paper-pushers in the immigration department?
Echoes of the Third Reich
Most people associate acts of mass murder with the Germans who lived in the time of Adolf Hitler. These are the stories told over and and over again in movies, books and the press.
Look more closely, however, and it appears that the vast majority of Germans were not in immediate contact with the gas chambers. Even Goebbels' secretary writes that she was completely oblivious to it all. Many people were simply small cogs in a big bad machine. The clues were there, but many of them couldn't see the big picture. Even if they did get a whiff of it, many chose not to ask questions, to carry on with their comfortable lives.
Today, with mass media and the Internet, it is a lot easier for people to discover the truth if they look, but many are still reluctant to do so.
Consider, for example, the fingerprint scanners installed in British post offices and police stations to fingerprint foreigners and criminals (as if they have something in common). If all the post office staff refused to engage in racist conduct the fingerprint scanners would be put out of service. Nonetheless, these people carry on, just doing their job, just following orders. It was through many small abuses like this, rather than mass murder on every street corner, that Hitler motivated an entire nation to serve his evil purposes.
Technology like this is introduced in small steps: first it was used for serious criminals, then anybody accused of a crime, then people from Africa and next it appears they will try and apply it to all EU citizens remaining in the UK.
How will a British man married to a French woman explain to their children that mummy has to be fingerprinted by the border guard each time they return from vacation?
The Nazis pioneered biometric technology with the tracking numbers branded onto Jews. While today's technology is electronic and digital, isn't it performing the same function?
There is no middle ground between "soft" and "hard" Brexit
An important point for British citizens and foreigners in the UK to consider today is that there is no compromise between a "soft" Brexit and a "hard" Brexit. It is one or the other. Anything less (for example, a deal that is "better" for British companies and worse for EU citizens) would imply that the British are a superior species and it is impossible to imagine the EU putting their stamp on such a deal. Anybody from the EU who is trying to make a life in the UK now is playing a game of Russian Roulette - sure, everything might be fine if it morphs into "soft" Brexit, but if Theresa May has her way, at some point in your life, maybe 20 years down the track, you could be rounded up by the gestapo and thrown behind bars for a parking violation. There has already been a five-fold increase in the detention of EU citizens in British concentration camps and they are using grandmothers from Asian countries to refine their tactics for the efficient removal of EU citizens. One can only wonder what type of monsters Theresa May has been employing to run such inhumane operations.
This is not politics
Edmund Burke's quote "The only thing necessary for the triumph of evil is for good men to do nothing" comes to mind on a day like today. Too many people think it is just politics and they can go on with their lives and ignore it. Barely half the British population voted in the referendum. This is about human beings treating each other with dignity and respect. Anything less is abhorrent and may well come back to bite.
Tuesday, 28 March 2017
tobias_platen's blog | 16:50, Tuesday, 28 March 2017
DVB-T will be switched off on 29/03/2017. I have seen ads for freenet TV, for which I would need a new DRM-encumbered receiver. Of course I won't buy such a thing and will use a different technology instead. I have a DVB-C terminal in one room, but my TV set is in the other one, so I thought I could transmit the TV signal from the source to a Raspberry Pi connected to the TV set. On the source side there is a FRITZ!WLAN Repeater DVB‑C, which is compatible with free software applications such as Kodi and VLC.
From June 2017 the FRITZ!WLAN Repeater DVB‑C must be compliant with the directives 2014/53/EU, 2009/125/EC and 2011/65/EU. The first one replaces older directives and has been dubbed the EU Radio Lockdown Directive. This directive harms Freifunk and many other free software projects: as there is no harmonised standard for 5 GHz WLAN, many companies fear that they won't be allowed to sell those products legally.
The FRITZ!WLAN Repeater DVB‑C also has 5 GHz WLAN, but I only use 2.4 GHz, because 5 GHz WLAN hardware does not yet work well with GNU/Linux. The AR9271 chipset has free firmware and only supports 2.4 GHz. It works well with the Linux-libre kernel and its firmware can be modified to support new features such as mesh networks.
I recently replaced my compulsory router (an Arcadyan VGV7510KW22) with a FRITZ!Box 7430.
If you have an Entertain or Vodafone TV package you can also watch TV using VLC. An IPTV-compatible router (such as a FRITZ!Box) is needed; a set-top box is often included at no additional cost when you switch to NGN. But I don't want to use a pay-TV product which requires me to use TiVo hardware.
TiVo (formerly Rovi Corporation and Macrovision Solutions Corporation) is one
of those companies that uses a locked down GNU/Linux operating system on their
digital video recorders (DVRs). Macrovision was an analogue copy protection scheme for video tapes (VHS). Rovi is mentioned in the manual of the Vodafone TV Center.
So if I switch to Vodafone I won't buy the TV package, and I may not get the multicast IP streams which contain the TV programmes. In any case I will have to pay for the cable TV terminal, which is part of my apartment. The repeater only supports unencrypted (DRM-free) TV, so I paid 86 euros once instead of 8 euros per month for an encumbered pay-TV product that I wouldn't use.
English Planet – Dreierlei | 14:08, Tuesday, 28 March 2017
Last night I used Big Google to look up information about Germans, and it was thrilling to see how differently Google’s Autocomplete feature completes the question in different languages if you ask: “Why are there so many Germans … ?”
As I understand it, Google’s Autocomplete uses an algorithm that takes particular note of previous search queries and offers you the three or four most frequently used completions of your sentence. If that is the case, you can see which prejudices exist, or are at least partly reflected, in the Autocomplete suggestions. Try it yourself: do not type the whole question to the end, just stop after the first three letters of “Germans” in your language.
Here are the results for the languages I know:
Well, it looks like English-speaking people hold Germans in high regard. If you ask Google “Why are there so many ger”, it tries to autocomplete with German composers, philosophers and scientists:
Spanish speakers, on the other hand, seem to feel overcrowded or colonised by Germans. When you type “Porque hay tantos ale” into Google, you see that people tend to ask why there are so many Germans in Mallorca, Chile, Argentina and Paraguay.
And the Germans themselves bring up their best Nazi attitude. If you type “Warum gibt es so viele deu” into Google, Autocomplete proposes turning the question around and asking why there are so many Turkish people and foreigners in Germany.
If you like, let me know or leave a comment about what Google’s Autocomplete comes up with in your language.
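Out of curiosity, the suggestions can also be fetched programmatically. Google exposes an unofficial suggest endpoint that returns JSON; note that the hostname, the `client=firefox` parameter and the response format are undocumented assumptions that may change at any time. A small sketch that builds the request URLs for the three questions above:

```python
from urllib.parse import urlencode

# Unofficial, undocumented endpoint; may change or disappear without notice.
SUGGEST_ENDPOINT = "https://suggestqueries.google.com/complete/search"

def build_suggest_url(query, lang="en"):
    """Build a request URL for Google's suggest endpoint.

    The response (if fetched) is JSON of the form [query, [suggestions]].
    """
    params = {"client": "firefox", "hl": lang, "q": query}
    return SUGGEST_ENDPOINT + "?" + urlencode(params)

# The three partial questions from the post, one per language.
for lang, prefix in [("en", "Why are there so many ger"),
                     ("es", "Porque hay tantos ale"),
                     ("de", "Warum gibt es so viele deu")]:
    print(build_suggest_url(prefix, lang))
```

Fetching each URL with an ordinary HTTP client and decoding the JSON should roughly reproduce the results described above, although the suggestions are known to vary by region and over time.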
Monday, 27 March 2017
Paul Boddie's Free Software-related blog » English | 22:44, Monday, 27 March 2017
My last post raised the issue of funding Free Software, which I regard as an essential way of improving both the user and developer experiences around the use and creation of Free Software. In short, asking people to just volunteer more of their spare time, in order to satisfy the erroneous assumption that Free Software must be free of charge, is neither sustainable nor likely to grow the community around Free Software to which both users and developers belong.
In response, Eric Jones raised some pertinent themes: attitudes to paying for things; judgements about the perceived monetary value of those things; fund-raising skills and what might be called business or economic awareness; the potentially differing predicaments of infrastructure projects and user-facing projects. I gave a quick response in a comment of my own, but I also think that a more serious discussion could be had to try and find real solutions to the problem of sustainable funding for Free Software. I do not think that a long message to the FSFE discussion mailing list is the correct initial step, however, and so this article is meant to sketch out a few thoughts and suggestions as a prelude to a proper discussion.
On Paying for Things
It seems to me that people who can afford it do not generally have problems finding ways to spend their money. For instance, how many people spend tens of euros, dollars or pounds per month on mobile phone subscriptions with possibly-generous data allowances that they do not use? What about insurance, electricity, broadband Internet access, television/cable/streaming services, banking services? How often do we hear various self-appointed “consumer champions” berate people for overspending on such things, insisting that people could shop around and save money? It is almost as if this is the primary purpose of the “developed world” human: to continually monitor the “competitive landscape” as each of its inhabitants adjusts its pricing and terms, nudging their captive revenues upward until such antics are spotted and punished with a rapid customer relationship change.
I may have noted before that many industries have moved to subscription models, presumably having noticed that once people have been parked in a customer relationship with a stable and not-particularly-onerous outgoing payment once a month, they will most likely stay in that relationship unless something severe enough occurs to overcome the effort of switching. Now, I do not advocate taking advantage of people in this way, but disregarding the matter of customer inertia and considering subscription models specifically, it is to be noted that these are used by various companies in the enterprise sector and have even been tried with individual “consumers”. For example, Linspire offered a subscription-based GNU/Linux distribution, and Eazel planned to offer subscription based access to online services providing, amongst other things, access to Free Software repositories.
(The subscription model can also be dubious in situations where things should probably be paid for according to actual usage, rather than some formula being cooked up to preserve a company’s balance sheet. For instance, is it right that electricity should be sold at a fixed per-month subscription cost? Would that product help uphold a company’s obligations to source renewable energy or would it give them some kind of implied permission to buy capacity generated from fossil fuels that has been dumped on the market? This kind of thing is happening now: it is not a mere thought exercise.)
Neither Linspire nor Eazel survived to the present day, the former being unable to resist the temptation to mix proprietary software into its offerings, thus alienating potential customers who might have considered a polished Debian-based distribution product three years before Ubuntu entered the scene. Eazel, meanwhile, burned through its own funding and closed up shop even before Linspire appeared, leaving the Nautilus file manager as its primary legacy. Both companies funded Free Software development, with Linspire funding independent projects, but obviously both companies decided internally how to direct funds to deserving projects, not really so unlike how businesses might normally fund Free Software.
On Infrastructure and User-Facing Software
Would individuals respond well to a one-off payment model for Free Software? The “app” model, with accompanying “store”, has clearly been considered as an option by certain companies and communities. Some have sought to provide such a store stocked with Free Software and, in the case of certain commercial services, proprietary applications. Others seek to provide their own particular software in existing stores. One hindrance to the adoption of such stores in the traditional Free Software realm is the availability of mature distribution channels for making software available, and such channels are typically freely accessible unless an enterprise software distribution is involved.
Since there seems to be an ongoing chorus of complaint about how “old” distribution software generally is, one might think that an opportunity might exist for more up-to-date applications to be delivered through some kind of store. Effort has gone into developing mechanisms for doing this separately from distributions’ own mechanisms, partly to prevent messing up the stable base system, partly to deliver newer versions of software that can be supported by the base system.
One thing that would bother me about such initiatives, even assuming that the source code for each “product” was easily obtained and rebuilt to run in the container-like environment inevitably involved, would be the longevity of the purchased items. Will the software keep working even after my base system has been upgraded? How much effort would I need to put in myself? Is this something else I have to keep track of?
For creators of Free Software, one concern would be whether it would even be attractive for people to obtain their software via such a store. As Eric notes, applications like The GIMP resemble the kind of discrete applications that people routinely pay money for, if marketed at them by proprietary software companies. But what about getting a more recent version of a mail server or a database system? Some might claim that such infrastructure things are the realm of the enterprise and that only companies would pay for this, and if individuals were interested, it would only be a small proportion of the wider GIMP-using audience.
(One might argue that the never-ending fixation on discrete applications always was a factor undermining the vision of component-based personal computing systems. Why deliver a reliable component that is reused by everyone else when you could bundle it up inside an “experience” and put your own brand name on it, just as everyone else will do if you make a component instead? The “app” marketplace now is merely this phenomenon taken to a more narcissistic extreme.)
On Sharing the Revenue
This brings us to how any collected funds would be shared out. Corporate donations are typically shared out according to corporate priorities, and communities probably do not get much of a say in the matter. In a per-unit store model, is it fair to channel all revenue from “sales” of The GIMP to that project or should some of that money be reserved for related projects, maybe those on which the application is built or maybe those that are not involved but which could offer complementary solutions? Should the project be expected to distribute the money to other projects if the storefront isn’t doing so itself? What kind of governance should be expected in order to avoid squabbles about people taking an unfair share of the money?
Existing funding platforms like Gratipay, which solicit ongoing contributions or donations as the means of funding, avoid such issues entirely by defining project-level entities which receive the money, and it is their responsibility to distribute it further. Maybe this works if the money is not meant to go any further than a small group of people with a well-defined entitlement to the incoming revenues, but with relationships between established operating system distributions and upstream projects sometimes becoming difficult, with distributions sometimes demanding an easier product to work with, and with upstream projects sometimes demanding that they not be asked to support those distributions’ users, the addition of money risks aggravating such tensions further despite having the potential to reduce or eliminate them if the sharing is done right.
On Worthy Projects
This is something about which I should know just a little. My own Free Software endeavours do not really seem to attract many users. It is not that they have attracted no other users besides myself – I have had communications over the years with some reasonably satisfied users – but my projects have not exactly been seen as meeting some urgent and widespread need.
(Having developed a common API for Python Web applications, and having delivered software to provide it, deliberately basing it on existing technologies, it was particularly galling to see that particular software labelled by some commentators as contributing to the technology proliferation problem it was meant to alleviate, but that is another story.)
What makes a worthy project? Does it have to have a huge audience or does it just need to do something useful? Who gets to define “useful”? And in a situation where someone might claim that someone else’s “important” project is a mere vanity exercise – doing something that others have already done, perhaps, or solving a need that “nobody really has” – who are we to believe if the developer seeks funding to further their work? Can people expect to see money for working on what they feel is important, or should others be telling them what they should be doing in exchange for support?
It is great that people are able to deliver Free Software as part of a business because their customers can see how that software addresses their needs. But what about the cases where potential customers are not able to see how some software might be useful, even essential, to them? Is it not part of the definition of marketing to help people realise that they need a product or solution? And in a world where people appear unaware of the risks accompanying proprietary software and services, might it not be part of our portfolio of activities to not only develop solutions before the intended audience has realised their importance, but also to try and communicate their importance to that audience?
Even essential projects have been neglected by those who use them to perform essential functions in their lives or in their businesses. It took a testimonial by Edward Snowden to get people to think about how little funding GNU Privacy Guard was getting.
On Sources of Funding
Some might claim that there are already plenty of sources of funding for Free Software. Some companies sponsor or make awards to projects or developers – this even happened to me last year after someone graciously suggested my work on imip-agent as being worthy of such an award – and there are ongoing programmes to encourage Free Software participation. Meanwhile, organisations exist within the Free Software realm that receive income and spend some of that income on improving Free Software offerings and to cover the costs of providing those works to the broader public.
The Python Software Foundation is an interesting example. It solicits sponsorship from organisations and individuals whilst also receiving substantial revenue from the North American Python conference series, PyCon. It would be irresponsible to just sit on all of this money, of course, and so some of it is awarded as grants or donations towards community activities. This is where it gets somewhat more interesting from the perspective of funding software, as opposed to funding activities around the software. Looking at the PSF board resolutions, one can get an idea of what the money is spent on, and having considered this previously, I made a script that tries to identify each of the different areas of spending. This is somewhat tricky due to the free-format nature of the source data, but the numbers are something like the following for 2016 resolutions:
Other software development: 19,398 (6%)
Conference software development: 10,440 (3%)
Clearly, PyCon North America relies heavily on the conference management software that has been developed for it over the years – arguably, everyone and their dog wants to write their own conference software instead of improving existing offerings – but let us consider that expenditure a necessity for the PSF and PyCon. That leaves grants for other software development activities accounting for only 6% of the money made available for community-related activities, and one of those four grants was for translation of existing content, not software, into another language.
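The script mentioned above is not reproduced here, but a categorisation pass over free-format resolution text might be sketched roughly like this; the keyword rules, category names and sample amounts are purely illustrative assumptions, not the PSF's actual data or the actual script:

```python
import re
from collections import defaultdict

# Illustrative keyword rules for bucketing free-format resolution text;
# checked in order, so more specific categories come first.
CATEGORIES = {
    "conference software development": ["conference software"],
    "other software development": ["development of"],
    "events": ["workshop", "sprint", "conference"],
}

def categorise(resolution):
    """Return the first category whose keywords match the text."""
    text = resolution.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "uncategorised"

def summarise(resolutions):
    """Sum dollar amounts like '$1,500' per category across all resolutions."""
    totals = defaultdict(int)
    for resolution in resolutions:
        match = re.search(r"\$([\d,]+)", resolution)
        amount = int(match.group(1).replace(",", "")) if match else 0
        totals[categorise(resolution)] += amount
    return dict(totals)
```

Real board resolutions do not follow any fixed grammar, which is exactly why such a script can only ever be approximate.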
Now, the PSF’s mission statement arguably emphasises promotion over other matters, with “Python-related open source projects” and “Python-related research” appearing at the end of the list of objectives. Not having been involved in the grant approval process during my time as a PSF member, I cannot say what the obstacles are to getting access to funds to develop Python-related Free Software, but it seems to me that organisations like this particular one are not likely to be playing central roles in broader and more sustainable funding for Free Software projects.
On Bounties and Donations
Outside the normal boundaries of employment, there are a few other ways that have been suggested for getting paid to write Free Software. Donations are what fund organisations like the PSF, ostensibly representing one or more projects, but some individual developers solicit donations themselves, as do smaller projects that have not reached the stage of needing a foundation or other formal entity. More often than not, however, people are just doing this to “keep the lights on”, or rather, keep the hosting costs down for their Web site that has to serve up their software to everyone. Generally, such “tip jar” funding doesn’t really allow anyone to spend very much, if any, paid time on their projects.
At one point in time it seemed that bounties would provide a mechanism whereby users could help improve software projects by offering money if certain tasks were performed, features implemented, shortcomings remedied, and so on. Groups of users could pool their donations to make certain tasks more attractive for developers to work on, thus using good old-fashioned market forces to set rewards according to demand and to give developers an incentive to participate. At least in theory.
There are quite a few problems with bounty marketplaces. First of all, the pooled rewards for a task may not be at all realistic for the work requested. This is often the case on less popular marketplaces, and it used to be the case on virtually every marketplace, although one might say that more money is making it into the more well-known marketplaces now. Nevertheless, one ends up seeing bounties like this one for a new garbage collector for LuaJIT currently offering no reward at all, bearing the following comment from one discussion participant: “This really important for our production workload.” If that comment, particularly being made in such a venue, was made without any money accompanying it, then I don’t think we would need to look much further for a summarising expression of the industry-wide attitude problem around Free Software funding.
Meanwhile, where the cash is piling up, it doesn’t mean that anyone can actually take the bounty off the table. This bounty for a specific architecture port of LuaJIT attracted $5000 from IBM but still isn’t awarded, despite several people supposedly taking it on. And that brings up the next problem: who gets the reward? Unless there is some kind of coordination, and especially if a bounty is growing into something that might be a few months salary for someone, people will compete instead of collaborate to try and get their hands on the whole pot. But if a task is large and complicated, and particularly if the participants are speculating and not necessarily sufficiently knowledgeable or experienced in the technologies to complete it, such competition will just leave a trail of abandonment and very little to show for all the effort that has been made.
Admittedly, some projects appear to be making good use of such marketplaces. For example, the organisation behind elementary OS seems to have some traffic in terms of awarded bounties, although it might be said that they appear to involve smaller tasks and not some of the larger requests that appear to linger for years, never getting resolved. Of course, things can change on the outside without bounties ever getting reviewed for their continued relevance. How many Thunderbird bounties are still relevant now? Seven years is a long time even for Thunderbird.
Given a particular project with plenty of spare money coming in from somewhere, I suppose that someone who happens to be “in the zone” to do the work on that project could manage to queue up enough bounty claims to make some kind of living. But if that “somewhere” is a steady and sizeable source of revenue, and given the “gig economy” nature of the working relationship, one has to wonder whether anyone managing to get by in this way isn’t actually being short-changed by the people making the real money. And that leads us to the uncomfortable feeling that such marketplaces, while appearing to reward developers, are just a tool to drive costs down by promoting informal working arrangements and pitting people against each other for work that only needs to be paid for once, no matter how many times it was actually done in parallel.
Of course, bounties don’t really address the issue of where funds come from, although bounty marketplaces might offer ways of soliciting donations and encouraging sponsorship from companies. But one might argue that all they add is a degree of flexibility to the process of passing on donations that does not necessarily serve the interests of the participating developers. Maybe this is why the bounty marketplace featured in the above links, Bountysource, has shifted focus somewhat to compete with the likes of Gratipay and Patreon, seeking to emphasise ongoing funding instead of one-off rewards.
One significant development in funding mechanisms over the last few years has been the emergence of crowd-funding. Most people familiar with this would probably summarise it as someone promising something in the form of a campaign, asking for money to do that thing, getting the money if some agreed criteria were met, and then hopefully delivering that promised thing. As some people have unfortunately discovered, the last part can sometimes be a problem, resulting in continuing reputational damage for various crowd-funding platforms.
Such platforms have traditionally been popular for things like creative works, arts, crafts, literature, and so on. But hardware has become a popular funding area as people try their hand at designing devices, sourcing materials, going and getting the devices made, delivering the goods, and dealing with all the logistical challenges these things entail. Probably relatively few of these kinds of crowd-funding campaigns go completely according to plan; things like warranties and support may take a back seat (or not even be in the vehicle at all). The cynical might claim that crowd-funding is a way of people shirking their responsibilities as a manufacturer and doing things on the cheap.
But as far as software is concerned, crowd-funding might avoid the more common pitfalls experienced in other domains. Software developers would merely need to deliver software, not master other areas of expertise on a steep learning curve (sourcing, manufacturing, logistics), and they would benefit from being able to deliver the crucial element of the campaign using the same medium as that involved in getting everyone signed up in the first place: the Internet. So there wouldn’t be the same kind of customs and shipment issues that appear to plague just about every other kind of campaign.
I admit that I do not maintain a sufficient overview when it comes to software-related crowd-funding or, indeed, crowd-funding in general. The two major software campaigns I am aware of are Mailpile and Roundcube Next. Although development has been done on Roundcube Next, and Mailpile is certainly under active development, neither managed to deliver a product within the anticipated schedule. Software development costs are notoriously difficult to estimate, and it is very possible that neither project asked for enough money to pursue their goals with enough resources for timely completion.
One might say that it is no good complaining about things not getting done quickly enough and that people should pitch in and help out. On the one hand, I agree that since such products are Free Software and are available even in their unfinished state, we should take advantage of this to produce a result that satisfies everybody. On the other hand, we cannot keep returning to the “everybody pitch in” approach all the time, particularly when the circumstances provoking such a refrain have come about when people have purposefully tried to cultivate a different, more sustainable way of developing Free Software.
So there it is, a short article that went long again! Hopefully, it provides some useful thoughts about the limitations of existing funding approaches and some clues that might lead to better funding approaches in future.
vanitasvitae's blog » englisch | 17:10, Monday, 27 March 2017
Recently, the German “Bundesnetzagentur” (the German Federal Network Agency) contacted over 100 developers of XMPP (Jabber) clients to ask them to register their “services”. This is justified by section 6 of the German Telecommunications Act. Clients like e.g. Xabber that work on a client-server principle are considered a “service” and therefore have to be registered. That’s why Redsolution, the developers of Xabber, received official mail, despite the fact that they are located in Russia.
What does this mean? Will every developer of a chat client have to register in the future? What about the people who take on the burden of running a free chat server? Does this also apply to computer games that include chat functionality? What about the countless other ways to communicate over the Internet that I am forgetting?
Why would the Bundesnetzagentur do this? Have they simply not thought long enough about it, or do they simply not know better? What and how do they want to regulate? Is this the beginning of the end of the open XMPP protocol? What about developers of e.g. IRC, Slack or Matrix clients? Do they have to fear being contacted too? Why are people who do not understand how the Internet works, or even what a client is, allowed to regulate this area? What can we do about it? And how can we raise awareness of the problem of incompetent officials?
Similarly, the “Kommission für Zulassung und Aufsicht” (ZAK, Commission for Authorization and Supervision) contacted the German Twitch and YouTube stars “PietSmiet”. In the opinion of the ZAK, everyone who streams to more than 500 viewers needs a fee-based license. Such a license costs between 1,000 and 10,000 euros.
Again, why are German officials so ignorant and hellbent on destroying the simple and free world wide web? What steps can we take to stand up against unnecessary regulations?
Questions about questions…
Sunday, 26 March 2017
Evaggelos Balaskas - System Engineer | 18:55, Sunday, 26 March 2017
Working with a VPS (Virtual Private Server) sometimes means that you don’t have a lot of memory.
That’s why we use a swap partition: a system partition that the Linux kernel uses as extended memory. It’s slow, but necessary when your system needs more memory than it has. Even if you don’t have a free disk partition, you can add a swap file to your Linux system.
Create the Swap File
[root@centos7] # dd if=/dev/zero of=/swapfile count=1000 bs=1MiB
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 3.62295 s, 289 MB/s
[root@centos7] # du -sh /swapfile
1.0G    /swapfile
That is a 1 GB file.
[root@centos7] # mkswap -L swapfs /swapfile
Setting up swapspace version 1, size = 1048572 KiB
LABEL=swapfs, UUID=d8af8f19-5578-4c8e-b2b1-3ff57edb71f9
[root@centos7] # chmod 0600 /swapfile
[root@centos7] # swapon /swapfile
# free
              total        used        free      shared  buff/cache   available
Mem:        1883716     1613952       79172       54612      190592       64668
Swap:       1023996           0     1023996
Now for the final step: to make the swap file persistent across reboots, we need to edit /etc/fstab and add the following line:
/swapfile swap swap defaults 0 0
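After a reboot (or right after the swapon above) you can confirm that the kernel is actually using the file by reading /proc/swaps. A quick sketch, assuming the /swapfile path used above:

```shell
# /proc/swaps lists one active swap area per line, after a header line.
cat /proc/swaps

# grep's exit status tells us whether our specific file is active.
if grep -q '^/swapfile[[:space:]]' /proc/swaps; then
    echo "swapfile is active"
else
    echo "swapfile is not active"
fi
```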
Friday, 24 March 2017
free software - Bits of Freedom | 09:29, Friday, 24 March 2017
It's not about the destination, it's about the journey.
This famous inspirational quote is at the core of an article which Benjamin Mako Hill wrote nearly seven years ago, where he argued that to build a truly free world we will not be well served, technically, pragmatically, or ethically, by compromising on the freedom of the tools we use.
While Benjamin spoke about the freedom of the tools used to build free and open source software, the more general question, which I asked in May 2016 is:
Is it legitimate to use proprietary software to further free and open source software?
Almost a year later, my answer is still: Yes, if that is indeed the purpose. When we go on a journey to get somewhere in life, and in society, we sometimes need to travel on unwanted paths, and proprietary software is certainly an unwanted path.
The problem with this is you sometimes get very comfortable on this unwanted path, especially if it offers you something more than the road you would otherwise take. But there's a caveat.
L'enfer est plein de bonnes volontés ou désirs (hell is full of good wishes or desires) - Saint Bernard of Clairvaux
More commonly, we tend to say that the road to hell is paved with good intentions. Using proprietary software, when the aim is freedom, is certainly a good intention, but it has the risk of backfiring.
On the other hand, anything we do carries a risk. For most anything we do, we weigh the risk of doing something against the advantages it may give us. In some cases, the advantages are so small that any risk isn't worth taking. In other cases, the advantages are significant, and a certain amount of risk is warranted.
We would love to have definite answers to the ethical questions in life, but ultimately, all we can say is: it depends. Your perspective is different from mine, and history will judge us not by what roads we take, but by the impact we have on society.
And as a community, we should definitely consider the consequence of our actions, we should prefer free and open source software whenever possible, but we should also be aware of the impact we have on society, and make sure the road we're on is actually making an impact.
If it's not, we may need to try a different road. Even one with proprietary parts. It would be a risk, but when you have weighed and eliminated the alternatives, whatever road remains, must be the right one. For you. At that point in time.
Rekado | 09:00, Friday, 24 March 2017
Introducing the actors
For the past few years I have been working on GNU Guix, a functional package manager. One of the most obvious benefits of a functional package manager is that it allows you to install any number of variants of a software package into a separate environment, without polluting any other environment on your system. This feature makes a lot of sense in the context of scientific computing where it may be necessary to use different versions or variants of applications and libraries for different projects.
Many programming languages come with their own package management facilities, which some users rely on despite their obvious limitations. In the case of GNU R, the built-in install.packages procedure makes it easy for users to quickly install packages from CRAN, and the devtools package extends this mechanism to install software from other sources, such as a git repository. Unfortunately, limitations in how binaries are executed and linked on GNU+Linux systems make it hard for people to continue to use the package installation facilities of the language when also using R from Guix on a distribution of the GNU system other than GuixSD.
Packages that are installed through install.packages are built on demand. Some of these packages provide bindings to other libraries, which may be available at the system level. When these bindings are built, R uses the compiler toolchain and the libraries the system provides. All software in Guix, on the other hand, is completely independent from any libraries the host system provides; this is a direct consequence of implementing functional package management. As a result, binaries from Guix are not binary-compatible with binaries built using system tools and linked with system libraries. In other words: due to the lack of a shared ABI between Guix binaries and system binaries, packages built with the system toolchain and linked with non-Guix libraries cannot be loaded into a process of a Guix binary (and vice versa).
Of course, this is not always a problem, because not all R packages provide bindings to other libraries; but the problem usually strikes with more complicated packages where using Guix makes a lot of sense as it covers the whole dependency graph.
Because of this nasty problem, which cannot be solved without a redesign of compiler toolchains and file formats, I have been recommending people to just use Guix for everything and avoid mixing software installation methods. Guix comes with many R packages, and for those that it doesn't include it has an importer for the CRAN and Bioconductor repositories, which makes it easy to create Guix package expressions for R packages. While this is certainly valid advice, it ignores the habits of long-time R users, who may be really attached to install.packages.
There is another way; you can have your cake and eat it too. The problem arises from using the incompatible libraries and toolchain provided by the operating system. So let's just not do this, mmkay? As long as we can make R from Guix use libraries and the compiler toolchain from Guix, we should not have any of these ABI problems when using install.packages.
Let's create an environment containing the current version of R, the GCC toolchain, and the GNU Fortran compiler with Guix. We could use guix environment --ad-hoc here, but it's better to use a separate profile for this:

$ guix package -p /path/to/.guix-profile -i r gcc-toolchain gfortran
To "enter" the profile I recommend using a sub-shell like this:
$ bash $ source /path/to/.guix-profile/etc/profile $ ? $ exit
When inside the sub-shell we see that we use both the GCC toolchain and R from Guix:

$ which gcc
/gnu/store/…-profile/bin/gcc
$ which R
/gnu/store/…-profile/bin/R
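Why does the sub-shell pick up the Guix tools? Sourcing the profile's etc/profile prepends the profile's bin directory to PATH, so its binaries shadow the system ones. A minimal sketch of that mechanism, using a throwaway directory in place of a real Guix profile (all paths here are hypothetical):

```shell
# A throwaway directory plays the role of a Guix profile's bin/ directory.
mkdir -p /tmp/demo-profile/bin
printf '#!/bin/sh\necho demo-gcc\n' > /tmp/demo-profile/bin/gcc
chmod +x /tmp/demo-profile/bin/gcc

# Prepending it to PATH makes its gcc shadow any system gcc,
# just like sourcing etc/profile does for a real profile.
PATH=/tmp/demo-profile/bin:$PATH
command -v gcc   # resolves to /tmp/demo-profile/bin/gcc
gcc              # prints: demo-gcc
```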
Note that this is a minimal profile; it contains the GCC toolchain with a linker that ensures that e.g. the GNU C library from Guix is used at link time. It does not actually contain any of the libraries you may need to build certain packages.
Take the R package "Cairo", which provides bindings to the Cairo rendering libraries, as an example. Trying to build it in this new environment will fail, because the Cairo libraries are not found. To provide the required libraries we exit the environment, install the Guix packages providing the libraries, and re-enter the environment.
$ exit
$ guix package -p /path/to/.guix-profile -i cairo libxt
$ bash
$ source /path/to/.guix-profile/etc/profile
$ R
> install.packages("Cairo")
…
* DONE (Cairo)
> library(Cairo)
>
Yay! This should work for any R package with bindings to any libraries that are in Guix. For this particular case you could have installed the r-cairo package with Guix directly, of course.
Potential problems and potential solutions
What happens if the system provides the required header files and libraries? Will the GCC toolchain from Guix use them? Yes. But that's okay, because it won't be able to compile and link the binaries anyway. When the files are provided by both Guix and the system, the toolchain prefers the Guix stuff.
It is possible to prevent the R process and all its children from ever seeing system libraries, but this requires the use of containers, which are not available on the somewhat older kernels commonly used in scientific computing environments. Guix provides support for containers, so if you use a modern Linux kernel on your GNU system you can avoid some confusion by using either guix environment --container or guix container. Check out the glorious manual.
Another problem is that the packages you build manually do not come with the benefits that Guix provides. This means, for example, that these packages won't be bit-reproducible. If you want bit-reproducible software environments: use Guix and don't look back.
- Don't mix Guix with system things to avoid ABI conflicts.
- If you use install.packages, let R from Guix use the GCC toolchain and libraries from Guix.
- We do this by installing the toolchain and all libraries we need into a separate Guix profile. R runs inside of that environment.
If you want to learn more about GNU Guix I recommend taking a look at the excellent GNU Guix project page, which offers links to talks, papers, and the manual. Feel free to contact me if you want to learn more about packaging scientific software for Guix. It is not difficult and we all can benefit from joining efforts in adopting this usable, dependable, hackable, and liberating platform for scientific computing with free software.
The Guix community is very friendly, supportive, responsive and welcoming. I encourage you to visit the project's IRC channel #guix on Freenode, where I go by the handle "rekado".
Thursday, 23 March 2017
Matthias Kirschner's Web log - fsfe | 06:56, Thursday, 23 March 2017
Open Whisper Systems is now offering its Signal secure messenger outside the Google Play store. This is an important step to make Signal available for Free Software users. Unfortunately, while you no longer need the proprietary Google Play Services installed on your phone, Signal still contains at least three proprietary libraries.
But if Signal is the only reason for you to have the proprietary Google Play installed, there is a way for you to get rid of that. Below I documented the steps required for installation without a Google account or Google Play.
First you need to download the Signal Android apk from their website and install it. As I have F-Droid installed as a system app, I had disabled the installation of apps from unknown sources for security reasons, so I first had to enable it again under Security -> Unknown sources.
As I did not find an easy way to check the SHA256 fingerprint on the phone before installation (if you know one, please let me know; otherwise there are some tools on the desktop), for testing I first installed the Signal Android apk. Afterwards, in case you have F-Droid as a system app like myself, you should again disable installation of apps from unknown sources.
Before you proceed you should check the SHA256 fingerprint. The easiest way to do that is to install Checkey from F-Droid (thanks to Torsten Grote for pointing that out). Now open Checkey and search for "Signal". Compare the SHA256 checksum with the one mentioned on the Signal download page. If the fingerprints are the same, you can proceed to set up Signal on your phone. If they do not match, do not proceed, as you might have a manipulated version of Signal.
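If you do have access to a desktop, the file checksum can also be verified there before copying the apk to the phone. A minimal sketch (the apk file name and the expected value are placeholders; take the real checksum from the Signal download page):

```shell
# verify_apk APK EXPECTED_SHA256
# Succeeds only when the file's SHA256 matches the published checksum.
verify_apk() {
    actual=$(sha256sum "$1" | cut -d' ' -f1)
    [ "$actual" = "$2" ]
}

# Usage (placeholder values):
#   verify_apk Signal-website-release.apk <checksum from signal.org> \
#       && echo "ok to install"
```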
Today I saw that the Android Signal apk is using its own updater, so you will get a notification if there is an update available. In that case, you should again first enable installation of apps from unknown sources, do the update, and then disable it again.
Hopefully there will be a solution in future to use Signal without a Google account which does not require to enable/disable installation of apps from unknown sources. A dedicated F-Droid repository for Signal could be such a solution.
Most importantly, I hope in the future we will have fully reproducible Signal builds (the Java part is already reproducible) which are completely Free Software.
If you are interested in discussions about Free Software on Android, join FSFE's android mailing list.
Wednesday, 22 March 2017
Elena ``of Valhalla'' | 06:32, Wednesday, 22 March 2017
XMPP VirtualHosts, SRV records and letsencrypt certificates
When I set up my XMPP server, a friend of mine asked if I was willing to have a virtualhost with his domain on my server, using the same address as the email.
Setting up prosody and the SRV record on the DNS was quite easy, but then we stumbled on the issue of certificates: of course we would like to use letsencrypt, but as far as we know that would mean setting up something custom so that the certificate gets renewed on his server and then sent to mine, and that looks like more of a hassle than him just setting up his own prosody/ejabberd on his server.
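For reference, the easy part of the setup boils down to two pieces: SRV records in the friend's DNS zone pointing at the hosting machine, and a VirtualHost entry in prosody. A sketch with placeholder names (friend.example.org is the virtual host, xmpp.example.net the machine actually running prosody):

```
; DNS zone of friend.example.org (placeholder names)
_xmpp-client._tcp.friend.example.org. 3600 IN SRV 0 5 5222 xmpp.example.net.
_xmpp-server._tcp.friend.example.org. 3600 IN SRV 0 5 5269 xmpp.example.net.

-- prosody.cfg.lua on xmpp.example.net; this VirtualHost needs a
-- certificate valid for friend.example.org, which is the hard part.
VirtualHost "friend.example.org"
    ssl = {
        certificate = "/etc/prosody/certs/friend.example.org.crt";
        key = "/etc/prosody/certs/friend.example.org.key";
    }
```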
So I was wondering: dear lazyweb, did any of you have the same issue and already came up with a solution that is easy to implement and trivial to maintain that we missed?
Henri Bergius | 00:00, Wednesday, 22 March 2017
Last fall we bought Flowhub and other flow-based programming assets from The Grid, and now after some paperwork we’re up and running as Flowhub UG, a company registered in Berlin, Germany.
We’re now selling the new Pro and Supporter plans, and can also provide a dedicated on-premise version of Flowhub. Please check out our Plans for more information:
Read more in the latest Kickstarter update.
Monday, 20 March 2017
Henri Bergius | 00:00, Monday, 20 March 2017
If you’re using RabbitMQ for distributing work to background dynos hosted by Heroku, GuvScale can monitor the queues for you and scale the number of workers up and down automatically.
This gives two big benefits:
- Consistent processing times by scaling dynos up to meet peak load
- Cost savings by reducing idle dynos. Don’t pay for computing capacity you don’t need
We originally built the guv tool back in 2015, and it has been used since by The Grid to manage their computationally intensive AI tasks. At The Grid we’ve had GuvScale make hundreds of thousands of scaling operations per month, running background dynos at more than 90% efficiency.
This has meant being able to produce sites at a consistent, predictable throughput regardless of how many users publish things at the same time, as well as not having to pay for idle cloud machines.
There are many frameworks for splitting computational loads out of your main web process and into background dynos. If you’re working with Ruby, you’ve probably heard of Sidekiq. For Python there is Celery. And there is our MsgFlo flow-based programming framework for a more polyglot approach.
If you’re already using one of these with RabbitMQ on Heroku (for example via the CloudAMQP service), you’re ready to start autoscaling with GuvScale!
First enable the GuvScale add-on:
$ heroku addons:create guvscale
Next you’ll need to create an OAuth token so that GuvScale can perform scaling operations for your app. Do this with the Heroku CLI tool. First install the authorization add-on:
$ heroku plugins:install heroku-cli-oauth
Then create a token:
$ heroku authorizations:create --description "GuvScale"
Copy the authentication token, and paste it to the GuvScale dashboard that you can access from your app’s Resources tab in Heroku Dashboard.
Here is an example for scaling a process that sends emails on the background:
emailsender:           # Dyno role to scale
  queue: "send-email"  # The AMQP queue name
  deadline: 180        # 3 minutes, in seconds
  minimum: 0           # Minimum number of workers
  maximum: 5           # Maximum number of workers
  concurrency: 10      # How many messages are processed concurrently
  processing: 0.300    # 300 ms, in seconds
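To get an intuition for how these parameters interact: at 0.3 seconds of processing per message, one worker drains 600 messages within the 180-second deadline, so the queue length determines the worker count, clamped between minimum and maximum. A rough sketch of that calculation (an illustration of the idea only, not GuvScale's actual algorithm; concurrency is ignored for simplicity):

```shell
# workers(queue_length) ~= ceil(queue_length * processing / deadline),
# clamped to [minimum, maximum]. Values match the example config above.
workers_needed() {
    awk -v q="$1" -v p=0.300 -v d=180 -v min=0 -v max=5 'BEGIN {
        w = q * p / d
        w = (w == int(w)) ? w : int(w) + 1   # ceiling
        if (w < min) w = min
        if (w > max) w = max
        print w
    }'
}

workers_needed 100    # 100 * 0.3 / 180 = 0.17 -> 1 worker
workers_needed 6000   # would need 10 workers, clamped to the maximum of 5
```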
Once GuvScale has been configured you can monitor its behavior in the dashboard.
Read more in the Heroku Dev Center GuvScale tutorial.
GuvScale is free during the public beta. Get started now!