Planet Fellowship (en)

Friday, 24 March 2017

Using proprietary software for freedom

free software - Bits of Freedom | 09:29, Friday, 24 March 2017


It's not about the destination, it's about the journey.

This famous inspirational quote is at the core of an article which Benjamin Mako Hill wrote nearly seven years ago where he argued that to build a truly free world, we will not be well served, technically, pragmatically, or ethically, by compromising on freedom of the tools we use.

While Benjamin spoke about the freedom of the tools used to build free and open source software, the more general question, which I asked in May 2016, is:

Is it legitimate to use proprietary software to further free and open source software?

Almost a year later, my answer is still: Yes, if that is indeed the purpose. When we go on a journey to get somewhere in life, and in society, we sometimes need to travel on unwanted paths, and proprietary software is certainly an unwanted path.

The problem with this is that you sometimes get very comfortable on this unwanted path, especially if it offers you something more than the road you would otherwise take. But there's a caveat.

L'enfer est plein de bonnes volontés ou désirs (hell is full of good wishes or desires) - Saint Bernard of Clairvaux

More commonly, we tend to say that the road to hell is paved with good intentions. Using proprietary software, when the aim is freedom, is certainly a good intention, but it has the risk of backfiring.

On the other hand, anything we do carries a risk. For most anything we do, we weigh the risk of doing something against the advantages it may give us. In some cases, the advantages are so small that any risk isn't worth taking. In other cases, the advantages are significant, and a certain amount of risk is warranted.

We would love to have definite answers to the ethical questions in life, but ultimately, all we can say is: it depends. Your perspective is different than mine, and history will judge us not based on what roads we take, but by the impact we have on society.

And as a community, we should definitely consider the consequences of our actions. We should prefer free and open source software whenever possible, but we should also be aware of the impact we have on society, and make sure the road we're on is actually making an impact.

If it's not, we may need to try a different road. Even one with proprietary parts. It would be a risk, but when you have weighed and eliminated the alternatives, whatever road remains, must be the right one. For you. At that point in time.

Using R with Guix

Rekado | 09:00, Friday, 24 March 2017

Introducing the actors

For the past few years I have been working on GNU Guix, a functional package manager. One of the most obvious benefits of a functional package manager is that it allows you to install any number of variants of a software package into a separate environment, without polluting any other environment on your system. This feature makes a lot of sense in the context of scientific computing where it may be necessary to use different versions or variants of applications and libraries for different projects.

Many programming languages come with their own package management facilities that some users rely on despite their obvious limitations. In the case of GNU R, the built-in install.packages procedure makes it easy for users to quickly install packages from CRAN, and the third-party devtools package extends this mechanism to install software from other sources, such as a git repository.

ABI incompatibilities

Unfortunately, limitations in how binaries are executed and linked on GNU+Linux systems make it hard for people to keep using the package installation facilities of the language when also using R from Guix on a distribution of the GNU system other than GuixSD. Packages that are installed through install.packages are built on demand. Some of these packages provide bindings to other libraries, which may be available at the system level. When these bindings are built, R uses the compiler toolchain and the libraries the system provides. All software in Guix, on the other hand, is completely independent of any libraries the host system provides; this is a direct consequence of implementing functional package management. As a result, binaries from Guix are not binary-compatible with binaries built using system tools and linked with system libraries. In other words: due to the lack of a shared ABI between Guix binaries and system binaries, packages built with the system toolchain and linked with non-Guix libraries cannot be loaded into a process of a Guix binary (and vice versa).

Of course, this is not always a problem, because not all R packages provide bindings to other libraries; but the problem usually strikes with more complicated packages where using Guix makes a lot of sense as it covers the whole dependency graph.

Because of this nasty problem, which cannot be solved without a redesign of compiler toolchains and file formats, I have been recommending that people just use Guix for everything and avoid mixing software installation methods. Guix comes with many R packages, and for those that it doesn't include it has an importer for the CRAN and Bioconductor repositories, which makes it easy to create Guix package expressions for R packages. While this is certainly valid advice, it ignores the habits of long-time R users, who may be really attached to install.packages or devtools.
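To give a rough idea of the importer mentioned above (a hedged sketch; the exact output depends on your Guix version and the package's metadata), generating a package expression for a CRAN package looks roughly like this:

$ # Print a Guix package expression for the CRAN package "Cairo"
$ guix import cran Cairo

The printed expression can then be dropped into a package module or a directory on GUIX_PACKAGE_PATH and refined by hand.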

Schroedinger's Cake

There is another way; you can have your cake and eat it too. The problem arises from using the incompatible libraries and toolchain provided by the operating system. So let's just not do this, mmkay? As long as we can make R from Guix use libraries and the compiler toolchain from Guix we should not have any of these ABI problems when using install.packages.

Let's create an environment containing the current version of R, the GCC toolchain, and the GNU Fortran compiler with Guix. We could use guix environment --ad-hoc here, but it's better to use a persistent profile.

$ guix package -p /path/to/.guix-profile \
    -i r gcc-toolchain gfortran

To "enter" the profile I recommend using a sub-shell like this:

$ bash
$ source /path/to/.guix-profile/etc/profile
$ …
$ exit

When inside the sub-shell we see that we use both the GCC toolchain and R from Guix:

$ which gcc
/gnu/store/…-profile/bin/gcc
$ which R
/gnu/store/…-profile/bin/R

Note that this is a minimal profile; it contains the GCC toolchain with a linker that ensures that e.g. the GNU C library from Guix is used at link time. It does not actually contain any of the libraries you may need to build certain packages.

Take the R package "Cairo", which provides bindings to the Cairo rendering libraries, as an example. Trying to build it in this new environment will fail, because the Cairo libraries are not found. To provide the required libraries we exit the environment, install the Guix packages providing the libraries, and re-enter the environment.

$ exit
$ guix package -p /path/to/.guix-profile -i cairo libxt
$ bash
$ source /path/to/.guix-profile/etc/profile
$ R
> install.packages("Cairo")
…
 * DONE (Cairo)
> library(Cairo)
>

Yay! This should work for any R package with bindings to any libraries that are in Guix. For this particular case you could have installed the r-cairo package using Guix, of course.

Potential problems and potential solutions

What happens if the system provides the required header files and libraries? Will the GCC toolchain from Guix use them? Yes. But that's okay, because it won't be able to compile and link the binaries anyway. When the files are provided by both Guix and the system, the toolchain prefers the Guix stuff.

It is possible to prevent the R process and all its children from ever seeing system libraries, but this requires the use of containers, which are not available on somewhat older kernels that are commonly used in scientific computing environments. Guix provides support for containers, so if you use a modern Linux kernel on your GNU system you can avoid some confusion by using either guix environment --container or guix container. Check out the glorious manual.
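As a hedged sketch of what that might look like on a suitably recent kernel (reusing the same packages as in the profile above), a fully isolated environment could be spawned like this:

$ # Spawn an isolated container; --network shares the host network so CRAN stays reachable
$ guix environment --container --network --ad-hoc r gcc-toolchain gfortran cairo libxt
$ R
> install.packages("Cairo")

Inside such a container the system's headers and libraries are simply not visible, so there is nothing to pick up by accident.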

Another problem is that the packages you build manually do not come with the benefits that Guix provides. This means, for example, that these packages won't be bit-reproducible. If you want bit-reproducible software environments: use Guix and don't look back.

Summary

  • Don't mix Guix with system things, to avoid ABI conflicts.
  • If you use install.packages, let R from Guix use the GCC toolchain and libraries from Guix.
  • We do this by installing the toolchain and all libraries we need into a separate Guix profile. R runs inside of that environment.

Learn more!

If you want to learn more about GNU Guix I recommend taking a look at the excellent GNU Guix project page, which offers links to talks, papers, and the manual. Feel free to contact me if you want to learn more about packaging scientific software for Guix. It is not difficult and we all can benefit from joining efforts in adopting this usable, dependable, hackable, and liberating platform for scientific computing with free software.

The Guix community is very friendly, supportive, responsive and welcoming. I encourage you to visit the project’s IRC channel #guix on Freenode, where I go by the handle “rekado”.

Thursday, 23 March 2017

Testing Signal without Google account

Matthias Kirschner's Web log - fsfe | 06:56, Thursday, 23 March 2017

Open Whisper Systems is now offering its Signal secure messenger outside the Google Play store. This is an important step to make Signal available for Free Software users. Unfortunately, while you no longer need the proprietary Google Play Services installed on your phone, Signal still contains at least three proprietary libraries.

But if Signal is the only reason for you to have the proprietary Google Play installed, there is a way for you to get rid of it. Below I have documented the steps required for installation without a Google account or Google Play.

Signal Danger Zone

First you need to download the Signal Android apk from their website and install it. As I have F-Droid installed as a system app, I had disabled the installation of apps from unknown sources by default for security reasons. So I first had to enable Security -> Unknown sources.

As I did not find an easy way to check the SHA256 fingerprint on the phone before installation (if you know one, please let me know; otherwise there are some tools on the desktop), for testing I first installed the Signal Android apk. Afterwards, in case you have F-Droid as a system app like myself, you should again disable installation of apps from unknown sources.
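As a hedged example of such a desktop check (the apk filename below is just a placeholder, and you should compare whichever value the Signal download page actually lists), you can inspect the downloaded file before copying it to the phone:

$ # Checksum of the apk file itself
$ sha256sum Signal-website-release.apk
$ # Fingerprints of the apk's signing certificate, via the JDK's keytool
$ keytool -printcert -jarfile Signal-website-release.apk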

Before you proceed you should check the SHA256 fingerprint. The easiest way to do that on the phone is to install Checkey from F-Droid (thanks to Torsten Grote for pointing that out). Now open Checkey and search for "Signal". Compare the SHA256 checksum with the one mentioned on the Signal download page. If the fingerprints are the same, you can proceed to set up Signal on your phone. If they do not match, do not proceed, as you might have a manipulated version of Signal.

Today I saw that the Android Signal apk uses its own updater, so you will get a notification when an update is available. In that case, you should again first enable installation of apps from unknown sources, do the update, and then disable it again.

Hopefully there will be a solution in the future to use Signal without a Google account which does not require enabling and disabling the installation of apps from unknown sources. A dedicated F-Droid repository for Signal could be such a solution.

Most importantly, I hope that in the future we will have fully reproducible Signal builds (the Java part is already reproducible), which are completely Free Software.

If you are interested in discussions about Free Software on Android, join FSFE's android mailing list.

Wednesday, 22 March 2017

XMPP VirtualHosts, SRV records and letsencrypt certificates

Elena ``of Valhalla'' | 06:32, Wednesday, 22 March 2017


When I set up my XMPP server, a friend of mine asked if I was willing to host a virtualhost with his domain on my server, using the same address as his email.

Setting up prosody and the SRV record on the DNS was quite easy, but then we stumbled on the issue of certificates: of course we would like to use letsencrypt, but as far as we know that means we would have to set up something custom so that the certificate gets renewed on his server and then sent to mine, and that looks like more of a hassle than just him setting up his own prosody/ejabberd on his server.
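For context, the SRV part is just a pair of records along these lines (a hedged sketch; example.org and xmpp.example.net are placeholders for his domain and my server):

; XMPP client and server-to-server SRV records for the virtualhost
_xmpp-client._tcp.example.org. 3600 IN SRV 0 5 5222 xmpp.example.net.
_xmpp-server._tcp.example.org. 3600 IN SRV 0 5 5269 xmpp.example.net.

The certificate question is the part that remains open.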

So I was wondering: dear lazyweb, did any of you have the same issue and already came up with a solution that is easy to implement and trivial to maintain that we missed?

Introducing Flowhub UG - the Flow-Based Programming company

Henri Bergius | 00:00, Wednesday, 22 March 2017

Flowhub — the product made possible by our successful NoFlo Kickstarter — now has its own company dedicated to supporting and improving the visual programming environment.

Last fall we bought Flowhub and other flow-based programming assets from The Grid, and now, after some paperwork, we’re up and running as Flowhub UG, a company registered in Berlin, Germany.

We’re now selling the new Pro and Supporter plans, and can also provide a dedicated on-premise version of Flowhub. Please check out our Plans for more information:

Flowhub Plans

Read more in the latest Kickstarter update.

Monday, 20 March 2017

GuvScale - Autoscaling for Heroku worker dynos

Henri Bergius | 00:00, Monday, 20 March 2017

I’m happy to announce that GuvScale — our service for autoscaling Heroku background worker dynos — is now available in a public beta.

If you’re using RabbitMQ for distributing work to background dynos hosted by Heroku, GuvScale can monitor the queues for you and scale the number of workers up and down automatically.

This gives two big benefits:

  • Consistent processing times by scaling dynos up to meet peak load
  • Cost savings by reducing idle dynos. Don’t pay for computing capacity you don’t need

GuvScale on Heroku Elements

We originally built the guv tool back in 2015, and it has since been used by The Grid to manage their computationally intensive AI tasks. At The Grid we’ve had GuvScale perform hundreds of thousands of scaling operations per month, running background dynos at more than 90% efficiency.

This has meant being able to produce sites at a consistent, predictable throughput regardless of how many users publish things at the same time, as well as not having to pay for idle cloud machines.

Getting started

There are many frameworks for splitting computational loads out of your main web process and into background dynos. If you’re working with Ruby, you’ve probably heard of Sidekiq. For Python there is Celery. And there is our MsgFlo flow-based programming framework for a more polyglot approach.

If you’re already using one of these with RabbitMQ on Heroku (for example via the CloudAMQP service), you’re ready to start autoscaling with GuvScale!

First enable the GuvScale add-on:

$ heroku addons:create guvscale

Next you’ll need to create an OAuth token so that GuvScale can perform scaling operations for your app. Do this with the Heroku CLI tool. First install the OAuth plugin:

$ heroku plugins:install heroku-cli-oauth

Then create a token:

$ heroku authorizations:create --description "GuvScale"

Copy the authentication token and paste it into the GuvScale dashboard, which you can access from your app’s Resources tab in the Heroku Dashboard.

Once GuvScale has an OAuth token, it is ready to do scaling for you. You’ll have to provide some scaling rules, either in the GuvScale dashboard, or via the heroku-guvscale CLI tool.

Here is an example for scaling a process that sends emails on the background:

emailsender:          # Dyno role to scale
  queue: "send-email" # The AMQP queue name
  deadline: 180       # 3 minutes, in seconds
  minimum: 0          # Minimum number of workers
  maximum: 5          # Maximum number of workers
  concurrency: 10     # How many messages are processed concurrently
  processing: 0.300   # 300 ms, in seconds

Once GuvScale has been configured you can monitor its behavior in the dashboard.

Workload and scaling operations

Read more in the Heroku Dev Center GuvScale tutorial.

GuvScale is free during the public beta. Get started now!


Sunday, 19 March 2017

The illustrated children's guide to Kubernetes

Norbert Tretkowski | 07:14, Sunday, 19 March 2017

Video: The Illustrated Children's Guide to Kubernetes (https://www.youtube.com/embed/Q4W8Z-D-gcQ)

Friday, 17 March 2017

Making Free Software Work for Everybody

Paul Boddie's Free Software-related blog » English | 21:56, Friday, 17 March 2017

Another week and another perfect storm of articles and opinions. This time, we start with Jonas Öberg’s “How Free Software is Failing the Users“, where he notes that users don’t always get the opportunity to exercise their rights to improve Free Software. I agree with many of the things Jonas says, but he omits an important factor that is perhaps worth thinking about when reading some of the other articles. Maybe he will return to it in a later article, but I will discuss it here.

Let us consider the other articles. Alanna Irving of Open Collective wrote an interview with Jason Miller about project “maintainer burnout” entitled “Preact: Shattering the Perception that Open Source Must be Free“. It’s worth noting here that Open Collective appears to be a venture capital funded platform with similar goals to the more community-led Gratipay and Liberapay, which are funding platforms that enable people to get others to fund them to do ongoing work. Nolan Lawson of Microsoft describes the demands of volunteer-driven “open source” in “What it feels like to be an open-source maintainer“. In “Life of free software project“, Michal Čihař writes about his own experiences maintaining projects and trying to attract contributions and funding.

When reading about “open source”, one encounters some common themes over and over again: that Free Software (which is almost always referenced as “open source” when these themes are raised) must be free as in cost, and that people volunteer to work on such software in their own time or without any financial reward, often for fun or for the technical challenge. Of course, Free Software has never been about the cost. It probably doesn’t help that the word “free” can communicate the meaning of zero cost, but “free as in freedom” usually gets articulated very early on in any explanation of the concept of Free Software to newcomers.

Even “open source” isn’t about the cost, either. But the “open source” movement started out by differentiating itself from the Free Software movement by advocating the efficiency of producing Free Software instead of emphasising the matter of personal control over such software and the freedom it gives the users of such software. Indeed, the Open Source Initiative tells us this in its mission description:

Open source enables a development method for software that harnesses the power of distributed peer review and transparency of process. The promise of open source is higher quality, better reliability, greater flexibility, lower cost, and an end to predatory vendor lock-in.

It makes Free Software – well, “open source” – sound like a great way of realising business efficiencies rather than being an ethical choice. And with this comes the notion that when it comes to developing software, a brigade of pixies on the Internet will happily work hard to deliver a quality product that can be acquired for free, thus saving businesses money on software licence and development costs.

Thus, everybody now has to work against this perception of no-cost, made-by-magic Free Software. Jonas writes, “I know how to bake bread, but oftentimes I choose to buy bread instead.” Unfortunately, thanks to the idea that the pixies will always be on hand to fix our computers or to make new things, we now have the equivalent of bakers being asked to bake bread for nothing. (Let us ignore the specifics of the analogy here: in some markets it isn’t exactly lucrative to run a bakery, either.)

Jason Miller makes some reasonable observations as he tries to “shatter” this perception. Sadly, as it seems to be with all these funding platforms, there is some way to go. With perhaps one or two exceptions, even the most generously supported projects appear to be drawing a fraction of a single salary as donations or contributions, and it would seem that things like meet-ups and hackerspaces attract funding more readily. I guess that when there are tangible expenses – rental costs, consumables, power and network bills – people are happy to pay such externally-imposed costs. When it comes to valuing the work done by someone, even if one can quote “market rates” and document that person’s hours, everyone can argue about whether it was “really worth that amount”.

Michal Čihař notes…

But the most important thing is to persuade people and companies to give back. You know there are lot of companies relying on your project, but how to make them fund the project? I really don’t know, I still struggle with this as I don’t want to be too pushy in asking for money, but I’d really like to see them to give back.

Sadly, we live in an age of free stuff. If it looks like a project is stalling because of a lack of investment, many people and businesses will look elsewhere instead of stepping up and contributing. Indeed, this is when you see those people and businesses approaching the developers of other projects, telling those developers that they really want to use their project but it perhaps isn’t yet “good enough” for “the enterprise” or for “professional use” and maybe if it only did this and this, then they would use it, and wouldn’t that give it the credibility it clearly didn’t have before? (Even if there are lots of satisfied existing users and that this supposed absence of credibility purely exists in the minds of those shopping around for something else to use.) Oh, and crucially, how about doing the work to make it “good enough” for us for nothing? Thank you very much.

It is in this way that independent Free Software projects are kept marginalised, remaining viable enough to survive (mostly thanks to volunteer effort) but weakened by being continually discarded in favour of something else as soon as a better “deal” can be made and another group of pixies exploited. Such projects are thereby ill-equipped to deal with aggressive proprietary competitors. When fancy features are paraded by proprietary software vendors in front of decision-makers in organisations that should be choosing Free Software, advocates of free and open solutions may struggle to persuade those decision-makers that Free Software solutions can step in and do what they need.

Playing projects against each other to see which pixies will work the hardest, making developers indulge in competitions to see who can license their code the most permissively (to “reach more people”, you understand), portraying Free Software development as some kind of way of showcasing developers’ skills to potential employers (while really just making them unpaid interns on an indefinite basis) are all examples of the opportunistic underinvestment in Free Software which ultimately just creates opportunities for proprietary software. And it also goes a long way to undermining the viability of the profession in an era when we apparently don’t have enough programmers.

So that was a long rant about the plight of developers, but what does this have to do with the users? Well, first of all, users need to realise that the things they use do not cost nothing to make. Of course, in this age of free stuff (as in stuff that costs no money), they can decide that some program or service just doesn’t “do it for them” any more and switch to a shinier, better thing, but that isn’t made of pixie dust either. All of the free stuff has other hidden costs in terms of diminished privacy, increased surveillance and tracking, dubious data security, possible misuse of their property, and the discovery that certain things that they appear to own weren’t really their property all along.

Users do need to be able to engage with Free Software projects within the conventions of those projects, of course. But they also need the option of saying, “I think this could be better in this regard, but I am not the one to improve it.” And that may need the accompanying option: “Here is some money to pay someone to do it.” Free Software was always about giving control to the users but not necessarily demanding that the users become developers (or even technical writers, designers, artists, and so on). A user benefits from their office suite or drawing application being Free Software not only in situations where the user has the technical knowledge to apply it to the software, but they also benefit when they can hand the software to someone else and get them to fix or improve it instead. And, yes, that may well involve money changing hands.

Those of us who talk about Free Software and not “open source” don’t need reminding that it is about freedom and not about things being free of charge. But the users, whether they are individuals or organisations, end-users or other developers, may need reminding that you never really get something for nothing, especially when that particular something is actually a rather expensive thing to produce. Giving them the opportunity to cover some of that expense, and not just at “tip jar” levels, might actually help make Free Software work, not just for users, not just for developers, but as a consequence of empowering both groups, for everybody.

Policy, why's that?

free software - Bits of Freedom | 10:13, Friday, 17 March 2017


In November 2016, Bruce Schneier wrote about the need for increased regulation in the software sphere.

If we want to secure our increasingly computerized and connected world, we need more government involvement in the security of the "Internet of Things" and increased regulation of what are now critical and life-threatening technologies. It's no longer a question of if, it's a question of when. - Bruce Schneier

I may have my qualms about the Internet of Things, but it's impossible to shy away from the fact that while software development has so far been relatively free of government intervention and regulation, society's increased reliance on software makes it impossible for governments not to act. They will act in the interest of security and critical societal functions, for consumer protection, to maintain market competition, or for any other good and valid reason.

The software community by and large has been reluctant to embrace any sort of regulation of itself, and the free and open source software community is no different. Warranties and liabilities are routinely disclaimed and limited, with the understanding that if someone were liable for the code they write, they would be much less willing to write it and even less willing to license it openly.

That may be true, but as Schneier puts it: it's no longer a question of if, it's a question of when. As free and open source software gets into health care, automotive and other safety-critical areas, it moves further into areas of software regulation, and those areas will only expand over time.

To meet this need for regulation, we must ensure regulatory agencies and governments are getting the support and knowledge they need to create good policies and regulation, rather than bad ones.

Advertisement: This is something the FSFE is already doing, but we could use your support to do more.

Our policy work needs to cover two critical areas: we need to work more with governments and local municipalities to encourage the uptake of policies friendly to free and open source software in the procurement and development of IT systems. There is little reason for any government function today to rely on proprietary software for its core functions. When developing or procuring new systems, governments should make free and open source software the default, and install the necessary oversight to ensure this happens.

But this alone doesn't address the increased regulation in the software space. We also need to work with governments and regulatory agencies across the board to make sure that when and as they consider regulation of IT, free and open source software is taken into account and our ability to keep supplying and developing free and open source software is guaranteed.

We need to do this in any area which can be touched by regulation. No stone is too small to turn over: bank and financial regulation, food safety and security, occupational safety, environmental protection, telecommunication, automotive regulation, and so on and so forth.

Regulation can sometimes have unintended consequences, especially when it comes from areas where we did not expect it.

Thursday, 16 March 2017

Bosch Connected Experience: Eclipse Hono and MsgFlo

Henri Bergius | 00:00, Thursday, 16 March 2017

I’ve been attending the Bosch Connected Experience IoT hackathon this week at Station Berlin. Bosch brought a lot of different devices to the event, all connected to send telemetry to Eclipse Hono. To make them more discoverable and to enable rapid prototyping, I decided to expose them all to Flowhub via the MsgFlo distributed FBP runtime.

The result is msgflo-hono, a tool that discovers devices from the Hono backend and exposes them as foreign participants in a MsgFlo network.

BCX Open Hack

This means that when you connect Flowhub to your MsgFlo coordinator, all connected devices appear there, with a port for each sensor they expose. And since this is MsgFlo, you can easily pipe their telemetry data to any Node.js, Python, Rust, or other program.

Hackathon project

Since this is a hackathon, there is a competition for projects made at the event. To make the Hono-to-MsgFlo connectivity and Flowhub's visual programming capabilities more demoable, I ended up hacking together a quick example project: a Bosch XDK-controlled air theremin.

Video: https://www.youtube.com/embed/ziQmFjXYE3c

This comes in three parts. First of all, we have the XDK exposed as a MsgFlo participant and connected to a NoFlo graph running on Node.js.

Hono telemetry on MsgFlo

The NoFlo graph starts a web server and forwards the telemetry data to a WebSocket client.

NoFlo websocket server

Then we have a forked version of Vilson’s webaudio theremin that uses the telemetry received via WebSockets to make sound.

NoFlo air theremin

The whole setup seems to work pretty well. The XDK is connected to WiFi here and transmits its telemetry to a Hono instance running on AWS. This data gets forwarded to the MsgFlo MQTT network, and from there via WebSocket to a browser. And all of these steps can be debugged and experimented with in a visual way.

Relevant links:

Update: we won the Open Hack Challenge award for technical brilliance with this project.

BCX entrance

Tuesday, 14 March 2017

Building Greenland’s new data infrastructure as free software

agger's Free Software blog | 10:50, Tuesday, 14 March 2017

My company Magenta ApS is currently developing a data distributor infrastructure to handle all public data in Greenland and specifically to ensure their distribution between local authorities and the central government. I’m not personally involved in the development (though I might be at a later point, depending on the project’s needs), but I helped estimate and write the bid. The data distributor must meet quite high security and performance standards and will, as required by law, store data bitemporally according to the Danish standards for public data. As Greenland is a country of 2 million km² with a population of only 56,000, the system will be geographically quite distributed, and connectivity can be a problem, a challenge the system must also be able to handle.

The government of Greenland did not require that their new data infrastructure be free software, but Magenta always delivers software under a free license, and we won the bid. The software will run on Microsoft Windows, since GNU/Linux skills can’t yet be reliably found in Greenland; it will be coded in a platform-agnostic way, using Java and Python/Django, so it could be switched to a GNU/Linux system at a later point, either for the government of Greenland or for possible new customers for this infrastructure.

As described on the EU’s free software observatory:

Next generation, open source based Public Records system for Greenlandic Agency for Digitisation

The government of Greenland wants to overhaul its current Grunddata (public records) system. According to the country’s digitalisation agency, one of the aims is to make it easier to share data between public administrations, businesses and citizens.

The modernisation should also increase public sector efficiency by streamlining processes and deduplicating entries. The new system should also help to avoid requests for data that is already present in the public administration systems.

The new system is to provide high quality data, while passing on savings, and creating opportunities for growth and innovation, the Greenlandic Agency for Digitisation writes.

For Magenta, this is one of the largest orders in our history, and creating a new data infrastructure for an entire country as free software is an important opportunity – and responsibility. We’re looking forward to delivering on this order, and we hope to keep working with the government of Greenland for years to come.

Friday, 10 March 2017

How Free Software is Failing the Users

free software - Bits of Freedom | 11:28, Friday, 10 March 2017


A few months ago, I wrote in essence that I'm a user of the Linux kernel, and I don't want to fix my shitty broken computer. I want my vendor to work with the Linux developer community to get fixes into the mainline kernel, so I don't have to fix my shitty broken computer.

I was reminded of this as I was reading Adam Hyde's book The Cabbage Tree Method. In his book, Adam makes the claim that free and open source is broken -- it works for developers, but has failed its users. When a developer has a problem, they have the necessary skills to fix that problem. Users don't.

While we often claim that everyone can be involved in free and open source software without needing to code (and many are, in various roles), we've all too often fallen into the thinking that in order to contribute, you must possess the skill to develop software.

If you listen to some of the public discourse at the moment, well outside of our own bubble, there's a feeling in society that everyone needs to learn how to program a computer! Computer programming is introduced at ever younger ages in and out of school. Hacker dads and moms are putting on Coder Dojos to teach children about programming. Raspberry Pis are all around us and in many children's rooms.

It's easy to think the future is one where programming has developed in such a way that everyone knows how to program, and everyone can contribute to fixing the software which runs our society. We can certainly make programming easier. And there's definitely value in learning programming for everyone. But it doesn't mean everyone should be a programmer. It doesn't mean everyone should or will contribute to fixing our software.

When I wrote that I don't want to fix my shitty broken computer, I've actively made the decision I'm not going to do the work to fix it. I know how to code, that's not the problem. I just don't think that I should be the one doing it.

I make decisions like that every day, as I'm sure you do. I know how to bake bread, but oftentimes I choose to buy bread instead. I usually know how to repair my car if it breaks, but I often choose not to do it myself. I know how to fix the software on my computer, but I choose not to.

The freedoms which free and open source software gives me are still relevant, even if I don't exercise them, because they are important for someone else. And they're important for the company from whom I get the hardware and software I use. But they're only indirectly relevant to me.

So for as long as the only contribution one can make to free and open source software requires the ability to code, free and open source software will fail its users. I said before that there are plenty of people contributing skills to free and open source software other than coding. That's true: there are designers, translators, writers, and a lot more.

But those tasks don't help me, as a user, get my problem fixed. I can't fix a broken device driver by writing better documentation or designing new icons for the software.

What we need are ways in which those of us who choose not to code can contribute to getting the bugs we encounter fixed. This requires a lot of changes, from the tools we use to interact (which are too technical) to the way we communicate about free and open source software contributions (where we tend to emphasize code contributions).

If we don't find a way to include even people like me in the process of fixing a bug, then free and open source software will remain a tool for developers only. If we want to make sure free and open source software gives freedom to the end user, we need to give the end user a chance to use their freedoms and be included in the work.

If we don't, if we don't manage to include users in the work of developing software, then I fear the freedoms we want for developers will never reach the users.

FSFE Information booth at Veganmania indoor festival 2017

FSFE Fellowship Vienna » English | 10:11, Friday, 10 March 2017

Conversations at the information booth
Ongoing in-depth consultations at the information booth
Three days of non-stop action at the information booth

On the weekend of 24 to 26 February, the first indoor Veganmania festival took place at Wiener Stadthalle in Vienna. At the first celebration of the 20th anniversary of this yearly event (two more celebrations to come) we once again had a very successful booth. And it was even bigger than in the years before.

Our booth was placed directly opposite the entrance. Our neighbours were the vegan running team on the left and VGT, the most active animal rights/animal welfare organisation in Austria, on the right. Therefore we couldn’t have asked for a better spot. A further advantage was the fact that this event was indoors. This way we could use our roll-up and many posters we normally can’t use when we do booths outside.

Confirming our experience from previous years, we can only state once more that the Veganmania festivals are exceptionally good events for having FSFE booths. During the three days we only had rather short breaks from talking to people, to take photos or to get something from the many stalls with delicious vegan food. On Friday we started at about 3pm and carried on until 10pm. On Saturday, the longest day, we started at about 10am and went right through until 10pm. Sunday was a little shorter again, since we started at 10am but had packed up by about 6pm.

Since in past years the Veganmania always took place in a prominent shopping street, we thought that most of our visitors were just there by chance. But this year showed that this is not the case, as we were in a concert venue with no shoppers passing by. It can’t be denied that Veganmanias really do attract thousands of people.

As usual we had plenty of opportunities to introduce the concept of free software to people who hadn’t come across it before, and many showed serious interest. Our red I love free software balloons made kids happy until we didn’t have any left. Some people even decided to set up their home computers with free software a few days later. Of course we were happy to assist them with any questions or help they asked for.

Over the years we have become a well trained team of volunteers on our stall. For some strange reason we all seem to share the odd trait of not actually wanting to leave the stall for breaks. Therefore, most of the time we had at least two, but often even three activists at once who were engaged in discussions with all kinds of people. I am very happy that Gregor and Martin supported me again. Without Gregor’s bicycle trailer it would have been hard to get all our material to the venue. Martin is an absolute treasure, as I hardly gave him any warning. I called him the day of the event and he joined us almost immediately, not leaving the stall (during opening hours) until Sunday evening. Martin actually is someone I first met at our Veganmania information stall a few years ago. Since then he has become a very reliable, competent and always friendly backbone of our public outreach work. I can’t thank my colleagues enough for their patience and commitment.

We were able to convince local activists from other organisations to bring us some leaflets to complete our rich assortment of information materials. We had stickers from the local Chaos Computer Club group and from epicenter.works, a very productive data protection association. On Friday I realised that we could really do with some additional leaflets that we had had in the past but which had run out. So I updated these leaflets in the evening and went to a print shop on Saturday morning in order to have them for the rest of the weekend. Unfortunately we didn’t get around to printing another batch of our well received free games folder in time. But we will organize this soon, in time for the local Linux week’s events and two further Veganmanias.

More

A few days later I realised that the local group System Change not Climate Change offers workshops called Skills 4 change. These include teaching people about Scribus, Inkscape and the GIMP. I went there and offered to do joint workshops going into more detail. Their original workshop didn’t include much practical training, because they covered theoretical design know-how at the beginning and gave introductions to all three programs in just one afternoon. I hope we will be able to do proper workshops with practical examples soon.

This is a good addition to the basic computer security workshops I’ve held at the VGT, mostly explaining why free software is the only way to go when aiming for trustworthy computer systems.

asCallback: embedding NoFlo graphs in JavaScript programs

Henri Bergius | 00:00, Friday, 10 March 2017

It has always been easy to wrap existing JavaScript code into NoFlo graphs — just write a component that exposes its functionality via some ports.

Going the other way and exposing a NoFlo graph to JS land was also possible but cumbersome. With NoFlo 0.8.3 we now made it super easy with the new asCallback API.

Let’s say you have a graph (or just a component) with the typical IN, OUT and ERROR ports. You can wrap it like this:

// Load the NoFlo module
var noflo = require('noflo');

// Use NoFlo's asCallback helper to prepare a JS function that wraps the graph
var wrappedGraph = noflo.asCallback('myproject/MyComponent', {
  // Provide the project base directory where NoFlo seeks graphs and components
  baseDir: '/path/to/my/project'
});

// Call the wrapped graph. Can be done multiple times
wrappedGraph({
  // Provide data to be sent to inports
  in: 'foo'
}, function(err, result) {
  // If component sent to its error port, then we'll have err
  if (err) { throw err; }
  // Do something with the results
  console.log(result.out);
});

If the graph has multiple inports, you can provide each of them a value in that input object. Similarly, the results sent to each outport will be in the result object. If a port sent multiple packets, their values will be in an array.
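As a hedged sketch of what that looks like with several ports (the graph name myproject/Combine and its name, greeting and out ports are made up for illustration):

// Load the NoFlo module
var noflo = require('noflo');

// Wrap a hypothetical graph with two inports ("name", "greeting") and one outport ("out")
var combine = noflo.asCallback('myproject/Combine', {
  baseDir: '/path/to/my/project'
});

// Each inport gets its own key in the input object
combine({
  name: 'World',
  greeting: 'Hello'
}, function(err, result) {
  if (err) { throw err; }
  // Each outport that received packets shows up as a key in the result object
  console.log(result.out);
});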

Use cases

With asCallback, you can implement parts of your system as NoFlo graphs without having to jump in all the way. Even with a normal Node.js or client-side JS application it is easy to see places where NoFlo fits in nicely:

  • Making complex Express.js or Redux middleware chains manageable
  • Adding customizable workflows to some part of the system
  • Implementing Extract, Transform, Load (ETL) pipelines
  • Porting a system into NoFlo piece-by-piece
  • Exposing a pure-JS API for an NPM module built in NoFlo

It also makes it easier to test NoFlo graphs and components using standard, non-FBP-aware testing tools like Mocha.

Network lifecycle

Each invocation of a callbackified NoFlo graph creates its own Network instance. The function collects result packets until the network finishes, and then calls its callback.

Since these networks are isolated, you can call the function as many times as you like without any risk of packet collisions.

Promises

Since the callbackified NoFlo graph looks like a typical Node.js function, you can use all the normal flow control mechanisms like Promises or async with it.

For example, you can convert it to a Promise either manually or with a library like Bluebird:

// Load Bluebird
var Promise = require('bluebird');

// Convert the wrapped function into a Promise
var promisedGraph = Promise.promisify(wrappedGraph);

// Run it
promisedGraph({
  in: 'baz'
})
.then (function (result) {
  console.log(result.out);
});

This functionality is available right now on NPM.

Tuesday, 07 March 2017

Fix Realtek HD Audio Issues on Windows 10

Marcus's Blog | 18:02, Tuesday, 07 March 2017

Hell, it has been a heck of a time since I last blogged about Windows (if I ever did), but this one pissed me off a lot, and maybe it helps others.

After upgrading my X1 Carbon (1st Gen) from Windows 7 to 10, the sound was a real mess. It was overtuned, and all that came out of the internal speakers was crackling noise. I tried the drivers from Windows that had been applied automatically and also switched to the official Lenovo drivers. Even the latest upstream Realtek driver did not fix the problem.


After searching a lot through the wide open spaces of this thing called the Internet, I finally found this Support Page from Microsoft.

In the Check Device Manager section it states that one has to change the driver to High Definition Audio Device. And it is called exactly that. There are other drivers available that sound similar but won’t work.

So finally I am now again able to enjoy the whole content of the Web a bit more 😉

Okular table selection mode is amazing

TSDgeos' blog | 19:17, Tuesday, 07 March 2017

In case you didn't know ;)


Okular has an amazing table selection mode where you select an area and Okular will auto-detect rows and columns in it (you can fine-tune them afterwards), and then you can directly copy&paste into a spreadsheet :)

It's mostly tested on PDF files, but should work the same on any of the formats for which we support text extraction.

Update: This feature is not new, I just got to use it today ;) Video at https://www.youtube.com/watch?v=E8XWI06tltY

Monday, 06 March 2017

Recent Atlas Improvements

Iain R. Learmonth | 14:22, Monday, 06 March 2017

This post was originally posted to the Tor Project blog. If you would like to comment on this post, please do so there.


Atlas is a web application to learn about currently running Tor relays and bridges. You can search by fingerprint, nickname, country, flags and contact information and be returned information about its advertised bandwidth, uptime, exit policies and more.

I’m taking this opportunity to introduce myself. I’m Iain R. Learmonth, or just irl on IRC. I began contributing to Atlas in June last year, and I’m currently serving as the maintainer for Atlas. We have made some usability improvements to Atlas recently that we are happy to share with you today.

Thanks to Raphael and the anonymous contributors for their help in producing patches. We will continue to work through the open tickets, and if you have a feature you would like to see, or spot something not working quite correctly, please do feel free to open a ticket. If you would like to contribute to fixing some of our existing tickets, we have a new guide for contributing to Atlas.

Improved Error Handling:

  • Added a new message to warn users when the Onionoo backend is unavailable [#18081]
  • Added a new message for the case where Onionoo is serving outdated data [#20374]
  • No longer attempts to display AS or geolocation information when it's not available [#18989]

UX Improvements:

  • Added tooltips to give descriptions of the meaning for flags [#9913]
  • Made it easy to distinguish between "alleged" and "effective" family [#20382]
  • Removed the graphs for which the data backend will never have any data [#19553]
  • Graphs that have no data, but which may have data in the future, now give a "No Data Available" message [#21430]
  • Relay and bridge fingerprints will now wrap when on smaller screens [#12685]
  • Tooltips are repositioned to avoid them being clipped off on smaller displays [#21398]

Standards Compliance:

  • Now HTML 5 compliant according to the W3C Validator (including generated HTML) [#21274]

Thursday, 02 March 2017

Unbox your Dropbox? Forget about TransIP Stack (for now)

Blog – Think. Innovation. | 14:27, Thursday, 02 March 2017

Do you want to get rid of privacy-invading centralized Silicon Valley Cloud services like Dropbox, Google Calendar and Google Documents? Me too! However, that is easier said than done.

The functionality and user experience of these services is so great that replacing them with something privacy-respecting, decentralised and preferably Open Source is no easy task.

And in my case, I use Dropbox and Google very intensively for both my work and private life, so I want a smooth transition that is guaranteed to be working and not lose or corrupt my data.

The result is that over the past years I did discuss the dilemma with some people and always had the idea of De-Googlifying and Un-Dropboxing in the back of my mind, but never really found the time or had the guts to take the plunge.

But recently some people directed me towards “Stack”: a file synchronisation cloud service by the successful hosting provider TransIP. I checked out their website and offer and got excited: this looked solid, professional, easy and a big plus for me: done by a Dutch company. And the free 1 TB they give, helps as well! 😉

So over the past few weeks I gave Stack a try and communicated frequently with the TransIP helpdesk. My Dropbox use case: I currently have a little over 200 GB in stored files in Dropbox, containing both private files like holiday pictures and work files like project documents, contracts and administration.

Pretty much all the digital files of my life are in there. And the convenience of always having access to them on 1 place on my laptop, via the web and on my smartphone is a great benefit.

To cut a long story short: forget about Stack for now as a real replacement for Dropbox.

Summarizing, the reasons are:

1. The Linux Client app is regularly unresponsive (resulting in the Wait-Quit window) and takes 100% CPU power, resulting in a lagging laptop. The Dropbox app never gives these problems.

2. Stack can apparently not sync all files: a few days of uploading resulted in several dozen notifications saying a file could not be uploaded, indicating that the details can be found in the log. Dropbox does not complain about file syncing, it just does it.

3. It is not possible to recover which files were not synced and fix them file by file: the log containing the non-synced files actually only exists as the Sync tab in the Client app. As soon as that app closes (see 1), the information is lost.

At 2 I should add that the helpdesk helped out, explaining that hidden file syncing is off by default and how I can turn it on, and that the complete path and file name should not exceed 148 characters. At 3 I should add that the helpdesk informed me about a way to run the Client app so that the complete log would be available for later review (but by then many log entries were already lost).

During the conversation that I had with them, the TransIP helpdesk told me that Stack is based on OwnCloud. Something that would have been good to know from the start, so I could investigate the merits of OwnCloud and let that weigh in on my decision.

Interestingly, at the beginning of this process the TransIP helpdesk indicated that Stack is indeed a good replacement for Dropbox and how I could just point the Stack client app to the Dropbox folder.

But after exchanging several messages with them about various problems, they confirmed that Stack is not a good replacement yet for Dropbox. TransIP informed me that they are working on their own software to replace OwnCloud in Stack.

I hope this is not proprietary software, because I believe it will be very hard for a small company like TransIP to beat Dropbox at their own game. I believe collaborating with other companies to solve this problem and continuing the Open Source work of NextCloud or a similar initiative has a much better chance of succeeding, at a much lower price and with a much larger impact.

As final notes, the TransIP helpdesk has been great at assisting me in this process, they have been very responsive, transparent and showed expertise on the subject.

And I did not greatly enjoy writing this blog post; I would much rather have written a raving review of how Stack replaced Dropbox for good!

Which service to try next?

– Diderik van Wingerden

This post first appeared on Think. Innovation.

NoFlo 0.8 is out now

Henri Bergius | 00:00, Thursday, 02 March 2017

After several months of work, NoFlo 0.8 is finally out as a stable release. This release is important in that it brings the Process API for NoFlo components to general availability, paving the way for the 1.x series.

Process API

We introduced the Process API in the 0.7 series last summer, but at that stage it wasn’t deemed good enough for production use. There were issues with features like bracket forwarding, as well as API convenience questions that we’ve since tackled. From NoFlo 0.8 onwards, Process API is the recommended way to write all components.

Here are some of the major features coming to Process API with 0.8:

  • hasData and getData methods for dealing with data packets in the inports
  • hasStream and getStream methods for dealing with complete streams of IPs in the inports
  • Calling output.done is now required to signal when component is done processing
  • Automatic bracket forwarding from and to multiple ports as defined in forwardBrackets configuration
  • Support for receiving packets from addressable ports using the [portname, idx] syntax

More information on how to use the API for writing NoFlo components can be found from the Process API guide I wrote back in January.
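As a rough illustration of these methods (a hedged sketch only; the component below is made up and simply uppercases strings), a Process API component looks along these lines:

var noflo = require('noflo');

exports.getComponent = function() {
  var c = new noflo.Component();
  c.inPorts.add('in', { datatype: 'string' });
  c.outPorts.add('out', { datatype: 'string' });
  c.process(function(input, output) {
    // Wait until a full data packet has arrived on the inport
    if (!input.hasData('in')) { return; }
    var data = input.getData('in');
    // Send the result and signal that this activation is done
    output.sendDone({ out: data.toUpperCase() });
  });
  return c;
};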

Big thanks yet again to Vladimir Sibirov for his work pushing Process API to production!

WirePattern

Before Process API, WirePattern was the preferred convenience method used for writing NoFlo components. It made it easy to synchronize data from multiple inports, and to manage lifecycle of asynchronous components. Process API was largely designed to address the learnings we’ve had from the years using WirePattern in production, both the conveniences and the pitfalls.

Since the vast majority of current open source NoFlo components use WirePattern, it wasn’t feasible to simply go and deprecate the API. Instead, what we did in the 0.8 cycle was port WirePattern to actually run on top of Process API.

So, when you update to NoFlo 0.8, all WirePattern components will automatically switch to using Process API internally. This means that existing components and graphs should stay working as they always did, except now fully compatible with features like scope isolation, bracket forwarding, and the new network lifecycle.

In 0.8 series we still also ship the original WirePattern implementation, which can be enabled in two ways:

  • Passing legacy: true as an option to the WirePattern function. This will cause that component to use the legacy WirePattern implementation
  • Setting NOFLO_WIREPATTERN_LEGACY environment variable to force all WirePattern components to use legacy mode

There should be no reason to use the legacy mode. If you find a backwards compatibility issue forcing you to do so in your projects, please file an issue.
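If you do need it temporarily, here is a hedged sketch of opting a single component into legacy mode. The component and port names are made up, and the async handler signature follows the classic WirePattern convention, so verify it against your NoFlo version before relying on it:

var noflo = require('noflo');

exports.getComponent = function () {
  var c = new noflo.Component();
  c.inPorts.add('in', { datatype: 'all' });
  c.outPorts.add('out', { datatype: 'all' });

  // legacy: true forces this one component onto the original WirePattern
  // implementation instead of the Process API port (see the list above)
  return noflo.helpers.WirePattern(c, {
    in: 'in',
    out: 'out',
    async: true,
    legacy: true
  }, function (payload, groups, out, callback) {
    out.send(payload);
    callback();
  });
};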

Component and network lifecycle

Another area of focus for 0.8 was the network lifecycle. The legacy component API in NoFlo had no way for components to tell when they’re done processing data, and hence the network wasn’t able to determine accurately when it was finished.

With Process API we give components a much better way to handle this — when you’re done processing, call output.done(). Until then the component is expected to be doing work. When all components have deactivated, the network is considered finished:

NoFlo program lifecycle

To support the lifecycle better, we also made both component and network start-up and shutdown asynchronous with callbacks. This ensures every node in a NoFlo network can do everything it needs to initialize or clean up at the right stage of the flow.

The network methods are:

  • network.start(callback) to start the network. This starts all components and sends the Initial Information Packets
  • network.stop(callback) to stop the network. This calls all components to shut down, closes connections between them, and clears any in-flight inport buffers

If your component needs to do anything special at start-up or shutdown, the new methods it can provide are:

  • component.setUp(callback) called when the network starts. The component can do self-initialization but should not send anything at this stage
  • component.tearDown(callback) called when the network stops. Stateful components can clean up their state at this point, and generators should stop generating (remove event listeners, shut down socket connections, etc)

The tearDown method replaces the old shutdown method. While the shutdown method still gets called, all components should migrate to the async-capable tearDown in this release cycle.
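As an illustration, a small stateful component using these hooks might look roughly like the following sketch; the cache is just an example of per-run state and the names are hypothetical:

var noflo = require('noflo');

exports.getComponent = function () {
  var c = new noflo.Component();
  c.inPorts.add('in', { datatype: 'string' });
  c.outPorts.add('out', { datatype: 'string' });

  // Called when the network starts: initialize state, don't send anything yet
  c.setUp = function (callback) {
    c.cache = {};
    callback();
  };
  // Called when the network stops: clean the state up again
  c.tearDown = function (callback) {
    c.cache = null;
    callback();
  };

  c.process(function (input, output) {
    if (!input.hasData('in')) {
      return;
    }
    var key = input.getData('in');
    c.cache[key] = true;
    output.sendDone({ out: key });
  });
  return c;
};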

Deprecated features

The 0.8 series adds deprecation warnings to features that will be removed in the eventual NoFlo 1.x release. You can find a full list in the ChangeLog, but here are some notable ones:

  • Synchronous WirePattern usage (switch to async: true or port to Process API)
  • noflo.AsyncComponent and noflo.helpers.MapComponent (should be ported to Process API)
  • noflo.InPort legacy packet methods process, receive and contains
  • Legacy noflo.Port and noflo.ArrayPort implementations. Use the modern noflo.InPort and noflo.OutPort instead
  • component.error and component.fail methods. Use WirePattern or Process API error callback instead

By default using a deprecated feature only logs a warning. If you want to make these fatal, set the NOFLO_FATAL_DEPRECATED environment variable. This can be useful for Continuous Integration.
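For example, a Node.js test entry point could set the flag programmatically before NoFlo is loaded; exporting the variable in your CI configuration works just as well (this is a sketch, not taken from the NoFlo test suite):

// Make deprecated NoFlo API usage fatal instead of just a warning,
// as described in the release notes above
process.env.NOFLO_FATAL_DEPRECATED = 'true';
var noflo = require('noflo');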

Getting 0.8

NoFlo 0.8.x releases can be found on NPM.

  • For Node.js projects, the noflo-nodejs command-line runner has been updated to 0.8
  • For browser projects, noflo-browser-app scaffolding has been updated to 0.8

As I write this, the hosted browser runtime is still on 0.7. We will hopefully get it updated shortly.

Friday, 03 March 2017

Okular Form Field auto-updating (Work In Progress)

TSDgeos' blog | 00:24, Friday, 03 March 2017

You can see it in https://www.youtube.com/watch?v=fCLFkpaW3Ug

As the description of the YouTube video says:

Form 14 updates from Form 13 values as defined by the PDF file.
There's a few bugs left:
* To make the page contents update i need to edit another form in the page of the form that is being auto updated
* The contents of the "editable" Form are not updated. (The form is actually not editable since it's readonly)

There is also a pile of uncommitted and unreviewed patches, and it probably only works for very simple files like this one, but it's a start :)


Update: It works fine now and everything has been committed :) https://www.youtube.com/watch?v=S-zmHc3WUhs

Friday, 24 February 2017

Building NoFlo browser applications with webpack

Henri Bergius | 00:00, Friday, 24 February 2017

I was looking at some of the Stack Overflow noflo questions yesterday, and there were a few related to building NoFlo for the browser. This made me realize we haven’t really talked about the major change we made to browser builds recently: webpack.

Originally NoFlo was designed to only run on Node.js — the name itself is a portmanteau for Node.js Flow. But in 2013 we added support for Component.js to package and run NoFlo and its dependencies also on the browser.

This enabled lots of exciting things to happen: NoFlo example projects people could run simply by opening them in the browser, and building full-fledged client applications like Flowhub in NoFlo.

Unfortunately Component.js foundered and was eventually deprecated. Luckily others were picking up the ball — Browserify and webpack were created to fulfill a similar purpose, the latter picking up a lot of momentum. In summer 2016 we jumped on the bandwagon and updated NoFlo’s browser build to webpack.

Compared to Component.js, this gave several advantages:

  • Separating module installation and builds — allowing us to also distribute the browser modules via NPM
  • No need to maintain separate component.json manifests on top of the existing package.json
  • Pluggable loaders providing a nicer way to deal with CoffeeScript, .fbp, ES2015, and much more
  • Code splitting for cleaner and more modular builds

And of course all other benefits of a thriving ecosystem. There are tons of plugins and tutorials out there, and new features and capabilities are added constantly.

The build process

In a nutshell, making a browser build of a NoFlo project involves the following steps:

  1. Find all installed browser-compatible components using the fbp-manifest tool
  2. Generate a custom NoFlo component loader that requires all the component files and registers them for NoFlo to load
  3. Configure and run webpack with the application entry point, replacing NoFlo’s standard component loader with the generated custom one

To automate these steps we have grunt-noflo-browser, a plugin for the Grunt task runner that we use for build automation.
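As a rough sketch, a Gruntfile wiring this up could look like the following. The debug option comes up later in this post; the file mapping from package.json to the bundle is an assumption based on my own setup, so check the grunt-noflo-browser README for the authoritative options:

// Gruntfile.js
module.exports = function (grunt) {
  grunt.initConfig({
    noflo_browser: {
      build: {
        options: {
          // keep the WebRTC live-debugging runtime enabled
          debug: true
        },
        files: {
          // build the whole project, starting from its package.json manifest
          'browser/app.js': ['package.json']
        }
      }
    }
  });
  grunt.loadNpmTasks('grunt-noflo-browser');
  grunt.registerTask('build', ['noflo_browser']);
};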

Runtime annotations

Since we’re using NPM packages for distributing NoFlo modules for both Node.js and the browser, it is important to be able to communicate which platforms a component works with.

By default, we assume all components and graphs in a module work on both platforms. For many typical NoFlo cases this is true. However, some components may use interfaces that only work on one platform or the other — for example webcam or accelerometer access on the browser, or starting a web server on Node.js.

JSON graph files already have a standardized property for the environment information. But for textual files like components or .fbp graphs, we added support for runtime annotations via a @runtime comment string:

# This component only works on browser
# @runtime noflo-browser

or

# This component only works on Node.js
# @runtime noflo-nodejs

We also support a @name annotation for naming a component differently from its filename. This is useful if you want to provide the same component interface on both platforms, but need separate implementations for each.
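For example, a hypothetical browser-specific implementation of a shared ReadFile component (the file and component names here are purely illustrative) could carry annotations like these:

# Browser implementation, stored e.g. as ReadFileBrowser.coffee
# @name ReadFile
# @runtime noflo-browser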

Project scaffolding

For faster project setup, we have a template for creating NoFlo browser applications: noflo-browser-app. To use it, follow these steps:

  • Fork the project
  • Import the repository in Flowhub
  • Make changes, synchronize to GitHub
  • If you need additional modules, use npm install --add

The project contains a fully working build setup, including Travis CI compatible test automation. If you supply Travis CI with a GitHub access token, all tagged versions of your app will automatically get published to GitHub Pages.

The default setup enables live-debugging the app in Flowhub via a WebRTC connection:

WebRTC debugging a NoFlo browser app

If you want to disable the WebRTC runtime option, simply pass a debug: false option to grunt-noflo-browser.

More details on using the template can be found in the project README.

Node.js bundles

In addition to building browser-runnable bundles of NoFlo projects, the grunt-noflo-browser plugin can — despite its name — also be used for building Node.js runnable bundles.

While a typical Node.js NoFlo project doesn’t require a build step, bundling everything into a single file has its advantages: it can reduce process start-up time drastically, especially in embedded devices with slow storage.

This is quite useful for example when building interactive installations with NoFlo. To make a Node.js build, add the following build configuration options:

noflo_browser:
  options:
    webpack:
      target: 'node'
      node:
        __dirname: true

In addition you’ll want to define any native NPM modules as webpack externals.
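Building on the snippet above, declaring a native module as an external could look roughly like this; serialport is only a stand-in for whatever native dependency your project actually has, and the 'commonjs' prefix is webpack's usual way of leaving the require() call intact in a Node.js bundle:

noflo_browser:
  options:
    webpack:
      target: 'node'
      node:
        __dirname: true
      externals:
        serialport: 'commonjs serialport'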

Current status

We’re using this setup in production with some Node.js applications, all browser-capable NoFlo modules, as well as the Flowhub web app.

If you want to get started, simply fork the noflo-browser-app repo and start drawing graphs!

Tuesday, 21 February 2017

Untangling the duality of Free Software and Open Source

free software - Bits of Freedom | 22:09, Tuesday, 21 February 2017

Untangling the duality of Free Software and Open Source

Back in December, John Mark Walker wrote an article on the relation between Free Software and Open Source. In his view, conflating Free Software and Open Source "is to undermine beliefs that are fundamental to free software and associated movement." The comments on his article revealed that some think differently: that Free Software and Open Source are inherently the same, and indeed, the FSFE has often made the point of treating the two as synonyms.

In this post, I will attempt to untangle the situation a bit more, and elaborate on why I believe talking about Open Source makes absolute sense, why you should do it with Free Software in mind, and why we still need people to talk about Free Software.

"Open" has entrenched itself as a term for transparency and processes where multiple authors' combined efforts lead to a result. There are fundamental differences between Open Innovation and Open Source. Between Open Data and Open Educational Resources. Between Open Research and Open Hardware. Between Open Access and Open Government.

But within each of their fields, these terms have taken on a meaning of their own, and using them is a prerequisite to having a useful dialogue in those areas. An important part of any education is to learn the terminology used in a field. Words have the power to include and exclude. When a lawyer talks about habeas corpus, a craftsman talks about the need for a "2 by 4", or your plumber talks about a tank cross, they are excluding those who are not familiar with the terminology used in the trade.

In the computing industry today, Open Source has become the de facto term for Free Software. Many who've come into our movement in the last couple of years don't even know the term Free Software. Speaking about Free Software, even if we mean it as a synonym for Open Source, directly puts us at odds with parts of the community.

Talking about Open Source makes sense.

But there's something missing. Ever since the first days when the term Open Source was coined, Richard Stallman has made the point that Open Source misses the point of Free Software. I believe he's right. There are certainly parts of the Open Source ecosystem I'm not terribly happy about.

I do believe we could do more about end user freedoms, and I believe locking end users out from the ability to exercise the four freedoms in the hardware and software they're using is damaging. I believe we need to push the borders of freedom, to challenge ourselves to adopt more copyleft software, and to tell people why society as a whole benefits.

In essence, even when talking about Open Source, we need to push the borders. We need to make sure to include the thinking that went into this from the very first days of the movement, long before the term Open Source was coined.

We need to keep the ideals of the Free Software movement in mind, even when talking about Open Source.

But doing this is tricky if you don't actually know what the goals are. When I'm learning something new, I need my teacher to approach me at the level where I currently am in my studies. Not with the idea of stopping at that level, but to be able to bring me up to where I need to be. And I need someone to show me where that is.

In Open Source, we need someone to remind us about the ideals of the Free Software movement. About the philosophy that helped found this movement in the early 1980s.

That is, beyond a doubt, the role of the Free Software Foundation. To show us what we should be aiming for in our work. And in that role, it makes perfect sense to talk about Free Software.

We need people to talk about Free Software.

Where does this leave us in terms of Open Source and Free Software? Are they synonyms? Are they antonyms? Are there different philosophies behind them? You would probably get as many answers as there are people.

We haven't seen an end of this debate in the last 18 years and I doubt we ever will. But I do believe we've seen the debate become less important. As more individuals, governments and organisations adopt free and open source software, it becomes less important how they refer to it, and more important how they act as members of the community. Organisations like the FSF, embodying the philosophy of the Free Software movement, have an important role to play in this.

Monday, 20 February 2017

Three new FOSS umbrella organisations in Europe

Hook’s Humble Homepage | 22:00, Monday, 20 February 2017

Last year, three new umbrella organisations for free and open-source software (and hardware) projects emerged in Europe. Their aim is to cater to the needs of the community by providing a legal entity for projects to join, leaving the projects free to focus on technical and community tasks. These organisations (Public Software CIC, [The Commons Conservancy], and the Center for the Cultivation of Technology) will take on the overhead of actually running a legal entity themselves.

Among other services, they offer to handle donations, accounting, grants, legal compliance, or even complex governance for the projects that join them. In my opinion (and, seemingly, theirs) such services are useful to these kinds of projects; some of the options that these three organisations bring to the table are quite interesting and inventive.

The problem

As a FOSS or OSHW project grows, it is likely to reach a point where it requires a legal entity for better operation – whether to gather donations, pay for development, handle finances, organise events, increase license predictability and flexibility by consolidating rights, help with better governance, or for other reasons. For example, when a project starts to hold assets – domain names, trade marks, or even just receives money through donations – that should not be the responsibility of one single person, but should, instead, be handled by a legal entity that aligns with the project’s goals. A better idea is to have an entity to take over this tedious, but needed, overhead from the project and let the contributors simply carry on with their work.

So far, the options available to a project are either to establish its own organisation or to join an existing organisation, neither of which may fit well for the project. The existing organisations are either specialised in a specific technology or one of the few technology-neutral umbrella organisations in the US, such as Software in the Public Interest, the Apache Software Foundation, or the Software Freedom Conservancy (SFC). If there is already a technology-specific organisation (e.g. GNOME Foundation, KDE e.V., Plone Foundation) that fits a project’s needs, that may well make a good match.

The problem with setting up a separate organisation is that it takes ongoing time and effort that would much better be spent on the project’s actual goals. This goes double and quadruple for running it and meeting with the annual official obligations – filling out tax forms, proper reporting, making sure everything is in line with internal rules as well as laws, and so on. To make matters worse, failure to do so might result in personal liability for the project leaders that can easily reach thousands or tens of thousands of euros or US dollars.

Cross-border donations are tricky to handle, can be expensive if a currency change is needed, and are rarely tax-deductible. If a project has most of its community in Europe, it would make sense to use a European legal entity.

What is common between all three new European organisations is that none demand a specific outbound license for the projects they manage (as opposed to the Apache Software Foundation, for example), as long as it falls under one of the generally accepted free and open licenses. The organisations must also have internal rules that bind them to act in the public interest (which is the closest approximation to FOSS you can get when it comes to government authorities). Where they differ is the set of services they offer and how much governance oversight they provide.

Public Software CIC

Public Software CIC incorporated in February 2016 as a UK-based Community Interest Company. It is a fiduciary sponsor and administrative service provider for free and open source projects – what it calls public software – in Europe.

While it is not for profit, a Community Interest Company (CIC) is not a charity organisation; the other two new organisations are charities. In the opinion of Public Software’s founders, the tax-deductibility that comes with a charitable status does not deliver benefits that outweigh the limitations such a status brings for smaller projects. Tax recovery on cross-border charitable donations is hard and expensive even where it is possible. Another typical issue with charities is that even when for-profit activities (e.g. selling T-shirts) are allowed, these are throttled by law and require more complex accounting – this situation holds true both for most European charities and for US 501(c)(3) charitable organisations.

Because Public Software CIC is not a charity, it is allowed to trade and has to pay taxes if it has a profit at the end of its tax year. But as Simon Phipps, one of the two directors, explained at a panel at QtCon / FSFE Summit in September 2016, it does not plan to have any profits in the first place, so that is a non-issue.

While a UK CIC is not a charity and can trade freely, by law it still has to strictly act for public benefit and, for this reason, its assets and any trading surplus are locked. This means that assets (e.g. trade marks, money) coming into the CIC are not allowed to be spent or used otherwise than in the interests of the public community declared at incorporation. For Public Software, this means the publicly open communities using and/or developing free and open-source software (i.e. public software). Compliance with the public interest for a CIC also involves approval and monitoring by the Commissioner for Community Interest Companies, who is a UK government official.

The core services Public Software CIC provides to its member projects are:

  • accounting, including invoicing and purchasing
  • tax compliance and reporting
  • meeting legal compliance
  • legal, technical, and governance advice

These are covered by the base fee – 10% of a project’s income. This percentage seems to have become the norm (e.g. SFC charges the same). Public Software will also offer additional services (e.g. registering and holding a trade mark or domain name), but for these there will be additional fees to cover costs.

On the panel at QtCon, Phipps mentioned that it would also handle grants, including coordinating and reminding its member projects of deadlines to meet. But it would not write reports for the grants nor would it give loans against future payments from grants. Because many (especially EU) grants only pay out after the sponsored project comes to fruition, a new project that is seeking these grants should take this restriction into consideration.

Public Software CIC already hosts a project called Travel Spirit as a member and has a few projects waiting in the pipeline. While its focus is mainly on newly starting projects, it remains open to any project that would prefer a CIC. At QtCon, Phipps said that he feels it would be the best fit for smaller-scale projects that need help with setting up governance and other internal project rules. My personal (and potentially seriously wrong) prediction is that Public Software CIC would be a great fit for newly-established projects where a complex mishmash of stake holders would have to be coordinated – for example public-private collaborations.

A distinct feature of Public Software CIC is that it distinguishes between different intangible assets/rights and has different rules for them. The basic premise for all asset types is that no other single organisation should own anything from the member project; Public Software is not interested in being a “front” for corporate open source. But then the differences begin. Public Software CIC is perfectly happy and fit to hold trade marks, domain names, and such for its member projects (in fact, if a project holds a trade mark, Public Software would require a transfer). But on the other hand, it holds a firm belief that copyright should not be aggregated by default and that every developer should hold the rights to their own contribution if they are willing.

Apart from FOSS, the Public Software CIC is also open to open-source hardware or any free-culture projects joining. The ownership constraint might in practice prove troublesome for hardware projects, though.

Public Software CIC does not want to actively police license/copyright enforcement, but would try to assist a member project if it became necessary, as far as funds allowed. In fact when a project signs the memorandum of understanding to join the Public Software CIC, the responsibility for copyright enforcement explicitly stays with the project and is not transferred to the CIC. On the other hand, it would, of course, protect the other assets that it holds for a project (e.g. trade marks).

If a project wants to leave at some point, all the assets that the CIC held for it have to go to another asset-locked organisation approved by the UK’s Commissioner of CICs. That could include another UK CIC or charity, or an equivalent entity elsewhere such as a US 501(c)(3).

If all goes wrong with the CIC – due to a huge judgment against one of its member projects or any other reason – the CIC would be wound down and all the remaining member projects would be spun out into other asset-locked organisation(s). Any remaining assets would be transferred to the FSFE, which is also a backer of the CIC.

[The Commons Conservancy]

[The Commons Conservancy] (TCC) incorporated in October 2016 and is an Amsterdam-based Stichting, which is a foundation under Dutch law. TCC was set up by a group of technology veterans from the FOSS, e-science, internet-community, and digital-heritage fields. Its design and philosophy reflects lessons learned in over two decades of supporting FOSS efforts of all sizes in the realm of networking and information technology. It is supported by a number of experienced organisations such as NLnet Foundation (a grant-making organisation set up in the 1980s by pioneers of the European internet) and GÉANT (the European association of national education and research networks).

As TCC’s chairman Michiel Leenaars pointed out in the QtCon panel, the main goal behind TCC is to create a no-cost, legally sound mechanism to share responsibility for intangible assets among developers and organisations, to provide flexible fund-raising capabilities, and to ensure that the projects that join it will forever remain free and open. For that purpose it has invented some rather ingenious methods.

TCC concentrates on a limited list of services it offers, but wants to perfect those. It also aims at being lightweight and modular. As such, the basic services it offers are:

  • assurance that the intangible assets will forever remain free and open
  • governance rules with sane defaults (and optional additions)
  • status to receive charitable donations (to an account at a different organisation)

TCC requires from its member projects only that their governance and decision-making processes are open and verifiable, and that they act in the public benefit. For the rest, it allows the member projects much freedom and offers modules and templates for governance and legal documents solely as an option. The organisation strongly believes that decisions regarding assets and money should lie with the project, relieving the pressure and dependency on individuals. It promotes best practices but tries to keep out of the project’s decisions as much as possible.

TCC does not require that it hold intangible assets (e.g. copyrights, trade marks, patents, design rights) of projects, but still encourages that the projects transfer them to TCC if they want to make use of the more advanced governance modules. The organisation even allows the project to release binaries under a proprietary license, if needed, but under the strict condition that a full copy of the source code must forever remain FOSS.

Two of the advanced modules allow for frictionless sharing of intangible assets between member projects regardless of whether the outbound licenses of these projects are compatible or not. The “Asset Sharing DRACC” (TCC calls its documents “Directives and Regulatory Archive of [The Commons Conservancy]” or DRACC) enables developers to dedicate their contributions to several (or all) member projects at the same time. The “Programme Forking DRACC” enables easy sharing of assets between projects when a project forks, even though the forks might have different goals and/or outbound licenses.

As a further example, the “Hibernation of assets DRACC” solves another common issue – namely how to ensure a project can flourish even after the initial mastermind behind it is gone. There are countless projects out there that stagnated because their main developer lost interest, moved on, or even died. This module puts special rules in place for handling a project that has fallen dormant and for how the community can revive it afterwards to simply continue development. There are more such optional rule sets available for projects to adopt, including rules on how to leave TCC and join a different organisation.

This flexibility is furthered by the fact that by design TCC does not tie the project to any money-related services. To minimise risks, [The Commons Conservancy] does not handle money at all – its statutes literally even forbid it to open a bank account. Instead, it is setting up agreements with established charitable entities that are specialised in handling funds. The easiest option would be to simply use one of these charities to handle the project’s financial back-end (e.g. GÉANT has opted for NLnet Foundation), but projects are free to use any other financial back-end if they so desire.

Not only is the service TCC offers compatible with other services, it is also free as in beer, so using TCC’s services in parallel with some other organisation to handle the project’s finances does not increase a project’s costs.

TCC is able to handle projects that receive grants, but will not manage grants itself. There are plans to set up a separate legal entity to handle grants and other activities such as support contracts, but nothing is set in stone yet. For at least a subset of projects it would also be possible to apply for loans in anticipation of post-paid (e.g. EU) grants through NLnet.

A project may easily leave TCC whenever it wants, but there are checks and balances set in place to ensure that the project remains free and open even if it spins out to a new legal entity. An example is that a spun out (or “Graduated” as it is called in TCC) project leaves a snapshot of itself with TCC as a backup. Should the new entity fail, the hibernated snapshot can then be revived by the community.

TCC is not limited to software – it is very much open to hosting also open hardware and other “commons” efforts such as open educational resources.

TCC does not plan to be involved in legal proceedings – whether filing or defending lawsuits. Nor is it an interesting target, simply because it does not take in or manage any money. If anything goes wrong with a member project, the plan is to isolate that project into a separate legal entity and keep a (licensed) clone of the assets in order to continue development afterwards if possible.

Given the background of some of the founders of TCC (with deep roots in the beginnings of the internet itself), and the memorandum of understanding with GÉANT and NREN, it is not surprising that some of the first projects to join are linked to research and core network systems (e.g. eduVPN and FileSender). Its offering seems to be an interesting framework for already existing projects that want to ensure they will remain free and open forever; especially if they have or anticipate a wider community of interconnected projects that would benefit from the flexibility that TCC offers.

The Center for the Cultivation of Technology

The Center for the Cultivation of Technology (CCT) also incorporated in October 2016, as a German gGmbH, which is a non-profit limited-liability corporation. Further, the CCT is fully owned by the Renewable Freedom Foundation.

This is an interesting set-up, as it is effectively a company that has to act in the public interest and can handle tax-deductible donations. It is also able to deal with for-profit/commercial work, as long as the profit is reinvested into its public-benefit activities. On any activities that are not in the public interest, CCT would have to pay taxes. Of course, activities in the public interest have to represent the lion’s share of CCT’s work.

Its owner, the Renewable Freedom Foundation, in turn is a German Stiftung (i.e. foundation) whose mission is to “protect and preserve civil liberties, especially in the digital landscape” and has already helped fund projects such as Tor, GNUnet, and La Quadrature du Net.

While a UK CIC and a German gGmbH are both limited-liability corporations that have to act in the public interest, they have somewhat different legal and tax obligations and each has its own specifics. CCT’s purpose is “the research and development of free and open technologies”. For the sake of public authorities it defines “free and open technologies” as developments with results that are made transparent and that, including design and construction plans, source code, and documentation, are made available free and without licensing costs to the general public. Applying this definition, the CCT is inclusive of open-source hardware and potentially other technological fields.

Similar to the TCC, the CCT aims to be as lightweight by default as possible. The biggest difference, though, is that the Center for the Cultivation of Technology is first and foremost about handling money – as such its services are:

  • accounting and budgeting
  • financial, tax and donor reporting
  • setting up and managing of donations (including crowd-funding)
  • grant management and reporting
  • managing contracts, employment and merchandise

The business model is similar to that of PS CIC in that, for basic services, CCT will be taking 10% from incoming donations and that more costly tasks would have to be paid separately. There are plans to eventually offer some services for free, which would be covered by grants that CCT would apply for itself. In effect, it wants to take over the whole administrative and financial overhead from the project in order to allow the projects to concentrate on writing code and managing themselves.

Further still, the CCT has taken it upon itself to automate as much as possible, both through processes and software. If viable FOSS solutions are missing, it would write them itself and release the software under a FOSS license for the benefit of other FOSS legal entities as well.

As Stephan Urbach, its CEO, mentioned on the panel at QtCon, the CCT is not just able to handle grants for projects, but is also willing to take over reporting for them. Anyone who has ever partaken in an EU (or other) grant probably agrees that reporting is often the most painful part of the grant process. The raw data for the reports would, of course, still have to be provided by the project itself. But the CCT would then take care of relevant logistics, administration, and writing of the grant reports. The company is even considering offering loans for some grants, as soon as enough projects join to make the operations sustainable.

In addition, the Center for the Cultivation of Technology has a co-working office in Berlin, where member projects are welcome to work if they need office space. The CCT is also willing to facilitate in-person meetings or hackathons. Like the other two organisations, it has access to a network of experts and potential mentors, which it could resort to if one of its projects needed such advice.

Regarding whether it should hold copyright or not, the Center for the Cultivation of Technology is flexible, but at the very beginning it would primarily offer holding other intangible assets, such as domain names and trade marks. That being said, at least in the early phase of its existence, holding and managing copyright is not the top priority. Therefore the CCT has for now deferred the decision regarding its position on license enforcement and potential lawsuit strategy. Accounting, budgeting, and handling administrative tasks, as well as automation of them all, are clearly where its strengths lie and this is where it initially wants to pour most effort into.

Upon a dissolution of the company, its assets would fall to Renewable Freedom Foundation.

Since the founders of CCT have deep roots in anonymity and privacy solutions such as Tor, I imagine that from those corners the first wave of projects will join. As for the second wave, it seems to me that CCT would be a great choice for projects that want to offload as much of financial overhead as possible, especially if they plan to apply for grants and would like help with applying and reporting.

Conclusion

2016 may not have been the year of the Linux desktop, but it surely is the year of FOSS umbrella organisations. It is an odd coincidence that at the same time three so different organisations have popped up in Europe – initially oblivious of each other – to provide much-needed services to FOSS projects.

Not only are FOSS projects now spoiled for choice regarding such service providers in Europe, but it is also refreshing to see that these organisations get along so well from the start. For example, Simon Phipps is also an adviser at CCT and I help with both CCT and TCC.

In fact, I would not be surprised to see, instead of bitter competition, greater collaboration between them, allowing each to specialise in what it does best and allowing projects to mix and match services between them. For example, I can see how a project might want to pick TCC to handle its intangible assets, and at the same time use CCT to handle its finances. All three organisations have also stated that, should a project contact them that they feel would be better handled by one of the others, they would refer it to that organisation instead.

Since at least the legal and governance documents of CCT and TCC will be available on-line under a free license (CC0-1.0 and CC-BY-4.0 respectively), cross-pollination of ideas and even the setting up of new organisations will thereby be made easier. It may be early days for these three umbrella organisations, but I am quite optimistic about their usefulness and that they will fill in the gaps left open by their older US siblings and single-project organisations.

Update: TCC’s DRACC are already publicly available on-line.

If a project comes to the conclusion that it might need a legal entity, now is a great time to think about it. At FOSDEM 2017 there was another panel with CCT, TCC, PS CIC, and SFC where further questions and comments were asked.


Disclaimer: At the time of writing, I am working closely with two of the organisations – as the General Counsel of the Center for the Cultivation of Technology, and as co-author of the legal and governance documents (the DRACC) of [The Commons Conservancy]. This article does not constitute the official position of either of the two organisations nor any other I might be affiliated with.

Note: This article first appeared in LWN on 1 February 2017. This here is a slightly modified and updated version of it.


hook out → coming soon: extremely exciting stuff regarding the FLA 2.0

Friday, 17 February 2017

Redux-style middleware with NoFlo

Henri Bergius | 00:00, Friday, 17 February 2017

This post talks about some useful patterns for dataflow architecture in NoFlo web applications. We’re using these concepts to build Flowhub, the flow-based programming IDE.

Flux is an application architecture for web applications published by Facebook back in 2014. It uses a unidirectional data flow heavily inspired by flow-based programming concepts — events are sent from views to a dispatcher, which directs them to the appropriate data stores. The stores modify application state based on these events, and send updated state back to the view.

This structure allows us to reason easily about our application in a way that is reminiscent of functional reactive programming, or more specifically data-flow programming or flow-based programming, where data flows through the application in a single direction — there are no two-way bindings. Application state is maintained only in the stores, allowing the different parts of the application to remain highly decoupled. Where dependencies do occur between stores, they are kept in a strict hierarchy, with synchronous updates managed by the dispatcher.

Given its nature, the Flux pattern is quite easy to implement in NoFlo. Here is an example of a simple web-based TODO list using a Flux-esque NoFlo graph communicating with a React component:

Flux-style dataflow in NoFlo

In the image above you can see the graph in the middle, with the rendered React application on the right, and on the left an edge inspector showing the packets flowing from the view to the dispatcher.

In this example we decided to use bracket IPs to convey the action type. This allows any payload to be sent as the action, and lets us use a standard NoFlo router component for packet dispatching.

Problems with Flux in NoFlo

We’ve been following a very similar Flux-like pattern to the example above also in Flowhub, a flow-based programming IDE implemented in NoFlo. Over time the flows started becoming messy because:

  • Different stores would need access to different parts of application state
  • Some stores needed to generate their own actions
  • Some actions would need to pass through multiple stores

Since the only way to transmit information between components in NoFlo is to send it as packets along a connection, these interdependencies cause a lot of wiring back and forth. Visual spaghetti code!

To find a better approach, I sat down a couple of months ago with Moritz, a former colleague who has done quite a bit of work with both Flux and Redux. He suggested looking at Redux middleware as a pattern to follow.

Introducing middleware

Redux is a recent refinement on the Flux pattern that has become quite popular. One of the concepts it adds on top of Flux is middleware, something that is more common in server-side programming frameworks like Express:

Redux middleware solves different problems than Express or Koa middleware, but in a conceptually similar way. It provides a third-party extension point between dispatching an action, and the moment it reaches the reducer. People use Redux middleware for logging, crash reporting, talking to an asynchronous API, routing, and more.

Middleware can be chained so that all actions pass through each of them. A middleware that receives an action can either pass it on (maybe logging it on the way), or capture it and send out new actions instead.

Here is what a middleware looks like as a NoFlo component:

Redux-style middleware as NoFlo component

Actions arrive at the in port. If the middleware passes them along, it will send them via the pass port, and if it instead generates new actions, these will be sent via the new port.
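Here is a minimal, hedged sketch of such a middleware as a Process API component. The port naming follows the in/pass/new convention described above, and the handling of the bracketed action types is deliberately simplified:

var noflo = require('noflo');

exports.getComponent = function () {
  var c = new noflo.Component();
  c.inPorts.add('in', { datatype: 'object' });
  c.outPorts.add('pass', { datatype: 'object' });
  c.outPorts.add('new', { datatype: 'object' });

  c.process(function (input, output) {
    if (!input.hasData('in')) {
      return;
    }
    var action = input.getData('in');
    // A logging middleware: observe the action and pass it along unchanged.
    // A middleware that captures actions would instead send replacements
    // via the 'new' port and skip the 'pass' port.
    console.log('action', action);
    output.sendDone({ pass: action });
  });
  return c;
};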

With this structure, chaining becomes very simple:

Chaining NoFlo middleware

The image above is from the main graph of Flowhub. In Flowhub we have both very simple middleware, like the logger that just writes the event details to the developer console, and more complex ones like the UserMiddleware that deals with user information and OAuth, or RuntimeMiddleware that handles communications with FBP runtimes.

The middleware themselves can be generator components, listening for external events and creating new actions based on them. This way, for example, the UrlMiddleware generates new actions when the application URL changes, allowing the other middleware to then load the appropriate content for that particular screen.

Actions and state

In the Flowhub graph, the actions are sent as NoFlo information packets from the view to the middleware chain. The packet flow of an action looks like the following:

< github
< pull
{
  "payload": {
    "repo": "noflo/noflo-ui"
  },
  "state": {
    // The application state when the action was triggered
  }
}
>
>

The brackets surrounding the action payload tell the action type, in this case github:pull. The data packet contains both the actual action payload, and the application state as it was when the action was triggered.

Including the state object means that each middleware can access the parts of the application state it is interested in while dealing with the action, removing interdependencies between them.

It also means that middleware become super easy to test, as you can send any kind of state/action combinations to exercise different flow paths.

Some of our middleware tests

Current status

I’ve started introducing middleware quite carefully to the Flowhub code base, so right now we’re running a mix of old-style Fluxified stores and new-style middleware/reducer combinations. The idea is to migrate different parts of the flow to the new pattern subgraph-by-subgraph as we fix bugs and add features.

So far this pattern has felt quite comfortable to work with. It makes testing easier, and fits generally well in how NoFlo does FBP.

If you’d like to try building something with Redux-style middleware and NoFlo, it is a good idea to take a peek at the middleware graphs in the Flowhub repo. And if you have questions or comments, get in touch!

KDE Applications 17.04 Schedule finalized

TSDgeos' blog | 08:45, Friday, 17 February 2017

It is available at the usual place https://community.kde.org/Schedules/Applications/17.04_Release_Schedule

Dependency freeze is in 4 weeks and Feature Freeze in 5 weeks, so hurry up!

Wednesday, 15 February 2017

Freedom to repair.

tobias_platen's blog | 06:18, Wednesday, 15 February 2017

On the I love Free Software day iFixit posted an article “iFixit Loves Repair”. For me repair is freedom. The freedom to repair is just as important as Stallman’s four freedoms. I think that computers should come with free repair manuals.

Many Apple products are hard or even impossible to repair. Repair at certified shops is expensive, but you can repair them yourself if you use tools and manuals from iFixit.
I once had a Mac with a mechanical defect; I could not buy replacement parts and did not have a repair manual at that time.
But I was able to change the battery without using any tools.
With newer hardware such as the iThings, users cannot replace the battery without a special screwdriver for the pentalobe screws.
In the Apple world everything is proprietary. The Lightning connector is only found on Apple hardware and is incompatible with standardized USB ports.
There is also an authentication chip that implements hardware DRM. By contrast, the Fairphone uses USB, which has a standardized charging protocol.
You just need to add a 200 Ohm resistor between the data lines and connect the two power lines to 5 Volts. I once built this circuit on a breadboard.
Changing the battery is easy; no tools are needed.

Fairphone Fixed

On many laptops it is not easy to replace the hard disk, keyboard or RAM. But on some ThinkPads supported by libreboot you only have to remove four screws to replace the keyboard.
Flashing libreboot for the first time requires removing more screws, but this only needs to be done once.

Open Thinkpad running libreboot. This little chip here is the keyboard controller

Tuesday, 14 February 2017

The most sincere form of flattery

English – Björn Schießle's Weblog | 10:51, Tuesday, 14 February 2017

Looking for Freedom

CC BY-ND 2.0 by Daniel Lee

Nextcloud has now existed for almost exactly 8 months. During this time we have put a lot of effort into polishing existing features and developing new functionality which is crucial to the success of our users and customers.

As promised, everything we do is Free Software (also called Open Source), licensed under the terms of the GNU AGPLv3. This gives our users and customers the greatest possible flexibility and independence. The ability to use, study, share and improve the software also allows integrating our software into other cloud solutions as long as you respect the license, and we are happy to see that people actively make use of these rights.

Code appearing in other app stores

We are proud to see that the quality of our software is acknowledged not only by our own users but also by users of other cloud solutions. Recently more and more of our applications have shown up in the ownCloud App Store. For example the community-driven News app or the Server Info app, developed by the Nextcloud GmbH. Additionally, we have heard that our SAML authentication application is widely considered to be of far better quality than other, even proprietary, alternatives, and is used by customers of our competitors, especially in the educational market. All this is completely fine as long as the combination of the two – our application and the rest of the server – is licensed under the terms of the GNU AGPLv3.

Not suitable for mixing with enterprise versions

While we can’t actively work on keeping our applications compatible with other cloud solutions, we welcome every third-party effort to do so. There is one drawback: most of the other cloud solutions out there make a distinction between home users and enterprises at the license level. While home users get the software under a Free Software license, compatible with the license of our applications, enterprise customers don’t get the same freedom and independence, and those versions are therefore not compatible with the license we have chosen. This means that all the users who use proprietary cloud solutions (often hidden behind the term “Enterprise version”) are not able to legally use our applications. We feel sorry for them, but of course a solution exists – get support from the people who wrote your software rather than from a different company. In general, we would recommend buying support for real Free Software and not just Open Source marketing.

Of course we don’t want to sue people for copyright violations. But Frank chose the AGPL license 7 years ago on purpose, and we want to make sure that the users of our software understand the license and its implications. In a nutshell, the GNU AGPLv3 gives you the right to do with the software whatever you want, and most importantly all the entrepreneurial freedom and independence your business needs, as long as the combined work is again licensed under the GNU AGPLv3. By combining GNU AGPLv3 applications with a proprietary server, you violate this rule and thus the terms of the license. I hope that other cloud solutions are aware of this problem, created by their open-core business model, and take some extra steps to protect their customers from violating the license of other companies and individual contributors. For example by performing a license check before an application gets enabled.

Open Core is a bad model for software development

This is one of many problems arising from the use of open-core business models. It puts users at risk if they combine the proprietary part with Free Software; more about it can be read here. That’s why we recommend that all enterprise and home users avoid situations where proprietary and freely licensed software are combined. This is a legal minefield. We at Nextcloud decided to take a clear stance on it. Everything is Free Software and there is only one version of the software for both home users and enterprises. This allows every home user, customer or partner to use all available applications as long as they respect the license.

Thanks to Free Software contributors in the public service

Matthias Kirschner's Web log - fsfe | 07:40, Tuesday, 14 February 2017

Today is another "I love Free Software" day. People all over the world are again thanking Free Software contributors for their work: Activists in Berlin illuminated the Reichstag and other government buildings with messages about Free Software. The German FSFE team made sure that all members of the German parliament receive a letter attached to a flower (German) to remind them about the importance of Free Software, also having in mind the upcoming federal election. While I type there is much more going on, and I hope it will not stop until tomorrow.

Group working at the OGP workshop on the FOSS contributor policy template

I dedicate this year's #ilovefs thank you to all the dedicated Free Software contributors in the public service all over the world. As a civil servant it is always way more convenient to do what everyone else always does (you might remember the old saying "Nobody gets fired for buying IBM"). Like in big corporations, changing software in the public administration means a lot of work, and the positive outcome will not be seen for many years. Maybe nobody will realise in a few years that you were the one people should thank.

That is why I deeply admire people contributing to Free Software in public administrations and enabling others to do so. In the long run this will increase their government's digital sovereignty.

A special thanks goes to an awesome working group I met at the Open Government Partnership (OGP) Summit in Paris in December 2016. The group was coordinated by the French government and included people from governments all over the world, as well as people from the Linux Foundation, the FSFE and other Free Software supporters. We worked together on the FOSS contributor policy template. Its mission is to:

  • Define a template Free Software contribution policy that governments can instantiate
  • Increase contributions from civil servants and subcontractors working for governments
  • Help governments interact and work together
  • Propose best practices on engaging with open-source communities and contributing new projects

So thank you to the people who are working on this and thank you very much to all the Free Software supporters in public administrations all over the world!

I love astroid! #ilovefs

English – Max's weblog | 07:30, Tuesday, 14 February 2017

Hugo and me declaring our love to astroid

You cannot imagine how long I’ve waited to write this blog post. Normally I’m not the bragging kind of guy but for this year’s edition of my „I love Free Software“ declaration articles (after 2014, 2015 and 2016) I just want to shout out to the world: I have the world’s best mail client: astroid!

Okay, maybe I’ll add two or three words to explain why I am so grateful to the authors of this awesome Free Software application. Firstly, I should note that until ~6 months ago I used Thunderbird – extended with lots of add-ons, but still a mail user agent that most of you will know. But with each new email and project it became obvious to me that I had to find a better way to organise my tens of thousands of mails: not folder-based but tag-based, yet not at the expense of overview and comfort.

Thanks to Hugo I became aware of astroid, an application that unites my needs and is open to multiple workflows. Let’s read how astroid describes itself:

Astroid is a lightweight and fast Mail User Agent that provides a graphical interface to searching, display and composing email, organized in thread and tags. Astroid uses the notmuch backend for blazingly fast searches through tons of email. Astroid searches, displays and composes emails – and rely on other programs for fetching, syncing and sending email.

My currently unread and tagged emails

Astroid is roughly 3 years old, is based on sup, and is mainly developed by Gaute Hope, an awesome programmer who encourages people – also non-programmers like me – to engage in the small and friendly community.

Why is astroid so cool?

That’s one secret of astroid: it doesn’t try to catch up with programs that already do certain jobs very well. So astroid relies on external programs for POP/IMAP fetching (e.g. offlineimap), mail sending (e.g. msmtp), email indexing (notmuch), and mail editing (e.g. vim, emacs). This way, astroid can concentrate on offering a unique interface that unites many strengths:

Saved searches on the left, a new editor window on the right

  • astroid encourages you to use tabs. Email threads open in a new tab, a newly composed message is a separate tab, and so is a search query. You won’t lose any information when you write an email while researching in your archive while keeping an eye on incoming unread mails. If your tab bar becomes too long, just open another astroid instance.
  • It can be used with either keyboard or mouse. Beginners value having a similar experience to mouse-based mail agents like Thunderbird; experts hunt through their mails with the configurable keyboard shortcuts.
  • Tagging of emails is blazingly fast and efficient. You can either tag single mails or whole email threads with certain keywords that you can freely choose. Astroid doesn’t impose a certain tagging scheme on its users.
  • astroid already includes the ability to read HTML or GPG-encrypted emails. No need to create a demotivatingly huge configuration file like with mutt.
  • Theming your personal astroid is easy. The templates can be configured using HTML and CSS syntax.
  • It is expandable by Python and lua plugins.
  • It’s incredibly fast! Thunderbird or Evolution users will never have to bother with 20+ second startup times anymore. Efficiency hooray!

    On startup, I see my saved search queries

Because it is open to any workflow, you can also easily use astroid with rather uncommon workflows. I, personally, use a mix of folder- and tag-based sorting. My mail server automatically moves incoming mails to certain folders (mostly based on mailing lists), which is important to me because I often use my mobile phone, which doesn’t include a tag-based email client. But with my laptop I can add additional tags or tag unsorted mails. Based on these tags, I again sort the mails into certain folders to reduce the amount of mail lying around in my unsorted inbox. Such a strange setup would have been impossible with many other email agents, but with astroid (almost) everything is possible.

Did I convince you? Well, certainly not. Switching one’s email client is a huge step, because for most people it involves changing the way most of their digital communication happens. But hopefully I convinced you to have a look at astroid and think about whether this awesome client may fulfill some of your demands better than your existing one does. If you already use notmuch, a local SMTP server, offlineimap, procmail or other required parts, testing astroid will be very easy for you. And if your path to astroid turns out to be longer, as mine was, feel free to ask me or the helpful community.

PS: FSFE activists in Berlin carried out two awesome activities for ILoveFS!

LDAP User Authentication on CentOS 7

Evaggelos Balaskas - System Engineer | 00:19, Tuesday, 14 February 2017

prerequisites

You need to already have an LDAP instance in your infrastructure that you can reach from your test Linux machine. Your LDAP directory has an organizational unit for people and one for groups.

LDAP server conf

It is always a good idea to write down your notes/settings beforehand:

Ldap Server: myldapserver.example.org
Domain Component: dc=example,dc=org

People base: ou=people,dc=example,dc=org
Group base: ou=Groups,dc=example,dc=org

bind user: userpam
bind pass: 1234567890
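
Before touching the client at all, it can be worth confirming that the server and base DN above are actually reachable from your test machine. A quick sketch with an anonymous ldapsearch (from the openldap-clients package), assuming your server allows anonymous reads:

# yum -y install openldap-clients
# ldapsearch -x -H ldap://myldapserver.example.org -b "dc=example,dc=org" -s base

If this returns the base entry instead of a connection error, the network and DNS side is fine.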

Installation

On your CentOS 7 machine, you have to install the nss-pam-ldapd package (yum also pulls in nscd):


# yum -y install nss-pam-ldapd

  Installing : nscd-2.17-157.el7_3.1.x86_64
  Installing : nss-pam-ldapd-0.8.13-8.el7.x86_64

local LDAP name service daemon

Edit the /etc/nslcd.conf file according to your LDAP setup.


# grep -Ev '#|^$' /etc/nslcd.conf
uid nslcd
gid ldap

uri ldap://myldapserver.example.org
base ou=people,dc=example,dc=org

ssl no
tls_cacertdir /etc/openldap/cacerts

This is the most basic configuration, without any TLS (encryption) support, but it is fine for our test purposes.
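
If you later want the same lookups over an encrypted channel, nslcd supports STARTTLS. A minimal sketch of the relevant options (the CA file path below is just an example, adjust it to wherever your CA certificate lives):

ssl start_tls
tls_reqcert demand
tls_cacertfile /etc/openldap/cacerts/ca.crt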

restart nslcd

Every time you change something in the nslcd.conf file, you need to restart the service:

# systemctl restart nslcd
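
If you also want nslcd (and the nscd caching daemon that was pulled in) to start automatically after a reboot, enable them as well; the service status and the journal are the quickest places to spot typos in nslcd.conf:

# systemctl enable nslcd nscd
# systemctl status nslcd
# journalctl -u nslcd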

Name Service Switch

By default (after installing nss-pam-ldapd) the Name Service Switch has LDAP support for the databases below:


# grep ldap /etc/nsswitch.conf

passwd:     files sss ldap
shadow:     files sss ldap
group:      files sss ldap
netgroup:   files sss ldap
automount:  files ldap

If not, just add it yourself. Remember that the order matters and is read from left to right: your CentOS machine will first look in the local files, then ask the System Security Services Daemon (sssd), and finally query your LDAP server.
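
As a side note, CentOS 7 also ships the authconfig tool, which can wire up nsswitch.conf and the PAM stack for LDAP in one command. Keep in mind that it rewrites /etc/nslcd.conf and /etc/nsswitch.conf, so it would overwrite the manual edits from this post; a hedged sketch with the example server and base DN from above:

# authconfig --enableldap --enableldapauth \
    --ldapserver=ldap://myldapserver.example.org \
    --ldapbasedn="dc=example,dc=org" \
    --update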

Testing

In this first step, the simplest way to test that your Linux machine can talk to your LDAP server is via getent, looking up the passwd database:


# getent passwd | grep ebal

ebal:x:374:374:Evaggelos Balaskas:/home/ebal:/bin/bash

LDAP Bind Password

The above example uses an anonymous bind against your LDAP server. That means secrets such as the user’s password (i.e. the encrypted hash) cannot be read, and therefore cannot be verified at login; for that you need to bind to your LDAP server with proper credentials.
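
Before putting the bind DN and password into nslcd.conf, you can verify them directly against the server with ldapsearch; a small sketch using the example credentials and user from this post:

# ldapsearch -x -H ldap://myldapserver.example.org \
    -D "cn=userpam,dc=example,dc=org" -w 1234567890 \
    -b "ou=people,dc=example,dc=org" "(uid=ebal)"

If the bind fails, you will typically see an "Invalid credentials (49)" error instead of the entry.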


# egrep -v '^$|#' /etc/nslcd.conf

uid nslcd
gid ldap
uri ldap://myldapserver.example.org
base ou=people,dc=example,dc=org

binddn cn=userpam,dc=example,dc=org
bindpw 1234567890

ssl no
tls_cacertdir /etc/openldap/cacerts
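
Since nslcd.conf now carries the bind password in clear text, make sure only root can read it (the package default should already be restrictive, but it does not hurt to check):

# chown root:root /etc/nslcd.conf
# chmod 600 /etc/nslcd.conf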

restart nslcd


 # systemctl restart nslcd

Testing

Now it’s time for your first ssh login:

~> ssh testvm
ebal@testvm's password: 

Last login: Mon Feb 13 22:50:12 2017
/usr/bin/id: cannot find name for group ID 374

~>  id
uid=374(ebal) gid=374 groups=374

You can log in without a problem, but there is a warning about your group ID: nslcd only searches the people base so far, so the numeric GID 374 cannot be resolved to a group name.
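
You can see the same thing directly on the machine: until the group base is configured, the numeric GID cannot be resolved through NSS:

# getent group 374

This prints nothing (and exits with status 2, i.e. key not found) until the group base is added below.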

LDAP Group Configuration

So we need to add our group base to the nslcd configuration file:


# egrep -v '^$|#' /etc/nslcd.conf

uid nslcd
gid ldap
uri ldap://myldapserver.example.org
base ou=people,dc=example,dc=org
binddn cn=userpam,dc=example,dc=org
bindpw 1234567890

base group ou=Groups,dc=example,dc=org

ssl no
tls_cacertdir /etc/openldap/cacerts

restart nslcd

# systemctl restart nslcd

Testing

We first test it with getent using the group database:

# getent group | grep 374
ebal:*:374

and after that, we can ssh again to our linux machine:

~> ssh testvm
ebal@testvm's password:
Last login: Mon Feb 13 23:14:42 2017 from testserver

~> id
uid=374(ebal) gid=374(ebal) groups=374(ebal)

Now it shows the group name without a problem.
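
As a final sanity check (and after any later change to nslcd.conf), a few quick lookups tell you whether both the user and the group resolve through LDAP:

# getent passwd ebal
# getent group ebal
# id ebal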
