Planet Fellowship (en)

Monday, 20 February 2017

Three new FOSS umbrella organisations in Europe

Hook’s Humble Homepage | 22:00, Monday, 20 February 2017

Last year, three new umbrella organisations for free and open-source software (and hardware) projects emerged in Europe. Their aim is to cater to the needs of the community by providing a legal entity for projects to join, leaving the projects free to focus on technical and community tasks. These organisations (Public Software CIC, [The Commons Conservancy], and the Center for the Cultivation of Technology) will take on the overhead of actually running a legal entity themselves.

Among other services, they offer to handle donations, accounting, grants, legal compliance, or even complex governance for the projects that join them. In my opinion (and, seemingly, theirs) such services are useful to these kinds of projects; some of the options that these three organisations bring to the table are quite interesting and inventive.

The problem

As a FOSS or OSHW project grows, it is likely to reach a point where it requires a legal entity for better operation – whether to gather donations, pay for development, handle finances, organise events, increase license predictability and flexibility by consolidating rights, help with better governance, or for other reasons. For example, when a project starts to hold assets – domain names, trade marks, or even just receives money through donations – that should not be the responsibility of one single person, but should, instead, be handled by a legal entity that aligns with the project’s goals. A better idea is to have an entity to take over this tedious, but needed, overhead from the project and let the contributors simply carry on with their work.

So far, the options available to a project have been either to establish its own organisation or to join an existing one, neither of which may fit the project well. The existing organisations are either specialised in a specific technology or are among the few technology-neutral umbrella organisations in the US, such as Software in the Public Interest, the Apache Software Foundation, or the Software Freedom Conservancy (SFC). If there is already a technology-specific organisation (e.g. GNOME Foundation, KDE e.V., Plone Foundation) that fits a project’s needs, that may well make a good match.

The problem with setting up a separate organisation is that it takes ongoing time and effort that would be much better spent on the project’s actual goals. This goes double and quadruple for running it and meeting the annual official obligations – filing tax forms, proper reporting, making sure everything is in line with internal rules as well as laws, and so on. To make matters worse, failure to do so might result in personal liability for the project leaders that can easily reach thousands or tens of thousands of euros or US dollars.

Cross-border donations are tricky to handle, can be expensive if a currency change is needed, and are rarely tax-deductible. If a project has most of its community in Europe, it would make sense to use a European legal entity.

What all three new European organisations have in common is that none demands a specific outbound license for the projects they manage (as opposed to the Apache Software Foundation, for example), as long as it falls under one of the generally accepted free and open licenses. The organisations must also have internal rules that bind them to act in the public interest (which is the closest approximation to FOSS you can get when it comes to government authorities). Where they differ is in the set of services they offer and how much governance oversight they provide.

Public Software CIC

Public Software CIC incorporated in February 2016 as a UK-based Community Interest Company. It is a fiduciary sponsor and administrative service provider for free and open source projects – what it calls public software – in Europe.

While it is not for profit, a Community Interest Company (CIC) is not a charity organisation; the other two new organisations are charities. In the opinion of Public Software’s founders, the tax-deductibility that comes with a charitable status does not deliver benefits that outweigh the limitations such a status brings for smaller projects. Tax recovery on cross-border charitable donations is hard and expensive even where it is possible. Another typical issue with charities is that even when for-profit activities (e.g. selling T-shirts) are allowed, these are throttled by law and require more complex accounting – this situation holds true both for most European charities and for US 501(c)(3) charitable organisations.

Because Public Software CIC is not a charity, it is allowed to trade and has to pay taxes if it has a profit at the end of its tax year. But as Simon Phipps, one of the two directors, explained at a panel at QtCon / FSFE Summit in September 2016, it does not plan to have any profits in the first place, so that is a non-issue.

While a UK CIC is not a charity and can trade freely, by law it still has to strictly act for public benefit and, for this reason, its assets and any trading surplus are locked. This means that assets (e.g. trade marks, money) coming into the CIC are not allowed to be spent or used otherwise than in the interests of the public community declared at incorporation. For Public Software, this means the publicly open communities using and/or developing free and open-source software (i.e. public software). Compliance with the public interest for a CIC also involves approval and monitoring by the Commissioner for Community Interest Companies, who is a UK government official.

The core services Public Software CIC provides to its member projects are:

  • accounting, including invoicing and purchasing
  • tax compliance and reporting
  • meeting legal compliance
  • legal, technical, and governance advice

These are covered by the base fee – 10% of a project’s income. This percentage seems to have become the norm (e.g. SFC charges the same). Public Software will also offer additional services (e.g. registering and holding a trade mark or domain name), but for these there will be additional fees to cover costs.

On the panel at QtCon, Phipps mentioned that it would also handle grants, including coordinating and reminding its member projects of deadlines to meet. But it would not write reports for the grants nor would it give loans against future payments from grants. Because many (especially EU) grants only pay out after the sponsored project comes to fruition, a new project that is seeking these grants should take this restriction into consideration.

Public Software CIC already hosts a project called Travel Spirit as a member and has a few projects waiting in the pipeline. While its focus is mainly on newly starting projects, it remains open to any project that would prefer a CIC. At QtCon, Phipps said that he feels it would be the best fit for smaller-scale projects that need help with setting up governance and other internal project rules. My personal (and potentially seriously wrong) prediction is that Public Software CIC would be a great fit for newly-established projects where a complex mishmash of stakeholders has to be coordinated – for example public-private collaborations.

A distinct feature of Public Software CIC is that it distinguishes between different intangible assets/rights and has different rules for them. The basic premise for all asset types is that no other single organisation should own anything from the member project; Public Software is not interested in being a “front” for corporate open source. But then the differences begin. Public Software CIC is perfectly happy and fit to hold trade marks, domain names, and such for its member projects (in fact, if a project holds a trade mark, Public Software would require a transfer). But on the other hand, it holds a firm belief that copyright should not be aggregated by default and that every developer should hold the rights to their own contribution if they so wish.

Apart from FOSS, the Public Software CIC is also open to open-source hardware or any free-culture projects joining. The ownership constraint might in practice prove troublesome for hardware projects, though.

Public Software CIC does not want to actively police license/copyright enforcement, but would try to assist a member project if it became necessary, as far as funds allowed. In fact when a project signs the memorandum of understanding to join the Public Software CIC, the responsibility for copyright enforcement explicitly stays with the project and is not transferred to the CIC. On the other hand, it would, of course, protect the other assets that it holds for a project (e.g. trade marks).

If a project wants to leave at some point, all the assets that the CIC held for it have to go to another asset-locked organisation approved by the UK’s Commissioner of CICs. That could include another UK CIC or charity, or an equivalent entity elsewhere such as a US 501(c)(3).

If all goes wrong with the CIC – due to a huge judgment against one of its member projects or any other reason – the CIC would be wound down and all the remaining member projects would be spun out into other asset-locked organisation(s). Any remaining assets would be transferred to the FSFE, which is also a backer of the CIC.

[The Commons Conservancy]

[The Commons Conservancy] (TCC) incorporated in October 2016 and is an Amsterdam-based Stichting, which is a foundation under Dutch law. TCC was set up by a group of technology veterans from the FOSS, e-science, internet-community, and digital-heritage fields. Its design and philosophy reflects lessons learned in over two decades of supporting FOSS efforts of all sizes in the realm of networking and information technology. It is supported by a number of experienced organisations such as NLnet Foundation (a grant-making organisation set up in the 1980s by pioneers of the European internet) and GÉANT (the European association of national education and research networks).

As TCC’s chairman Michiel Leenaars pointed out in the QtCon panel, the main goal behind TCC is to create a no-cost, legally sound mechanism to share responsibility for intangible assets among developers and organisations, to provide flexible fund-raising capabilities, and to ensure that the projects that join it will forever remain free and open. For that purpose it has invented some rather ingenious methods.

TCC concentrates on a limited list of services it offers, but wants to perfect those. It also aims at being lightweight and modular. As such, the basic services it offers are:

  • assurance that the intangible assets will forever remain free and open
  • governance rules with sane defaults (and optional additions)
  • status to receive charitable donations (to an account at a different organisation)

TCC requires from its member projects only that their governance and decision-making processes are open and verifiable, and that they act in the public benefit. For the rest, it allows the member projects much freedom and offers modules and templates for governance and legal documents solely as an option. The organisation strongly believes that decisions regarding assets and money should lie with the project, relieving the pressure and dependency on individuals. It promotes best practices but tries to keep out of the project’s decisions as much as possible.

TCC does not require that it hold intangible assets (e.g. copyrights, trade marks, patents, design rights) of projects, but still encourages that the projects transfer them to TCC if they want to make use of the more advanced governance modules. The organisation even allows the project to release binaries under a proprietary license, if needed, but under the strict condition that a full copy of the source code must forever remain FOSS.

Two of the advanced modules allow for frictionless sharing of intangible assets between member projects, regardless of whether the outbound licenses of these projects are compatible or not. The “Asset Sharing DRACC” (TCC calls its documents “Directives and Regulatory Archive of [The Commons Conservancy]” or DRACC) enables developers to dedicate their contributions to several (or all) member projects at the same time. The “Programme Forking DRACC” enables easy sharing of assets between projects when a project forks, even though the forks might have different goals and/or outbound licenses.

As a further example, the “Hibernation of assets DRACC” solves another common issue – namely how to ensure a project can flourish even after the initial mastermind behind it is gone. There are countless projects out there that stagnated because their main developer lost interest, moved on, or even died. This module puts special rules in place for handling a project that has fallen dormant and for how the community can later revive it and simply continue development. There are more such optional rule sets available for projects to adopt, including rules on how to leave TCC and join a different organisation.

This flexibility is furthered by the fact that by design TCC does not tie the project to any money-related services. To minimise risks, [The Commons Conservancy] does not handle money at all – its statutes literally even forbid it to open a bank account. Instead, it is setting up agreements with established charitable entities that are specialised in handling funds. The easiest option would be to simply use one of these charities to handle the project’s financial back-end (e.g. GÉANT has opted for NLnet Foundation), but projects are free to use any other financial back-end if they so desire.

Not only is the service TCC offers compatible with other services, it is also free as in beer, so using TCC’s services in parallel with some other organisation to handle the project’s finances does not increase a project’s costs.

TCC is able to handle projects that receive grants, but will not manage grants itself. There are plans to set up a separate legal entity to handle grants and other activities such as support contracts, but nothing is set in stone yet. For at least a subset of projects it would also be possible to apply for loans in anticipation of post-paid (e.g. EU) grants through NLnet.

A project may easily leave TCC whenever it wants, but there are checks and balances set in place to ensure that the project remains free and open even if it spins out to a new legal entity. An example is that a spun out (or “Graduated” as it is called in TCC) project leaves a snapshot of itself with TCC as a backup. Should the new entity fail, the hibernated snapshot can then be revived by the community.

TCC is not limited to software – it is also very much open to hosting open hardware and other “commons” efforts such as open educational resources.

TCC does not plan to be involved in legal proceedings – whether filing or defending lawsuits. Nor is it an interesting target, simply because it does not take in or manage any money. If anything goes wrong with a member project, the plan is to isolate that project into a separate legal entity and keep a (licensed) clone of the assets in order to continue development afterwards if possible.

Given the background of some of the founders of TCC (with deep roots in the beginnings of the internet itself), and the memorandum of understanding with GÉANT and NREN, it is not surprising that some of the first projects to join are linked to research and core network systems (e.g. eduVPN and FileSender). Its offering seems to be an interesting framework for already existing projects that want to ensure they will remain free and open forever; especially if they have or anticipate a wider community of interconnected projects that would benefit from the flexibility that TCC offers.

The Center for the Cultivation of Technology

The Center for the Cultivation of Technology (CCT) also incorporated in October 2016, as a German gGmbH, which is a non-profit limited-liability corporation. Further, the CCT is fully owned by the Renewable Freedom Foundation.

This is an interesting set-up, as it is effectively a company that has to act in the public interest and can handle tax-deductible donations. It is also able to take on for-profit/commercial work, as long as the profit is reinvested into its public-benefit activities. CCT would have to pay taxes on any activities that are not in the public interest, and of course the activities in the public interest have to represent the lion’s share of CCT’s work.

Its owner, the Renewable Freedom Foundation, in turn is a German Stiftung (i.e. foundation) whose mission is to “protect and preserve civil liberties, especially in the digital landscape” and has already helped fund projects such as Tor, GNUnet, and La Quadrature du Net.

While a UK CIC and a German gGmbH are both limited-liability corporations that have to act in the public interest, they have somewhat different legal and tax obligations and each has its own specifics. CCT’s purpose is “the research and development of free and open technologies”. For the sake of public authorities, it defines “free and open technologies” as developments whose results are made transparent and which – including design and construction plans, source code, and documentation – are made available to the general public free of charge and without licensing costs. Under this definition, the CCT is open to open-source hardware and potentially other technological fields as well.

Similar to the TCC, the CCT aims to be as lightweight by default as possible. The biggest difference, though, is that the Center for the Cultivation of Technology is first and foremost about handling money – as such its services are:

  • accounting and budgeting
  • financial, tax and donor reporting
  • setting up and managing of donations (including crowd-funding)
  • grant management and reporting
  • managing contracts, employment and merchandise

The business model is similar to that of PS CIC in that, for basic services, CCT will take 10% of incoming donations, while more costly tasks will have to be paid for separately. There are plans to eventually offer some services for free, covered by grants that CCT would apply for itself. In effect, it wants to take over the whole administrative and financial overhead from its projects in order to let them concentrate on writing code and managing themselves.

Further still, the CCT has taken it upon itself to automate as much as possible, both through processes and through software. If viable FOSS solutions are missing, it will write them itself and release the software under a FOSS license, for the benefit of other FOSS legal entities as well.

As Stephan Urbach, its CEO, mentioned on the panel at QtCon, the CCT is not just able to handle grants for projects, it is also willing to take over reporting for them. Anyone who has ever taken part in an EU (or other) grant probably agrees that reporting is often the most painful part of the grant process. The raw data for the reports would, of course, still have to be provided by the project itself, but the CCT would then take care of the relevant logistics, administration, and writing of the grant reports. The company is even considering offering loans against some grants, as soon as enough projects join to make the operations sustainable.

In addition, the Center for the Cultivation of Technology has a co-working office in Berlin, where member projects are welcome to work if they need office space. The CCT is also willing to facilitate in-person meetings or hackathons. Like the other two organisations, it has access to a network of experts and potential mentors, which it can call on if one of its projects needs such advice.

Regarding whether it should hold copyright or not, the Center for the Cultivation of Technology is flexible, but at the very beginning it will primarily offer to hold other intangible assets, such as domain names and trade marks; holding and managing copyright is not the top priority in this early phase of its existence. The CCT has therefore deferred, for now, the decision on its position regarding license enforcement and a potential lawsuit strategy. Accounting, budgeting, and handling administrative tasks, as well as automating them all, are clearly where its strengths lie, and this is where it initially wants to pour most of its effort.

Upon dissolution of the company, its assets would fall to the Renewable Freedom Foundation.

Since the founders of CCT have deep roots in anonymity and privacy solutions such as Tor, I imagine the first wave of projects will come from those corners. As for the second wave, it seems to me that CCT would be a great choice for projects that want to offload as much of the financial overhead as possible, especially if they plan to apply for grants and would like help with applying and reporting.

Conclusion

2016 may not have been the year of the Linux desktop, but it surely was the year of FOSS umbrella organisations. It is an odd coincidence that three such different organisations popped up in Europe at the same time – initially oblivious of each other – to provide much-needed services to FOSS projects.

Not only are FOSS projects now spoiled for choice regarding such service providers in Europe, but it is also refreshing to see that these organisations get along so well from the start. For example, Simon Phipps is also an adviser at CCT, and I help with both CCT and TCC.

In fact, instead of bitter competition, I would not be surprised to see greater collaboration between them, allowing each to specialise in what it does best and allowing projects to mix and match services between them. For example, I can see how a project might want to pick TCC to handle its intangible assets and at the same time use CCT to handle its finances. All three organisations have also stated that, should a project contact them that they feel would be better handled by one of the others, they would refer it to that organisation instead.

Since at least the legal and governance documents of CCT and TCC will be available on-line under a free license (CC0-1.0 and CC BY 4.0 respectively), cross-pollination of ideas and even the setting up of new organisations is thereby made easier. It may be early days for these three umbrella organisations, but I am quite optimistic about their usefulness and about their filling in the gaps left open by their older US siblings and by single-project organisations.

Update: TCC’s DRACC are already publicly available on-line.

If a project comes to the conclusion that it might need a legal entity, now is a great time to think about it. At FOSDEM 2017 there was another panel with CCT, TCC, PS CIC, and the SFC, where further questions and comments were addressed.


Disclaimer: At the time of writing, I am working closely with two of the organisations – as the General Counsel of the Center for the Cultivation of Technology, and as co-author of the legal and governance documents (the DRACC) of [The Commons Conservancy]. This article does not constitute the official position of either of the two organisations nor any other I might be affiliated with.

Note: This article first appeared in LWN on 1 February 2017. This here is a slightly modified and updated version of it.


hook out → coming soon: extremely exciting stuff regarding the FLA 2.0

Friday, 17 February 2017

Redux-style middleware with NoFlo

Henri Bergius | 00:00, Friday, 17 February 2017

This post talks about some useful patterns for dataflow architecture in NoFlo web applications. We’re using these concepts to build Flowhub, the flow-based programming IDE.

Flux is an application architecture for web applications published by Facebook back in 2014. It uses a unidirectional data flow heavily inspired by flow-based programming concepts — events are sent from views to a dispatcher, which directs them to the appropriate data stores. The stores modify application state based on these events, and send updated state back to the view.

This structure allows us to reason easily about our application in a way that is reminiscent of functional reactive programming, or more specifically data-flow programming or flow-based programming, where data flows through the application in a single direction — there are no two-way bindings. Application state is maintained only in the stores, allowing the different parts of the application to remain highly decoupled. Where dependencies do occur between stores, they are kept in a strict hierarchy, with synchronous updates managed by the dispatcher.

Given its nature, the Flux pattern is quite easy to implement in NoFlo. Here is an example of a simple web-based TODO list using a Flux-esque NoFlo graph communicating with a React component:

Flux-style dataflow in NoFlo

In the image above you can see the graph in the middle, with the rendered React application on the right, and on the left an edge inspector showing the packets flowing from the view to the dispatcher.

In this example we decided to use bracket IPs to convey the action type. This allows any payload to be sent as the action, and the use of a standard NoFlo router component for packet dispatching.

Problems with Flux in NoFlo

We’ve been following a very similar Flux-like pattern to the example above in Flowhub, a flow-based programming IDE implemented in NoFlo. Over time the flows started becoming messy because:

  • Different stores would need access to different parts of application state
  • Some stores needed to generate their own actions
  • Some actions would need to pass through multiple stores

Since the only way to transmit information between components in NoFlo is to send it as packets along a connection, these interdependencies cause a lot of wiring back and forth. Visual spaghetti code!

To find a better approach, I sat down a couple of months ago with Moritz, a former colleague who has done quite a bit of work with both Flux and Redux. He suggested looking at Redux middleware as a pattern to follow.

Introducing middleware

Redux is a recent refinement on the Flux pattern that has become quite popular. One of the concepts it adds on top of Flux is middleware, something that is more common in server-side programming frameworks like Express:

Redux middleware solves different problems than Express or Koa middleware, but in a conceptually similar way. It provides a third-party extension point between dispatching an action, and the moment it reaches the reducer. People use Redux middleware for logging, crash reporting, talking to an asynchronous API, routing, and more.

Middleware can be chained so that all actions pass through each of them. A middleware that receives an action can either pass it on (maybe logging it on the way), or capture it and send out new actions instead.

Here is what a middleware looks like as a NoFlo component:

Redux-style middleware as NoFlo component

Actions arrive at the in port. If the middleware passes them along, it will send them via the pass port, and if it instead generates new actions, these will be sent via the new port.
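To make this concrete, here is a minimal sketch of what such a middleware component could look like using NoFlo’s process API. The port names (in, pass, new) follow the description above; everything else – the logging behaviour, the forwardBrackets wiring, and the component itself – is an illustrative assumption rather than Flowhub’s actual code.

// Minimal Redux-style middleware sketch for NoFlo (process API).
// It logs every action and passes it along unchanged via the "pass" port.
const noflo = require('noflo');

exports.getComponent = () => {
  const c = new noflo.Component({
    description: 'Logs actions and passes them on',
    inPorts: {
      in: { datatype: 'all' },
    },
    outPorts: {
      pass: { datatype: 'all' },
      new: { datatype: 'all' },
    },
    // Forward the action-type brackets from "in" to "pass" automatically
    forwardBrackets: { in: ['pass'] },
  });
  return c.process((input, output) => {
    if (!input.hasData('in')) { return; }
    const action = input.getData('in');
    // A pass-through middleware: log the action and forward it unchanged
    console.log('action:', action);
    output.sendDone({ pass: action });
    // A capturing middleware would instead emit a replacement action:
    // output.sendDone({ new: someOtherAction });
  });
};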

With this structure, chaining becomes very simple:

Chaining NoFlo middleware

The image above is from the main graph of Flowhub. In Flowhub we have both very simple middleware, like the logger that just writes the event details to the developer console, and more complex ones like the UserMiddleware that deals with user information and OAuth, or RuntimeMiddleware that handles communications with FBP runtimes.

The middleware themselves can be generator components, listening for external events and creating new actions based on them. This way, for example, the UrlMiddleware generates new actions when the application URL changes, allowing the other middleware to then load the appropriate content for that particular screen.

Actions and state

In the Flowhub graph, the actions are sent as NoFlo information packets from the view to the middleware chain. The packet flow of an action looks like the following:

< github
< pull
{
  "payload": {
    "repo": "noflo/noflo-ui"
  },
  "state": {
    // The application state when the action was triggered
  }
}
>
>

The brackets surrounding the action payload tell the action type, in this case github:pull. The data packet contains both the actual action payload, and the application state as it was when the action was triggered.
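As a rough sketch of how a sender could produce such a packet stream with NoFlo IP objects (the sendAction helper, the out port name, and the splitting of the action type into bracket segments are assumptions for illustration, not Flowhub’s actual code):

// Sketch: wrap an action payload plus a state snapshot in bracket IPs.
// Meant to be called from inside a NoFlo process() handler.
const noflo = require('noflo');

function sendAction(output, type, payload, state) {
  const segments = type.split(':'); // e.g. 'github:pull' -> ['github', 'pull']
  segments.forEach((segment) => {
    output.send({ out: new noflo.IP('openBracket', segment) });
  });
  // The data packet carries the payload and the current application state
  output.send({ out: new noflo.IP('data', { payload, state }) });
  segments.forEach(() => {
    output.send({ out: new noflo.IP('closeBracket') });
  });
}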

Including the state object means that each middleware can access the parts of the application state it is interested in while dealing with the action, removing interdependencies between them.

It also means that middleware become super easy to test, as you can send any kind of state/action combinations to exercise different flow paths.

Some of our middleware tests
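For illustration, a test for the hypothetical logging middleware sketched earlier could look roughly like this, assuming NoFlo’s internalSocket helper; the require path and the sample state are placeholders:

// Sketch of a middleware test: attach sockets, send one state/action
// combination, and observe what comes out of the "pass" port.
const noflo = require('noflo');
const middleware = require('./components/LoggingMiddleware'); // placeholder path

const c = middleware.getComponent();
const input = noflo.internalSocket.createSocket();
const pass = noflo.internalSocket.createSocket();
c.inPorts.in.attach(input);
c.outPorts.pass.attach(pass);

pass.on('data', (action) => {
  // Assert on the forwarded action here
  console.log('passed along:', action);
});

// Send an action with the application state snapshot attached
input.send({
  payload: { repo: 'noflo/noflo-ui' },
  state: { runtimes: [] },
});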

Current status

I’ve started introducing middleware quite carefully to the Flowhub code base, so right now we’re running a mix of old-style Fluxified stores and new-style middleware/reducer combinations. The idea is to migrate different parts of the flow to the new pattern subgraph-by-subgraph as we fix bugs and add features.

So far this pattern has felt quite comfortable to work with. It makes testing easier, and fits well with how NoFlo does FBP.

If you’d like to try building something with Redux-style middleware and NoFlo, it is a good idea to take a peek at the middleware graphs in the Flowhub repo. And if you have questions or comments, get in touch!

KDE Applications 17.04 Schedule finalized

TSDgeos' blog | 08:45, Friday, 17 February 2017

It is available at the usual place https://community.kde.org/Schedules/Applications/17.04_Release_Schedule

Dependency freeze is in 4 weeks and Feature Freeze in 5 weeks, so hurry up!

Wednesday, 15 February 2017

Freedom to repair.

tobias_platen's blog | 06:18, Wednesday, 15 February 2017

On the I love Free Software day iFixit posted an article “iFixit Loves Repair”. For me repair is freedom. The freedom to repair is just as important as Stallman’s four freedoms. I think that computers should come with free repair manuals.

Many Apple products are hard or impossible to repair. Repair at certified shops is expensive, but you can repair them yourself if you use tools and manuals from iFixit. I once had a Mac with a mechanical defect; I could not buy replacement parts and did not have a repair manual at the time, but I was able to change the battery without using any tools. With newer hardware such as the iThings, users cannot replace the battery without a special screwdriver for the pentalobe screws. In the Apple world everything is proprietary: the Lightning connector is only found on Apple hardware and is incompatible with standardized USB ports, and there is an authentication chip that implements a hardware DRM. By contrast, the Fairphone uses USB, which has a standardized charging protocol – you just need to add a 200 ohm resistor between the data lines and connect the two power lines to 5 volts. I once built this circuit on a breadboard. Changing the battery is easy; no tools are needed.

Fairphone Fixed

On many laptops it is not easy to replace the hard disk, keyboard, or RAM. But on some ThinkPads supported by libreboot, you only have to remove four screws to replace the keyboard. Flashing libreboot for the first time requires removing more screws, but this only needs to be done once.

Open Thinkpad running libreboot. This little chip here is the keyboard controller

Tuesday, 14 February 2017

The most sincere form of flattery

English – Björn Schießle's Weblog | 10:51, Tuesday, 14 February 2017

Looking for Freedom

CC BY-ND 2.0 by Daniel Lee

Nextcloud has now existed for almost exactly 8 months. During this time we have put a lot of effort into polishing existing features and developing new functionality which is crucial to the success of our users and customers.

As promised, everything we do is Free Software (also called Open Source), licensed under the terms of the GNU AGPLv3. This gives our users and customers the greatest possible flexibility and independence. The ability to use, study, share and improve the software also allows integrating our software into other cloud solutions, as long as you respect the license, and we are happy to see that people actively make use of these rights.

Code appearing in other app stores

We are proud to see that the quality of our software is acknowledged not only by our own users but also by users of other cloud solutions. Recently, more and more of our applications have shown up in the ownCloud App Store – for example the community-driven News app or the Server Info app, developed by the Nextcloud GmbH. Additionally, we have heard that our SAML authentication application is widely considered to be of far better quality than other, even proprietary, alternatives, and is used by customers of our competitors, especially in the educational market. All this is completely fine as long as the combination of the two – our application and the rest of the server – is licensed under the terms of the GNU AGPLv3.

Not suitable for mixing with enterprise versions

While we can’t actively work on keeping our applications compatible with other cloud solutions, we welcome all third-party efforts to do so. There is only one drawback: most of the other cloud solutions out there make a distinction between home users and enterprises at the license level. While home users get the software under a Free Software license compatible with the license of our applications, enterprise customers don’t get the same freedom and independence, and their version is therefore not compatible with the license we have chosen. This means that all the users who use proprietary cloud solutions (often hidden behind the term “Enterprise version”) are not able to legally use our applications. We feel sorry for them, but of course a solution exists – get support from the people who wrote your software rather than from a different company. In general, we would recommend buying support for real Free Software and not just Open Source marketing.

Of course we don’t want to sue people for copyright violation. But Frank chose the AGPL license 7 years ago on purpose, and we want to make sure that the users of our software understand the license and its implications. In a nutshell, the GNU AGPLv3 gives you the right to do with the software whatever you want – and, most importantly, all the entrepreneurial freedom and independence your business needs – as long as the combined work is again licensed under the GNU AGPLv3. By combining GNU AGPLv3 applications with a proprietary server, you violate this rule and thus the terms of the license. I hope that other cloud solutions are aware of this problem, created by their open-core business model, and take some extra steps to protect their customers from violating the license of other companies and individual contributors – for example by performing a license check before an application gets enabled.

Open Core is a bad model for software development

This is one of many problems arising from the use of open-core business models. It puts users at risk if they combine the proprietary part with Free Software; more about this can be read here. That’s why we recommend that all enterprise and home users avoid situations where proprietary and freely licensed software is combined. This is a legal minefield. We at Nextcloud decided to take a clear stance on it: everything is Free Software, and there is only one version of the software for both home users and enterprises. This allows every home user, customer or partner to use all available applications as long as they respect the license.

Thanks to Free Software contributors in the public service

Matthias Kirschner's Web log - fsfe | 07:40, Tuesday, 14 February 2017

Today is another "I love Free Software" day. People all over the world are again thanking Free Software contributors for their work: Activists in Berlin illuminated the Reichstag and other government buildings with messages about Free Software. The German FSFE team made sure that all members of the German parliament receive a letter attached to a flower (German) to remind them about the importance of Free Software, also having in mind the upcoming federal election. While I type there is much more going on, and I hope it will not stop until tomorrow.

Group working at the OGP workshop on the FOSS contributor policy template

I dedicate this year's #ilovefs thank-you to all the dedicated Free Software contributors in the public service all over the world. As a civil servant it is always far more convenient to do what everyone else always does (you might remember the old saying "Nobody gets fired for buying IBM"). Like in big corporations, changing software in the public administration means a lot of work, and the positive outcome will not be seen for many years. Maybe, years from now, nobody will realise that you were the one people should thank.

That is why I deeply admire people contributing to Free Software in public administrations and enabling others to do so. In the long run this will increase their government's digital sovereignty.

A special thanks goes to an awesome working group I met at the Open Government Partnership (OGP) Summit in Paris in December 2016. The group was coordinated by the French government and included people from governments all over the world, as well as people from the Linux Foundation, the FSFE and other Free Software supporters. We worked together on the FOSS contributor policy template. Its mission is to:

  • Define a template Free Software contribution policy that governments can instantiate
  • Increase contributions from civil servants and subcontractors working for governments
  • Help governments interact and work together
  • Propose best practices on engaging with open-source communities and contribute new projects

So thank you to the people who are working on this and thank you very much to all the Free Software supporters in public administrations all over the world!

I love astroid! #ilovefs

English – Max's weblog | 07:30, Tuesday, 14 February 2017

Hugo and me declaring our love to astroid

You cannot imagine how long I’ve waited to write this blog post. Normally I’m not the bragging kind of guy but for this year’s edition of my „I love Free Software“ declaration articles (after 2014, 2015 and 2016) I just want to shout out to the world: I have the world’s best mail client: astroid!

Okay, maybe I’ll add two or three words to explain why I am so grateful to the authors of this awesome Free Software application. Firstly, I should note that until about six months ago I used Thunderbird – extended with lots of add-ons, but still a mail user agent that most of you will know. But with each new email and project it became obvious to me that I had to find a way to organise my tens of thousands of mails in a better way: not folder-based but tag-based, yet not at the expense of overview and comfort.

Thanks to Hugo I became aware of astroid, an application that unites my needs and is open to multiple workflows. Let’s read how astroid describes itself:

Astroid is a lightweight and fast Mail User Agent that provides a graphical interface to searching, display and composing email, organized in thread and tags. Astroid uses the notmuch backend for blazingly fast searches through tons of email. Astroid searches, displays and composes emails – and rely on other programs for fetching, syncing and sending email.

My currently unread and tagged emails

Astroid is roughly 3 years old, is based on sup, and is mainly developed by Gaute Hope, an awesome programmer who encourages people – also non-programmers like me – to engage in the small and friendly community.

Why is astroid so cool?

That’s one secret of astroid: it doesn’t try to catch up with programs that already do certain jobs very well. So astroid relies on external POP/IMAP fetching (e.g. offlineimap), SMTP sending (e.g. msmtp), email indexing (notmuch), and mail editors (e.g. vim, emacs). This way, astroid can concentrate on offering a unique interface that unites many strengths:

Saved searches on the left, a new editor window on the right

  • astroid encourages you to use tabs. Email threads open in a new tab, a newly composed message is a separate tab, as is a search query. You won’t lose any information when you write an email while researching in your archive while keeping an eye on incoming unread mails. If your tab bar becomes too long, just open another astroid instance.
  • It can be used with either keyboard or mouse. Beginners value having a similar experience to mouse-based mail agents like Thunderbird; experts hunt through their mails with the configurable keyboard shortcuts.
  • Tagging of emails is blazingly fast and efficient. You can either tag single mails or whole email threads with certain keywords that you can freely choose. Astroid doesn’t impose a certain tagging scheme on its users.
  • astroid already includes the ability to read HTML or GPG-encrypted emails. No need to create a demotivatingly huge configuration file as with mutt.
  • Theming your personal astroid is easy. The templates can be configured using HTML and CSS syntax.
  • It is expandable by Python and lua plugins.
  • It’s incredibly fast! Thunderbird or Evolution users will never have to bother with 20+ seconds startup time anymore. Efficiency hooray!

    On startup, I see my saved search queries

Because it is open to any workflow, you can also easily use astroid with rather uncommon workflows. I personally use a mix of folder- and tag-based sorting. My mail server automatically moves incoming mails to certain folders (mostly based on mailing lists), which is important to me because I often use my mobile phone, which doesn’t include a tag-based email client either. But with my laptop I can add additional tags or tag unsorted mails. Based on these tags, I again sort these mails into certain folders to reduce the amount of mail lying around in my unsorted inbox. Such a strange setup would have been impossible with many other email agents, but with astroid (almost) everything is possible.

Did I convince you? Well, certainly not. Switching one’s email client is a huge step because for most people it involves changing the way most of their digital communication happens. But hopefully I convinced you to have a look at astroid and think about whether this awesome client may fulfill some of your demands better than your existing one. If you already use notmuch, a local SMTP setup, offlineimap, procmail or other required parts, testing astroid will be very easy for you. And if your path to astroid is longer, as mine was, feel free to ask me or the helpful community.

PS: FSFE activists in Berlin carried out two awesome activities for ILoveFS!

Free Software Ergo Sum

English Planet – Dreierlei | 00:01, Tuesday, 14 February 2017

Today is “I love Free Software”-day and I made some graphics to say thank you to everyone in and around Free Software. All pictures CC0.

If René Descartes were to think about the secure foundation of knowledge in digital communication, he would have to find it in Free Software: “Free Software Ergo Sum”.

Oil on Canvas:
René Descartes (1596 – 1650)

Sticker:

Thinking about one of those wise sayings by Confucius, he would probably come up with: “The perfect tool? When you are free to use, study, share and improve it to your needs.”

Confucius

Recently in Athens:

La scuola di Atene

After all these deep thoughts, some popcorn:

Your author:

#ilovefs

All pictures CC0, you are welcome to reuse them and I appreciate when you send me a link.


Sunday, 12 February 2017

Mobile-ish devices as freedom respecting working environments

Elena ``of Valhalla'' | 10:05, Sunday, 12 February 2017


On Planet FSFE, a conversation is starting about using tablets / Android as the main working platform.

It started with the article by Henri Bergius http://bergie.iki.fi/blog/working-on-android-2017/ which nicely covers all practical points, but is quite light on the issues of freedom.

This was rectified by the article by David Boddie http://www.boddie.org.uk/david/www-repo/Personal/Updates/2017/2017-02-11.html which makes an apt comparison of Android to “the platform it is replacing in many areas of work and life: Microsoft Windows” and criticises its lack of effective freedom, even when the OS was supposed to be under a free license.

I fully agree that lightweight/low-powered hardware can be an excellent work environment, especially when on the go, and even for many kinds of software development, but I'd much rather have that hardware run an environment that I can trust, like Debian (or another traditional GNU/Linux distribution), rather than the phone-based ones where, among other problems, there is no clear distinction between what is local and trustable and what is remote and under somebody else's control.

In theory, it would be perfectly possible to run Debian on most tablet and tablet-like hardware, and have such an environment; in practice this is hard for a number of reasons including the lack of mainline kernel support for most hardware and the way actually booting a different OS on it usually ranges from the quite hard to the downright impossible.

Luckily, there is some niche hardware that uses tablet/phone SoCs but is sold with a GNU/Linux distribution and can be used as a freedom respecting work environment on-the-go: my current setup includes an OpenPandora https://en.wikipedia.org/wiki/Pandora_(console) (running Angstrom + a Debian chroot) and an Efika MX Smartbook https://en.wikipedia.org/wiki/Efika, but they are both showing their age badly: they have little RAM (especially the Pandora), and they aren't fully supported by a mainline kernel, which means that you're stuck on an old kernel and dependent on the producer for updates (which for the Efika ended quite early; at least the Pandora is still somewhat supported, at least for bugfixes).

Right now I'm looking forward to two devices as a replacement: the DragonBox Pyra https://en.wikipedia.org/wiki/DragonBox_Pyra (still under preorders) and the THERES-I laptop kit https://www.olimex.com/Products/DIY%20Laptop/ (hopefully available for sale "in a few months", and with no current mainline support for the SoC, but there is hope to see it from the sunxi community http://linux-sunxi.org/Main_Page).

As for software, the laptop/clamshell design means that using a regular Desktop Environment (or, in my case, Window Manager) works just fine; I do hope that the availability of the Pyra (with its touchscreen and 4G/"phone" chip) will help to give a bit of life back to the efforts to improve mobile software on Debian https://wiki.debian.org/Mobile

Hopefully, more such devices will continue to be available, and also hopefully the trend for more openness of the hardware itself will continue; sadly I don't see this getting outside of a niche market in the next few years, but I think that this niche will remain strong enough to be sustainable.

P.S. from nitpicker-me: David Boddie mentions the ability to easily download sources for any component with apt-get source: the big difference IMHO is made by apt-get build-dep, which also installs every dependency needed to actually build the code you have just downloaded.

P.S.2: I also agree with David Boddie that supporting Conservancy https://sfconservancy.org/supporter/ is very important, and there are still a few hours left to have the contribution count twice.

Saturday, 11 February 2017

Platform Studies

David Boddie - Updates (Full Articles) | 17:49, Saturday, 11 February 2017

In reaching for a catchy title for this article, I chose one that references a particular series of books. This was not completely unintentional since I was already aware that one of the blogs I follow is written by an author of a book in that series. While my reflections are not really about how computing platforms influence the creative work users do on them, perhaps the article that inspired this one can be classified as a platform study of sorts.

Android – Baseline Computing for the 2010s

I was interested to read a recent article by Henri Bergius about his experiences using Android as a work platform. For many users, Android has become a practical alternative for many kinds of work and provides a simple and familiar user interface for non-technical users. It is now also very common, becoming the mainstream computing platform for anyone not tied to legacy applications running on a grey box in some office somewhere. It is at this point we can compare it to the platform it is replacing in many areas of work and life: Microsoft Windows.

In terms of its ubiquity, Android certainly seems to be the new Windows, but it's also worth thinking about whether it is the heir to the Windows dynasty in other ways. A lot could be said about whether Google controls the Android platform as strictly as Microsoft controls Windows, and whether the manufacturers of Android hardware are beholden to Google in the same way that PC vendors were to Microsoft back in the 1990s. However, for technical users, particularly those from a Free Software background, another interesting question arises: how Free/Libre or even Open is the software running on your phone or tablet? Is there even a way to check?

In principle, the software running on your phone is Open in the sense that the kernel is Free Software – or it was when the vendor or one of their contractors downloaded a kernel from kernel.org or elsewhere – and many of the user-space components running on it were originally supplied under Free Software licenses. However, while it may be possible to download the source code for a kernel and, if you're lucky, some other components from the vendor's website, there's no guarantee that what you get is the same as what was supplied on the device itself. Actually building and installing a replacement kernel, operating system or other components may be an insurmountable struggle, particularly given the scale of the software stack being used and the locked-down nature of many devices.

The amount of effort required to verify and replace part or all of a software stack may be given as an excuse for why users should not need to do it. Why bother the users with this, making the product complicated for vendors to support, if they will need a build farm to realistically have a chance of updating their phones themselves? But this is a symptom of the monolithic way that Android is deployed – a limitation on freedom caused by technical shortcomings in the way a product is delivered, or perhaps just the result of making it convenient for vendors to put a product together quickly. Couldn't we have a system that was designed with Software Freedom in mind, rather than something we have to chase up afterwards?

Freedom by Increments

One thing that's missing from Android is the ability to easily get the source code of the programs running on a device. If you use F-Droid (ideally on Replicant instead of Android) you can usually find a convenient link to the source code of an application, typically in a repository, but that's about the limit of the openness and transparency you can expect. It certainly reminds me of what it was like to use a closed, proprietary operating system where freedom is a third party add-on.

On the Debian operating system it is possible and convenient to get the source of practically every running component simply by opening a console and typing apt-get source for whatever it is I want to look at. Other GNU/Linux distributions offer similar tools. The Sugar learning environment goes a step further, allowing the user to view the source code of the activity they are running. Of course, it helps that many of the Sugar activities are written in the Python language. Ironically, there are applications that provide Debian environments for Android devices, adding a layer of freedom to closed devices but failing to address the underlying limitations of the system they run on.

So, while Android is a step forward for many users, it is a step backward for those of us who have come to expect a level of freedom and transparency in our computing platforms. Android is seen by many as a victory for Open Source software since now Linux, the kernel, is everywhere, but this doesn't automatically translate to a victory for Software Freedom. It doesn't matter much if the new mainstream computing platform is in some sense more open than the old one if it doesn't give users more freedom in a practical sense.

I didn't start this article with this in mind but, on a related note, I should mention that Software Freedom Conservancy is still looking for supporters to help fund their efforts. I suppose this does tie in nicely with some of the themes of the article, and it also turns out that Sugar Labs are a Conservancy member project. I renewed my support for Conservancy at the end of last year because I think what they do is worthwhile, greatly needed and, unfortunately, often under-appreciated.

Categories: Free Software, Android

Friday, 10 February 2017

Working on an Android tablet, 2017 edition

Henri Bergius | 00:00, Friday, 10 February 2017

Back in 2013 I was working exclusively on an Android tablet. Then with the NoFlo Kickstarter I needed a device with a desktop browser. What followed were brief periods working on a Chromebook, on a 12” MacBook, and even an iPad Pro.

But from April 2016 onwards I’ve been again working with an Android device. Some people have asked me about my setup, and so here is an update.

Information technology

Why work on a tablet?

When I started on this path in 2013, using a tablet for “real work” was considered crazy. While every story on tablet productivity still brings out the people claiming it is not a real computer for real work, using tablets for real work is becoming more and more common.

A big contributor to this has been the plethora of work-oriented tablets and convertibles released since then. Microsoft’s popular Surface Pro line brought the PC to tablet form factor, and Apple’s iPad Pro devices gave the iPad a keyboard.

Here are a couple of great posts talking about how it feels to work on an iPad:

With all the activity going on, one could claim using a tablet for work has been normalized. But why work on a tablet instead of a “real computer”? Here are some reasons, at least for me:

Free of legacy cruft

Desktop operating systems have become clunky. Window management. File management. Multiple ways to discover, install, and uninstall applications. Broken notification mechanisms.

With a tablet you can bypass pretty much all of that, and jump into a simpler, cleaner interface designed for the modern connected world.

I think this is also the reason driving some developers back to Linux and tiling window managers — cutting manual tweaking and staying focused.

Amazing endurance

Admittedly, laptop battery life has increased a lot since 2013. But with some manufacturers using this as an excuse to ship thinner devices, tablets still win the endurance game.

With my current work tablet, I’m customarily getting 12 or more hours of usage. This means I can power through the typical long days of a startup founder without having to plug in. And when traveling, I really don’t have to care where power sockets are located on trains, airplanes, and conference centers.

Low power usage also means that I can get a lot more runtime by utilizing the mobile battery pack I originally bought to use with my phone. While I’ve never actually had to try this, back-of-the-envelope math claims I should be able to get a full workweek from the combo without plugging in.

Work and play

The other aspect of using a tablet is that it becomes a very nice content consumption device after I’m done working. Simply disconnect the keyboard and lean back, and the same device you used for writing software becomes a great e-reader, video player, or a gaming machine.

Livestreaming a SpaceX launch

This, combined with the battery life, has meant that I’ve actually stopped carrying a Kindle with me. While an e-ink screen is still nicer to read, not needing an extra device has its benefits, especially for a frequent one-bag traveller.

The setup

I’m writing this on a Pixel C, a 10.2” Android tablet made by Google. I got the device last spring when there were developer discounts available at ramp-up to the Android 7 release, and have been using it full-time since.

Software

My Android homescreen

Surprisingly little has changed in my software use since 2013 — I still spend most of my time writing software in either Flowhub or a terminal. Here are the apps I use on a daily basis:

Looking back to the situation in early 2013, the biggest change is that Slack has pretty much killed work email.

Termux is a new app that has done a lot to improve the local development situation. By starting the app you get a very nice Linux chroot environment where a lot of software is only a quick apt install away.

Since much of my non-Flowhub work is done in tmux and vim, I get exactly the same working environment on both the local chroot and cloud machines by simply installing my dotfiles on each of them.
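For illustration, bootstrapping such a Termux environment boils down to a handful of commands. This is only a sketch: the dotfiles repository URL is a placeholder, and your file layout will differ.

apt update && apt install git tmux vim openssh
git clone https://example.com/your/dotfiles.git ~/dotfiles
ln -s ~/dotfiles/.tmux.conf ~/.tmux.conf
ln -s ~/dotfiles/.vimrc ~/.vimrc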

Keyboard

Laptop tablet

When I’m on the road I’m using the Pixel C keyboard. This doubles as a screen protector, and provides a reasonable laptop-like typing environment. It attaches to the tablet with very strong magnets and allows a good amount of flexibility on the screen angles.

However, when stationary, no laptop keyboard compares to a real mechanical keyboard. When I’m in the office I use a Filco MiniLa Air, a bluetooth keyboard with quiet-ish Cherry MX brown switches.

Desktop tablet

This tenkeyless (60%) keyboard is extremely comfortable to type on. However, the sturdy metal case means that it is a little too big and heavy to carry on a daily basis.

In practice I’ve only taken it with me on longer trips where I knew I’d be doing a lot of typing. To solve this, I’m actually looking to build a more compact custom mechanical keyboard so I could always have it with me.

Comparison with iOS

So, why work on Android instead of getting an iPad Pro? I’ve actually worked on both, and here are my reasons:

  • Communication between apps: while iOS has extensions now, the ability to send data from one app to another is still hit-or-miss. Android has had intents from day one, meaning pretty much any app can talk to any other app
  • Standard charging: all of my other devices charge with the same USB-C chargers and cables. iPads still use the proprietary Lightning plug, requiring custom dongles for everything
  • Standard accessories: this boils down to USB-C just like charging. With Android I can plug in a network adapter or even a mouse, and it’ll just work
  • Ecosystem lock-in: we’re moving to a world where everything — from household electronics to cars — is either locked to the Apple ecosystem or following standards. I don’t want to be locked to a single vendor for everything digital
  • Browser choice: with iOS you only get one web renderer, the rather dated Safari. On Android I can choose between Chrome, Firefox, or any other browser that has been ported to the platform

Of course, iOS has its own benefits. Apple has a stronger stance on privacy than Google. And there is more well-made tablet software available for iPads than for Android. But when almost everything I use is available on the web, this doesn’t matter that much.

The future

Hacking on the c-base patio

As a software developer working on Android tablets, the weakest point of the platform is still that there are no browser developer tools available. This was a problem in 2013, and it is still a problem now.

From my conversations with some Chrome developers, it seems Google has very little interest in addressing this. However, there is a bright spot: the new breed of convertible Chromebooks being released now, which can also run Android apps.

Chrome OS is another clean, legacy free, modern computing interface. With these new devices you get the combination of a full desktop browser and the ability to run all Android tablet software.

The Samsung Chromebook Pro/Plus mentioned above is definitely interesting: a high-res 12” screen and a digital pen, which I see as very promising for visual programming purposes.

However, given that I already have a great mechanical keyboard, I’d love a device that shipped without an attached keyboard. We’ll see what kinds of devices come out later this year.

Friday, 03 February 2017

Proud to be here

free software - Bits of Freedom | 21:04, Friday, 03 February 2017

I'm often very proud of the work we do in the FSFE but at no time do I feel as strongly about this as I do during the FOSDEM weekend. While the weekend has yet to start, the FSFE teams have taken to Brussels with a range of activities.

For myself, the morning started with a meeting with our colleagues at Openforum Europe followed by what is becoming a joint tradition: a pre-FOSDEM European Software Freedom Policy Meeting. Or just Esfpm as I'd like to call it (Mirko calls it the PFESFPM but I think he's wrong).

This year I had the pleasure of moderating this animated group of software freedom and open standards advocates and officials. We had people participating from at least Estonia, Portugal, Greece, the United Kingdom, Italy, Belgium, Germany, France and Sweden.

While we were wrapping up the meeting, our booth crew transported large numbers of boxes from our volunteer Maël, unpacked them and set up our exhibition booth at the ULB, where FOSDEM is traditionally held. I imagine them being knee deep in "There is no cloud" t-shirts as I headed to the next meeting happening simultaneously: a European Coordinators Meeting.


And, to top it off, while all of this was happening, parts of our legal team were at a different workshop and meeting during the day and our system administrator Albert was at the traditional pre-FOSDEM PostgreSQL day.

And tomorrow -- tomorrow! -- is when the real fun starts.

Wednesday, 01 February 2017

Going to FOSDEM, Brussels this weekend

DanielPocock.com - fsfe | 09:07, Wednesday, 01 February 2017

This weekend I'm going to FOSDEM, one of the largest gatherings of free software developers in the world. It is an extraordinary event, also preceded by the XSF / XMPP Summit.

For those who haven't been to FOSDEM before and haven't yet made travel plans, it is not too late. FOSDEM is a free event and no registration is required. Many Brussels hotels don't get a lot of bookings on weekends during the winter so there are plenty of last minute offers available, often cheaper than what is available on AirBNB. I was speaking to somebody in London on Sunday who commutes through St Pancras (the Eurostar terminal) every day and didn't realize it goes to Brussels and only takes 2 hours to get there. One year I booked a mini-van at the last minute and made the drive from the UK with a stop in Lille for dinner on the way back, for 5 people that was a lot cheaper than the train. In other years I've taken trains from Switzerland through Paris or Luxembourg.

Real-time Communication (RTC) dev-room on Saturday, 4 February

On Saturday, we have a series of 23 talks about RTC topics in the RTC dev-room, including SIP, XMPP, WebRTC, peer-to-peer (with Ring) and presentations from previous GSoC students and developers coming from far and wide.

The possibilities of RTC with free software will also be demonstrated and discussed at the RTC lounge in the K building, near the dev-room, over both Saturday and Sunday. Please come and say hello.

Please subscribe to the Free-RTC-Announce mailing list for important announcements on the RTC theme, and join the Free-RTC discussion list if you have any questions about the activities at FOSDEM, dinners for RTC developers on Saturday night, or RTC in general.

Software Defined Radio (SDR) and the Debian Hams project

At 11:30 on Saturday I'll be over at the SDR dev-room to meet other developers of SDR projects such as GNU Radio and give a brief talk about the Debian Hams project and the relationship between our diverse communities. Debian Hams (also on the Debian Ham wiki) provides a ready-to-run solution for ham radio and SDR is just one of its many capabilities.

If you've ever wondered about trying the RTL-SDR dongle or similar projects, Debian Hams provides a great way to get started quickly.

I've previously given talks on this topic at the Vienna and Cambridge mini-DebConfs (video).

Ham Radio (also known as amateur radio) offers the possibility to gain exposure to every aspect of technology from the physical antennas and power systems through to software for a range of analog and digital communications purposes. Ham Radio and the huge community around it is a great fit with the principles and philosophy of free software development. In a world where hardware vendors are constantly exploring ways to limit their users with closed and proprietary architectures, such as DRM, a broad-based awareness of the entire technology stack empowers society to remain in control of the technology we are increasingly coming to depend on in our every day lives.

Tuesday, 31 January 2017

ACLU

Bits from the Basement | 22:42, Tuesday, 31 January 2017

When I was younger, and worked in an "old HP" test and measurement division, I sometimes sat at lunch in the cafeteria with a group of older co-workers who I grew to have immense respect for. They told great stories. I learned a lot of practical electronics from them... and other things too.

Each carried on their person a copy of the US Constitution and Bill of Rights, and most also had a "concealed carry" permit, which they would refer to as their "redneck license". I quickly learned that they weren't all gun fanatics... at that time, the vetting process for such a permit was a bit daunting, and having one was their way of "proving" that they were honest, law-abiding citizens. Citizens who knew their rights. Who enjoyed debating boundary conditions in those rights inspired by current events at the lunch table. I miss those guys and those conversations.

I mention this because it's one of those things that I realize now had a significant formative impact on my adult values and world view. Freedom matters. That's why, despite my long-standing appreciation for and support of the organization's activities, I'm embarrassed to admit that it wasn't until this week that I personally joined the American Civil Liberties Union and sent them a donation.

Sunday, 29 January 2017

Preseeding a debian installation on a libreboot computer

Elena ``of Valhalla'' | 10:30, Sunday, 29 January 2017

Preseeding a debian installation on a libreboot computer

Preseeding a Debian installation from the standard installer is as easy as pressing ESC at the right time and pointing it to the URL of your preseed file https://wiki.debian.org/DebianInstaller/Preseed#Loading_the_preseeding_file_from_a_webserver, right?

It is, except when you're using libreboot, and you never pass through that “right time”, because you are skipping the installer's grub.

So, for future reference, here is the right incantation to use at the command line that you get by pressing c at the libreboot menu:


linux (usb0)/install.amd/vmlinuz auto=true url=http://webserver/path/preseed.cfg
initrd (usb0)/install.amd/initrd.gz
boot


simple, once you've found it...

(ok, it took me less than one hour, but I don't want it to take another hour the next time)
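For the record, the preseed.cfg itself is just the usual debian-installer answers file; a fragment might look like the following (values are purely illustrative, see the wiki page linked above for the full reference):

d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/xkb-keymap select us
d-i mirror/http/hostname string deb.debian.org
d-i mirror/http/directory string /debian
d-i passwd/make-user boolean true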

#coreboot #libreboot #debian #preseed

Thursday, 26 January 2017

Worked for us: Thank you 33C3

English Planet – Dreierlei | 21:30, Thursday, 26 January 2017

Summary: A report of the FSFE assembly and activities during the 33rd edition of the Chaos Communication Congress (CCC), in short “33C3”. It is mainly a visual report, built around some pictures.

I am happy to see our assembly growing every year, and having the possibility to bring our message of Software Freedom to the people at the Chaos Communication Congress (CCC) is priceless. The CCC is Germany’s biggest annual meetup of hackers and political activists who share knowledge concerning the most burning issues on the Internet, like data retention and data leeches, hate speech, whistleblowing or space travel.

What started a few years ago with a single table, some leaflets, Dominic, Eike and me has now grown into an assembly with 12 members and 21 sessions in three days: being the host for like-minded organisations, hosting noGame, and offering workshops, workspace, get-togethers, Free-Software-Song sing-along sessions …

But, as promised in the summary, I will let some pictures speak from here on. If you are interested in more information about our sessions, people and content, get it at the FSFE assembly’s 33c3-wiki-page.

FSFE’s assembly:

  • CCH turned into CCC
  • The FSFE assembly in its mobile version … thanks to the great interest, we happily only had to bring one carton back
  • Made it work: Olga Gkotsopoulou, our head of booth
  • Version 0.9 of the assembly
  • noGame at the FSFE assembly
  • One of our Free Software song sing-along sessions with conductor, flutes, guitar … and love
  • No party without an I love Free Software balloon

Our diverse sessions:

  • Max Mehl about “Routerzwang und Funkabschottung – und was Aktivisten davon lernen können” (roughly: compulsory routers and radio lockdown – and what activists can learn from it) in the main program of 33C3
  • Greta Doçi and Daniel Kinzler fill up a room with their session about Wikidata query and visualization
  • A crowded room during “What makes a secure mobile messenger?” by Hannes Hauswedell
  • Eileen Wagner and Elisa Lindinger, from the Open Knowledge Foundation DE, presenting the Prototype Fund that offers 1.2 million euros for Free Software projects
  • Volker Birk explains privacy by default with pretty Easy privacy (p≡p)
  • Katharina Nocun gives an outlook on FSFE’s new campaign “Public Money Public Code” during Arne Semsrott’s FragDenStaat session
  • Hanno Böck, freelance journalist, discusses “Are decentralized services unable to innovate?” with a packed room
  • Sam Tuke, CEO of phpList, explains methods and best practices for doing business with Free Software
  • Greta Doçi and Jan-Christoph Borchardt in their session about gender diversity in the Free Software community
  • Chris Schabesberger introduces NewPipe, a free YouTube/streaming app for Android


Tuesday, 24 January 2017

Working with free software

agger's Free Software blog | 21:09, Tuesday, 24 January 2017

In 2012, I founded an FSFE local group in Aarhus. The intention was clear: I wanted to create a forum in Denmark for communicating politically about free software. There was and is a dire need for this – in a day and age where computers and computing become ever more pervasive, it is beginning to seem ridiculous that anyone can leave secondary school without at least a notion of the meaning of the GPL.

We got off to a good start with some quite successful meetings. However, in the course of 2014 and our campaign against the unitary patent and the EU patent court, I noticed myself becoming tired – and I realized that the group had still not accumulated enough momentum that the meetings would continue without me to drive the work. As a result we more or less folded in the course of 2015, with that year’s LibreOffice conference as the group’s final effort and call to arms.

So what happened? Well, for one thing, as a result of my interest for Bricolabs and the Dyne project I became involved in the Brazilian-based technoshamanism network, in the end co-organizing the second international festival in November. Obviously, all of that took its toll on my spare time.

However, that was not the most important reason. The most important reason is my day job. In my day job I work with free software – all the time. Specifically, I work as a free software developer, so my working hours are spent either programming new free software, fixing bugs or discussing the technical architecture of future projects. This is not, of course, the same thing as working politically to increase people’s understanding of the necessity of software freedom, but it’s close enough to make it difficult, at least for me, to dedicate large swathes of my spare time to software also – after all, there are other things in life. To boot, I’m also involved as a volunteer programmer in the Baobáxia project, and my activity in that project definitely also suffers from my day job.

In a way, my present day job is a realization of the dream I had when I first realized the importance of software freedom, namely one day to be able to sustain myself by creating software under free licenses only – and since my company is an increasingly important supplier to the Danish public sector we are, as a matter of fact, furthering the cause of software freedom, though from a professional and commercial angle – supplying actual software – rather than from a political and philosophical one. Which is my reason for writing this and future posts about our work in free software: To share a bit about how free software and software freedom actually play out in a real-world setting.

The first thing I’d like to make clear is that when you’re selling free software to a customer, you’re not really selling “free software” and definitely not selling software freedom – you’re selling software. That’s not to say that the customer doesn’t know that your software is free and doesn’t care, but it is to say that the customer is working in an organization that needs some work done – a functioning web site, a dictionary with adequate performance, a well-designed web app – and will normally focus a lot more on getting something that works than on the license conditions. If they understand software freedom, they may err on the side of getting the “open source” solution, but if a proprietary vendor is significantly better and cheaper than you, you’re probably out.

Secondly, that means that working with creating free software for actual customers is very much about delving into topics that are specific to your customer’s domain. What functions on the audience PCs does the librarian need to be able to control from the GNU/Linux remote admin system that we wrote? How are the mindbogglingly complicated standards behind the Danish government’s standardized data services to be interpreted, and how much domain knowledge do we need in order to understand the customer’s demands? And so on …

In this and future posts, I’d like to tell a bit about how all of this plays out in our daily work at Magenta, the company that I work for. Magenta is, with its approximately 20 employees, the largest company in Denmark which is completely specialized in delivering “open source” software. The company is, as will be understood, not rigidly “free software” oriented, but its mission statement does say that its purpose is “to deliver open source software”, with “open source software” being defined as such software that is under an OSI-approved license. This basically means that our company is unable to deliver software under a non-free license to anyone. In reality, all of our software is released to the client under the GPL, the LGPL or the Mozilla Public License. As I said, in future posts I will try to share what it means to work with and deliver free software under these conditions, and what it means and doesn’t mean for the prospects of software freedom in Denmark.

Should people with serious mental illness be able to run for President?

Wunderbar Emporium | 19:08, Tuesday, 24 January 2017

Most of you know that I usually do not blog about political issues, but this one is serious. With Donald Trump, America is now faced with a president who has a serious mental illness called narcissistic personality disorder (NPD). I have been working and living together with people with all kinds of mental illnesses and I know how hard this is if you are in the position of a colleague, friend or partner. It might eat up all your energy, and this also happened to me. NPD might not be the worst one, but it is arguably the most dangerous when someone like this is in a leading position.

N Ziehl has already written a good blog post on this specific disorder. You can find it here. In my opinion it’s definitely a must-read.

Monday, 23 January 2017

OMEMO

vanitasvitae's blog » englisch | 16:18, Monday, 23 January 2017

Recently there was a lot of news coverage of an alleged „backdoor“ in WhatsApp, the proprietary messaging application owned by Facebook. WhatsApp deployed OpenWhisperSystems’ Signal protocol roughly a year ago. Now a researcher has shown that WhatsApp’s servers are able to register a new device key for a user, so that messages the user has not yet read (the ones with only one checkmark) are re-encrypted for the new key and can thus be read by WhatsApp (or whoever registered the key). There were a lot of discussions about whether this is a security flaw or careful design.

I also read a lot of articles suggesting alternatives to WhatsApp. Often mentioned was of course Signal, a free open source messenger by OpenWhisperSystems, the creators of the Signal-protocol, which does not suffer from WhatsApps “vulnerability”. Both WhatsApp and Signal share one major flaw: Both create a “walled garden” for their users. That means that you can only write WhatsApp messages to other WhatsApp users. Same goes for Signal. Since Signal depends on proprietary Google libraries, it cannot be used on mobile phones without Google Play services.

Every now and then the news mention another alternative, the XMPP network.

Conversations is a free libre XMPP client for Android, which introduced the OMEMO protocol for end-to-end encryption roughly two years ago. OMEMO is basically the Signal protocol adapted to XMPP. Since there are many different XMPP servers that can be used with many different clients, the user has a choice of which software they want to use to communicate with their friends. The issue is that not many clients support OMEMO yet. So which clients can do OMEMO at the moment?

For Android there is Conversations, of course, and very recently ChatSecure for iOS was released in version 4, which brought OMEMO support. So it looks good on the mobile front (sorry, Windows Phone).

For the desktop there is Gajim, an XMPP client written in Python, which offers OMEMO support as a plugin. This works well on Linux and Windows. I admit this is not a lot compared to OTR or GPG – but wait, there is more ;)

Currently I am writing my bachelor’s thesis about the OMEMO protocol. As part of this, I am working on a Smack module that will hopefully enable messenger apps based on the Smack library (e.g. Xabber, Zom, Jitsi, Kontalk …) to encrypt messages with OMEMO.

Simultaneously, another student is developing a Pidgin plugin, and yet another is implementing OMEMO for the console-based XMPP client Profanity. You can find a quick overview of the state of OMEMO deployment on https://omemo.top.

Vanitasvitae

Sunday, 22 January 2017

New pajama

Elena ``of Valhalla'' | 15:43, Sunday, 22 January 2017

New pajama

I may have been sewing myself a new pajama.

Image/photo: http://social.gl-como.it/photos/valhalla/image/81b600789aa02a91fdf62f54a71b1ba0

It was plagued with issues: one of the sleeves is wrong side out and I only realized it when everything was almost done (luckily the pattern is symmetric and it is barely noticeable), the swirl moved while I was sewing it on (and the sewing machine got stuck multiple times: next time I'm using interfacing, full stop), and it's a bit deformed, but it's done.

For the swirl, I used Inkscape to Simplify (Ctrl-L) the original Debian swirl a few times, removed the isolated bits, adjusted some spline nodes by hand and printed it on paper. I then cut it out, used water-soluble glue to attach it to the wrong side of a scrap of red fabric, cut the fabric, removed the paper and then pinned and sewed the fabric onto the pajama top.
As mentioned above, the next time I'm doing something like this, some interfacing will be involved somewhere, to keep me sane and the sewing machine happy.

Blogging, because it is somewhat relevant to Free Software :) and there are even sources https://www.trueelena.org/clothing/projects/pajamas_set.html#downloads, under a DFSG-Free license :)

Thursday, 19 January 2017

Which movie most accurately forecasts the Trump presidency?

DanielPocock.com - fsfe | 19:31, Thursday, 19 January 2017

Many people have been scratching their heads wondering what the new US president will really do and what he really stands for. His alternating positions on abortion, for example, suggest he may simply be telling people what he thinks is most likely to win public support from one day to the next. Will he really waste billions of dollars building a wall? Will Muslims really be banned from the US?

As it turns out, several movies provide thought-provoking insight into what could eventuate. What's more, two of them bear a creepy resemblance to the Trump phenomenon and many of the problems in the world today.

Countdown to Looking Glass

On the classic cold war theme of nuclear annihilation, Countdown to Looking Glass is probably far more scary to watch on Trump eve than in the era when it was made. Released in 1984, the movie follows a series of international crises that have all come to pass: the assassination of a US ambassador in the middle east, a banking crisis and two superpowers in an escalating conflict over territory. The movie even picked a young Republican congressman for a cameo role: he subsequently went on to become speaker of the house. To relate it to modern times, you may need to imagine it is China, not Russia, who is the adversary but then you probably won't be able to sleep after watching it.

cleaning out the swamp?

The Omen

Another classic is The Omen. The star of this series of four horror movies, Damien Thorn, appears to have a history that is eerily reminiscent of Trump's: he is born into a wealthy family, a series of disasters befalls every honest person he comes into contact with, and he comes to control a vast business empire acquired by inheritance. As he enters the world of politics in the third movie of the series, there is a scene in the Oval Office where he is flippantly advised that he shouldn't lose any sleep over any conflict of interest arising from his business holdings. Did you notice that Damien Thorn and Donald Trump even share the same initials, DT?

Friday, 13 January 2017

Modern XMPP Server

Elena ``of Valhalla'' | 12:59, Friday, 13 January 2017

Modern XMPP Server

I've published a new HOWTO on my website 'http://www.trueelena.org/computers/howto/modern_xmpp_server.html':

Enrico Zini already wrote about the Why (and the What, Who and When) at http://www.enricozini.org/blog/2017/debian/modern-and-secure-instant-messaging/, so I'll just quote his conclusion and move on to the How.

I now have an XMPP setup which has all the features of the recent fancy chat systems, and on top of that it runs, client and server, on Free Software, which can be audited, it is federated and I can self-host my own server in my own VPS if I want to, with packages supported in Debian.


How



I've decided to install https://prosody.im/, mostly because it was recommended by the RTC QuickStart Guide http://rtcquickstart.org/; I've heard that similar results can be reached with https://www.ejabberd.im/ and other servers.

I'm also targeting https://www.debian.org/ stable (+ backports); as I write this, that is jessie; if there are significant differences I will update this article when I upgrade my server to stretch. Right now, this means that I'm using prosody 0.9 (and that's probably also the version that will be available in stretch).

Installation and prerequisites



You will need to enable the https://backports.debian.org/ repository and then install the packages prosody and prosody-modules.

You also need to set up some TLS certificates (I used Let's Encrypt https://letsencrypt.org/) and make them readable by the prosody user; see Chapter 12 of the RTC QuickStart Guide http://rtcquickstart.org/guide/multi/xmpp-server-prosody.html for more details.
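One possible way of doing that, assuming Let's Encrypt certificates for example.org and the file names used in the virtualhost example below:

mkdir -p /etc/ssl/public
cp /etc/letsencrypt/live/example.org/privkey.pem /etc/ssl/private/example.org-key.pem
cp /etc/letsencrypt/live/example.org/fullchain.pem /etc/ssl/public/example.org.pem
chown root:prosody /etc/ssl/private/example.org-key.pem
chmod 0640 /etc/ssl/private/example.org-key.pem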

On your firewall, you'll need to open the following TCP ports:


  • 5222 (client2server)

  • 5269 (server2server)

  • 5280 (default http port for prosody)

  • 5281 (default https port for prosody)



The latter two are needed to enable some services provided via http(s), including rich media transfers.
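If the firewall happens to be managed with ufw (just one option; the same ports apply whatever tool you use), that translates to:

ufw allow 5222/tcp
ufw allow 5269/tcp
ufw allow 5280/tcp
ufw allow 5281/tcp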

With just a handful of users, I didn't bother to configure LDAP or anything else, but just created users manually via:

prosodyctl adduser alice@example.org

In-band registration is disabled by default (and I've left it that way, to prevent my server from being used to send spim https://en.wikipedia.org/wiki/Messaging_spam).

prosody configuration



You can then start configuring prosody by editing /etc/prosody/prosody.cfg.lua and changing a few values from the distribution defaults.

First of all, enforce the use of encryption and certificate checking both for client2server and server2server communications with:


c2s_require_encryption = true
s2s_secure_auth = true



and then, sadly, add to the whitelist any server that you want to talk to but that doesn't support the above:


s2s_insecure_domains = { "gmail.com" }


virtualhosts



For each virtualhost you want to configure, create a file /etc/prosody/conf.avail/chat.example.org.cfg.lua with contents like the following:


VirtualHost "chat.example.org"
enabled = true
ssl = {
key = "/etc/ssl/private/example.org-key.pem";
certificate = "/etc/ssl/public/example.org.pem";
}


For the domains where you also want to enable MUCs, add the following lines:


Component "conference.chat.example.org" "muc"
restrict_room_creation = "local"


the "local" configures prosody so that only local users are allowed to create new rooms (but then everybody can join them, if the room administrator allows it): this may help reduce unwanted usages of your server by random people.

You can also add the following line to enable rich media transfers via http uploads (XEP-0363):


Component "upload.chat.trueelena.org" "http_upload"

The defaults are pretty sane, but see https://modules.prosody.im/mod_http_upload.html for details on what knobs you can configure for this module.

Don't forget to enable the virtualhost by linking the file inside /etc/prosody/conf.d/.
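For example, sticking to the chat.example.org virtualhost from above:

ln -s ../conf.avail/chat.example.org.cfg.lua /etc/prosody/conf.d/
prosodyctl restart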

additional modules



Most of the other interesting XEPs are enabled by loading additional modules inside /etc/prosody/prosody.cfg.lua (under modules_enabled); to enable mod_something just add a line like:


"something";

Most of these come from the prosody-modules package (and thus from https://modules.prosody.im/ ) and some may require changes when prosody 0.10 becomes available; where this is the case, it is mentioned below.



  • mod_carbons (XEP-0280)
    To keep conversations synchronized while using multiple devices at the same time.

    This will be included by default in prosody 0.10.



  • mod_privacy + mod_blocking (XEP-0191)
    To allow user-controlled blocking of users, including as an anti-spim measure.

    In prosody 0.10 these two modules will be replaced by mod_blocklist.



  • mod_smacks (XEP-0198)
    Allow clients to resume a disconnected session before a customizable timeout and prevent message loss.



  • mod_mam (XEP-0313)
    Archive messages on the server for a limited period of time (by default one week) and allow clients to retrieve them; this is required to synchronize message history between multiple clients.

    With prosody 0.9 only an in-memory storage backend is available, which may make this module problematic on servers with many users. prosody 0.10 will fix this by adding support for an SQL-backed storage with archiving capabilities.



  • mod_throttle_presence + mod_filter_chatstates (XEP-0352)
    Filter out presence updates and chat states when the client announces (via Client State Indication) that the user isn't looking. This is useful to reduce power and bandwidth usage for "useless" traffic.




@Gruppo Linux Como @LIFO

Wednesday, 11 January 2017

Standing Strong as a Team

Wunderbar Emporium | 15:11, Wednesday, 11 January 2017

I work at a large university in Switzerland. We are also responsible for online exams, which are held on our GNU/Linux setup. As bad technical things could possibly happen during exams, we have to be available. Since last year we have had to be located in the same building where the exams are held (which is not my regular office building).

This has always been stressful to me as well, because hard crashes of the infrastructure would have a large impact. If everything goes wrong it might affect 200 to 500 students.

Today we had one of those exams, and we upped our team presence to four people at 3rd level as well as 2nd and 1st level. Everything went well, and it gave me a really good feeling that we are such a strong team and can rely on each other.

So what’s the conclusion of this post? I have learned that things that might look frightening at first sight might work really smoothly if you have colleagues you can rely on. Things that might look impossible for one person alone are possible with the help of a good team and friends.

Push Free Software and Open Science for Horizon2020

English Planet – Dreierlei | 12:34, Wednesday, 11 January 2017

Summary: please help us get the idea of the importance of Free Software as a condition for Open Science into the minds of stakeholders and decision-makers of the Horizon2020 program. You can do so by participating in the interim evaluation and re-using FSFE’s position paper.

What came to my mind the first time I read “Open Science” was that this term should not be necessary in the first place. In common understanding as well as in its self-conception, “openness” is an elementary part of all science: openness in the sense that all scientific results shall be published publicly, along with experimental settings, methods and anything else that led to the results. It is exactly this approach that – in theory – gives everyone the chance to reproduce an experiment and get the same results.

But although this approach of openness might still be the noble objective of any scientist, the general idea of publicly available science has been called into question since at least the de-facto domination of science journals by publishers and the creation of a profit-oriented market design. It is not the point of this blog post to lay out the problematic situation in which both the consumers and the content creators nowadays have to pay publishers for overpriced science journals, financed with public money. What matters most at this point is that these high prices are contrary to the idea of universal access to science, as they give access only to those who can afford it.

Send and receive Open Science?

Fortunately, Open Access came up to do something about this problem. Similar to Free Software, Open Access uses free licenses to offer access to science publications to everyone around the globe. That is why Open Access is an important step towards universal access to science. Unfortunately, in a digital world, Open Access is just one of many tools that we have to use to achieve Open Science. Equally important are the formats and the software that are used. Also, Open Access only covers the final publication and fails to cover the steps that lead there. This is where Open Science steps in.

Why Free Software matters in Open Science

It should be clear that Open Science – unlike Open Access – does not only relate to the publication form of the results of research. Open Science aims to cover and open up the whole research process, from method design to data gathering to calculations to the final publication. This means that, thanks to digitalisation, for some decades now we have had a new issue in opening up the scientific process, and that is the software that is used. Software is an integral part of basically all sciences nowadays, and nearly all steps involved in a research project depend on and are covered by the use of software.

And here is the point: proprietary software cannot offer the openness that is needed to keep scientific experiments and results transparent and reproducible. Only Free Software offers the possibility to study and reuse the software that was used for the research in question, and with it universal access to science. Only Free Software offers transparency, and with it the possibility to check the methods (e.g. the mathematical calculations of the software in use) that have been used to achieve the results. And only Free Software offers collaboration and the independence of science, while securing long-term archiving of results at the same time. (If you would like to dig deeper into the argument for why Open Science needs Free Software, read my previous blog post on this.)

How to push for Free Software in Horizon2020

Horizon2020 is the biggest public science funding program in the European Union, run by the European Commission. Fortunately, Open Science is one of the main principles promoted by Horizon2020 to further unlock the full potential of innovation in Europe. Currently, Horizon2020 is running an interim evaluation to help formulate the next EU research and innovation funding post-2020. So this is the best moment to raise awareness of the importance of Free Software and Open Standards for Open Science for the next funding period.

Horizon2020 logo: Let’s put Free and Open in Horizon 2020

To get this message and idea to the decision-makers inside the European Commission, we at the FSFE wrote a position paper on why Free Software matters for Open Science, including concrete proposals and best methods for implementing Free Software in the Horizon2020 framework. To further support our demands, we additionally filed a Freedom of Information request to the European Commission’s Directorate-General for Research and Innovation to ask about the use, development and release of (Free) software under Horizon2020.

If you are convinced now, please help us get the idea of the importance of Free Software as a condition for Open Science into the minds of stakeholders and decision-makers of the Horizon2020 program. You can do so by participating in the consultation and re-using FSFE’s position paper (PDF). Literally, you can fill in your personal details, skip all questions and in the end upload FSFE’s position paper. This is the 5-minute option, and we explain it for you in our wiki. Or you can read through our position paper, take it as inspiration and upload a modified version of it. Please find all the necessary information, including links to the sources, in FSFE’s wiki.

Thank you very much for helping to open up European science by using Free Software!


Boomerang for Mutt

free software - Bits of Freedom | 08:15, Wednesday, 11 January 2017

Boomerang for Mutt

If you're anything like me, an overflowing inbox stresses you out. When the emails in my inbox start filling more than a screen, I lose focus. This is particularly troubling as I also use my inbox as a reminder of issues I need to look at: travel bookings, meetings to be booked, inquiries to be made at the right time, and so on. A lot of emails about things I don't actually need to do anything about right now.

Last week, I was delighted to find that Noah Tilton has created a convenient tool for a tickler file (as popularized by the book Getting Things Done by David Allen) for Maildir MUAs such as Mutt. If you're a bit more Gmail-snazzy, you may recognize the same concept from Boomerang for Gmail.

The basic idea is this: not everything in your INBOX needs to be acted on right now. The things which don't need to be acted on now take attention away from what you should be doing. So you want to move them away, but have them appear again at a particular time: next week, in December, tomorrow, or so.

What Noah's tickler-mail does is it allows you to keep one Maildir folder as a tickler file. I call this one todo, so everything in ~/Maildir/todo/ is my tickler file. Within this tickler file, I save emails in a subfolder with a human-readable description, such as "next-week", "next-month", or "25th-of-december" (~/Maildir/todo/next-week/, ~/Maildir/todo/next-month/, ~/Maildir/todo/25th-of-december).
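To make filing things into the tickler faster, a mutt macro along these lines can be added to .muttrc (the key binding and target folder are of course just an example):

macro index,pager \Cb "<save-message>=todo/next-week<enter>" "defer to next week"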

The tickler-mail utility then parses these human-readable descriptions with Python's parsedatetime library and compares the result with the change timestamp of the email. If the indicated amount of time has passed, it moves the email back to the INBOX and adds an additional "X-Tickler" header to it to indicate that it is a tickler file email. With this .muttrc recipe, such e-mails are then shown clearly in red, so I can easily and quickly tell which messages are new and which have come in from the tickler file:

color index red black '~h "X-Tickler:.*"'

Monday, 09 January 2017

The Academic Barriers of Commercialisation

Paul Boddie's Free Software-related blog » English | 22:27, Monday, 09 January 2017

Last year, the university through which I obtained my degree celebrated a “milestone” anniversary, meaning that I got even more announcements, notices and other such things than I was already getting from them before. Fortunately, not everything published into this deluge is bound up in proprietary formats (as one brochure was, sitting on a Web page in Flash-only form) or only reachable via a dubious “Libyan link-shortener” (as certain things were published via a social media channel that I have now quit). It is indeed infuriating to see one of the links in a recent HTML/plain text hybrid e-mail message using a redirect service hosted on the university’s own alumni sub-site, sending the reader to a bit.ly URL, which will redirect them off into the great unknown and maybe even back to the original site. But such things are what one comes to expect on today’s Internet with all the unquestioning use of random “cloud” services, each one profiling the unsuspecting visitor and betraying their privacy to make a few extra cents or pence.

But anyway, upon following a more direct – but still redirected – link to an article on the university Web site, I found myself looking around to see what gets published there these days. Personally, I find the main university Web site rather promotional and arguably only superficially informative – you can find out the required grades to take courses along with supposed student approval ratings and hypothetical salary expectations upon qualifying – but it probably takes more digging to get at the real detail than most people would be willing to do. I wouldn’t mind knowing what they teach now in their computer science courses, for instance. I guess I’ll get back to looking into that later.

Gatekeepers of Knowledge

However, one thing did catch my eye as I browsed around the different sections, encountering the “technology transfer” department with the expected rhetoric about maximising benefits to society: the inevitable “IP” policy in all its intimidating length, together with an explanatory guide to that policy. Now, I am rather familiar with such policies from my time at my last academic employer, having been obliged to sign some kind of statement of compliance at one point, but then apparently not having to do so when starting a subsequent contract. It was not as if enlightenment had come calling at the University of Oslo between these points in time such that the “IP rights” agreement now suddenly didn’t feature in the hiring paperwork; it was more likely that such obligations had presumably been baked into everybody’s terms of employment as yet another example of the university upper management’s dubious organisational reform and questionable human resources practices.

Back at Heriot-Watt University, credit is perhaps due to the authors of their explanatory guide to try and explain the larger policy document, because it is most likely that most people aren’t going to get through that much longer document and retain a clear head. But one potentially unintended reason for credit is that by being presented with a much less opaque treatment of the policy and its motivations, we are able to see with enhanced clarity many of the damaging misconceptions that have sadly become entrenched in higher education and academia, including the ways in which such policies actually do conflict with the sharing of knowledge that academic endeavour is supposed to be all about.

So, we get the sales pitch about new things needing investment…

However, often new technologies and inventions are not fully developed because development needs investment, and investment needs commercial returns, and to ensure commercial returns you need something to sell, and a freely available idea cannot be sold.

If we ignore various assumptions about investment or the precise economic mechanisms supposedly required to bring about such investment, we can immediately note that ideas on their own aren’t worth anything anyway, freely available or not. Although the Norwegian Industrial Property Office (or the Norwegian Patent Office if we use a more traditional name) uses the absurd vision slogan “turning ideas into values” (it should probably read “value”, but whatever), this perhaps says more about greedy profiteering through the sale of government-granted titles bound to arbitrary things than it does about what kinds of things have any kind of inherent value that you can take to the bank.

But assuming that we have moved beyond the realm of simple ideas and have entered the realm of non-trivial works, we find that we have also entered the realm of morality and attitude management:

That is why, in some cases, it is better for your efforts not to be published immediately, but instead to be protected and then published, for protection gives you something to sell, something to sell can bring in investment, and investment allows further development. Therefore in the interests of advancing the knowledge within the field you work in, it is important that you consider the commercial potential of your work from the outset, and if necessary ensure it is properly protected before you publish.

Once upon a time, the most noble pursuit in academic research was to freely share research with others so that societal, scientific and technological progress could be made. Now it appears that the average researcher should treat it as their responsibility to conceal their work from others, seek “protection” on it, and then release the encumbered details for mere perusal and the conditional participation of those once-valued peers. And they should, of course, be wise to the commercial potential of the work, whatever that is. Naturally, “intellectual property” offices in such institutions have an “if in doubt, see us” policy, meaning that they seek to interfere with research as soon as possible, and should someone fail to have “seen them”, that person’s loyalty may very well be called into question as if they had somehow squandered their employer’s property. In some institutions, this could very easily get people marginalised or “reorganised” if not immediately or obviously fired.

The Rewards of Labour

It is in matters of property and ownership where things get very awkward indeed. Many people would accept that employees of an organisation are producing output that becomes the property of that organisation. What fewer people might accept is that the customers of an organisation are also subject to having their own output taken to be the property of that organisation. The policy guide indicates that even undergraduate students may also be subject to an obligation to assign ownership of their work to the university: those visiting the university supposedly have to agree to this (although it doesn’t say anything about what their “home institution” might have to say about that), and things like final year projects are supposedly subject to university ownership.

So, just because you as a student have a supervisor bound by commercialisation obligations, you end up not only paying tuition fees to get your university education (directly or through taxation), but you also end up having your own work taken off you because it might be seen as some element in your supervisor’s “portfolio”. I suppose this marks a new low in workplace regulation and standards within a sector that already skirts the law with regard to how certain groups are treated by their employers.

One can justifiably argue that employees of academic institutions should not be allowed to run away with work funded by those institutions, particularly when such funding originally comes from other sources such as the general public. After all, such work is not exactly the private property of the researchers who created it, and to treat it as such would deny it to those whose resources made it possible in the first place. Any claims about “rightful rewards” needing to be given are arguably made to confuse rational thinking on the matter: after all, with appropriate salaries, the researchers are already being rewarded doing work that interests and stimulates them (unlike a lot of people in the world of work). One can argue that academics increasingly suffer from poorer salaries, working conditions and career stability, but such injustices are not properly remedied by creating other injustices to supposedly level things out.

A policy around what happens with the work done in an academic institution is important. But just as individuals should not be allowed to treat broadly-funded work as their own private property, neither should the institution itself claim complete ownership and consider itself entitled to do what it wishes with the results. It may be acting as a facilitator to allow research to happen, but by seeking to intervene in the process of research, it risks acting as an inhibitor. Consider the following note about “confidential information”:

This is, in short, anything which, if you told people about, might damage the commercial interests of the university. It specifically includes information relating to intellectual property that could be protected, but isn’t protected yet, and which if you told people about couldn’t be protected, and any special know how or clever but non patentable methods of doing things, like trade secrets. It specifically includes all laboratory notebooks, including those stored in an electronic fashion. You must be very careful with this sort of information. This is of particular relevance to something that may be patented, because if other people know about it then it can’t be.

Anyone working in even a moderately paranoid company may have read things like this. But here the context is an environment where knowledge should be shared to benefit and inform the research community. Instead, one gets the impression that the wish to control the propagation of knowledge is so great that some people would rather see the details of “clever but non patentable methods” destroyed than passed on openly for others to benefit from. Indeed, one must question whether “trade secrets” should even feature in a university environment at all.

Of course, the obsession with “laboratory notebooks”, “methods of doing things” and “trade secrets” in such policies betrays the typical origins of such drives for commercialisation: the apparently rich pickings to be had in the medical, pharmaceutical and biosciences domains. It is hardly a coincidence that the University of Oslo intensified its dubious “innovation” efforts under a figurehead with a background (or an interest) in exactly those domains: with a narrow personal focus, an apparent disdain for other disciplines, and a wider commercial atmosphere that gives such a strategy a “dead cert” air of impending fortune, we should perhaps expect no more of such a leadership creature (and his entourage) than the sum of that creature’s instincts and experiences. But then again, we should demand more from such people when their role is to cultivate an institution of learning and not to run a private research organisation at the public’s expense.

The Dirty Word

At no point in the policy guide does the word “monopoly” appear. Given that such a largely technical institution would undoubtedly be performing research where the method of “protection” would involve patents being sought, omitting the word “monopoly” might be that document’s biggest flaw. Heriot-Watt University originates from the merger of two separate institutions, one of which was founded by the well-known pioneer of steam engine technology, James Watt.

Recent discussion of Watt’s contributions to the development and proliferation of such technology has brought up claims that Watt’s own patents – the things that undoubtedly made him wealthy enough to fund an educational organisation – actually held up progress in the domain concerned for a number of decades. While he was clearly generous and sensible enough to spend his money on worthy causes, one can always challenge whether the questionable practices that resulted in the accumulation of such wealth can justify the benefits from the subsequent use of that wealth, particularly if those practices can be regarded as having had negative effects on society and may even have increased wealth inequality.

Questioning philanthropy is not a particularly fashionable thing to do. In capitalist societies, wealthy people are often seen as having made their fortunes in an honest fashion, enjoying a substantial “benefit of the doubt” that this was what really occurred. Criticising a rich person giving money to ostensibly good causes is seen as unkind to both the generous donor and to those receiving the donations. But we should question the means through which the likes of Bill Gates (in our time) and James Watt (in his own time) made their fortunes and the power that such fortunes give to such people to direct money towards causes of their own personal choosing, not to mention the way in which wealthy people also choose to influence public policy and the use of money given by significantly less wealthy individuals – the rest of us – gathered through taxation.

But back to monopolies. Can they really be compatible with the pursuit and sharing of knowledge that academia is supposed to be cultivating? Just as it should be shocking that secretive “confidentiality” rules exist in an academic context, it should appal us that researchers are encouraged to be competitively hostile towards their peers.

Removing the Barriers

It appears that some well-known institutions understand that the unhindered sharing of their work is their primary mission. MIT Media Lab now encourages the licensing of software developed under its roof as Free Software, not requiring special approval or any other kind of institutional stalling that often seems to take place as the “innovation” vultures pick over the things they think should be monetised. Although proprietary licensing still appears to be an option for those within the Media Lab organisation, at least it seems that people wanting to follow their principles and make their work available as Free Software can do so without being made to feel bad about it.

As an academic institution, we believe that in many cases we can achieve greater impact by sharing our work.

So says the director of the MIT Media Lab. It says a lot about the times we live in that this needs to be said at all. Free Software licensing is, as a mechanism to encourage sharing, a natural choice for software, but we should also expect similar measures to be adopted for other kinds of works. Papers and articles should at the very least be made available using content licences that permit sharing, even if the licence variants chosen by authors might seek to prohibit the misrepresentation of parts of their work by prohibiting remixes or derived works. (This may sound overly restrictive, but one should consider the way in which scientific articles are routinely misrepresented by climate change and climate science deniers.)

Free Software has encouraged an environment where sharing is safely and routinely done. Licences like the GNU General Public Licence seek to shield recipients from things like patent threats, particularly from organisations which might appear to want to share their works, but which might be tempted to use patents to regulate the further use of those works. Even in realms where patents have traditionally been tolerated, attempts have been made to shield others from the effects of patents, intended or otherwise: the copyleft hardware movement demands that shared hardware designs are patent-free, for instance.

In contrast, one might think that despite the best efforts of the guide’s authors, all the precautions and behavioural self-correction it encourages might just drive the average researcher to distraction. Or, just as likely, to ignoring most of the guidelines and feigning ignorance if challenged by their “innovation”-obsessed superiors. But in the drive to monetise every last ounce of effort there is one statement that is worth remembering:

If intellectual property is not assigned, this can create problems in who is allowed to exploit the work, and again work can go to waste due to a lack of clarity over who owns what.

In other words, in an environment where everybody wants a share of the riches, it helps to have everybody’s interests out in the open so that there may be no surprises later on. Now, it turns out that unclear ownership and overly casual management of contributions is something that has occasionally threatened Free Software projects, resulting in more sophisticated thinking about how contributions are managed.

And it is precisely this combination of Free Software licensing, or something analogous for other domains, with proper contribution and attribution management that will extend safe and efficient sharing of knowledge to the academic realm. Researchers just cannot have the same level of confidence when dealing with the “technology transfer” offices of their institution and of other institutions. Such offices only want to look after themselves while undermining everyone beyond the borders of their own fiefdoms.

Divide and Rule

It is unfortunate that academic institutions feel that they need to “pull their weight” and have to raise funds to make up for diminishing public funding. By turning their backs on the very reason for their own existence and seeking monopolies instead of sharing knowledge, they unwittingly participate in the “divide and rule” tactics blatantly pursued in the political arena: that everyone must fight each other for all that is left once the lion’s share of public funding has been allocated to prestige megaprojects and schemes that just happen to benefit the well-connected, the powerful and the influential people in society the most.

A properly-funded education sector is an essential component of a civilised society, and its institutions should not be obliged to “sharpen their elbows” in the scuffle for funding and thus deprive others of knowledge just to remain viable. Sadly, while austerity politics remains fashionable, it may be up to us in the Free Software realm to remind academia of its obligations and to show that sustainable ways of sharing knowledge exist and function well in the “real world”.

Indeed, it is up to us to keep such institutions honest and to prevent advocates of monopoly-driven “innovation” from being able to insist that their way is the only way, because just as “divide and rule” politics erects barriers between groups in wider society, commercialisation erects barriers that inhibit the essential functions of academic pursuit. And such barriers ultimately risk extinguishing academia altogether, along with all the benefits its institutions bring to society. If my university were not reinforcing such barriers with its “IP” policy, maybe its anniversary as a measure of how far we have progressed from monopolies and intellectual selfishness would have been worth celebrating after all.

Sunday, 08 January 2017

Bootstrapping Haskell: part 1

Rekado | 23:00, Sunday, 08 January 2017

Haskell is a formally specified language with potentially many alternative implementations, but in early 2017 the reality is that Haskell is whatever the Glasgow Haskell Compiler (GHC) implements. Unfortunately, to build GHC one needs a previous version of GHC. This is true for all public releases of GHC all the way back to version 0.29, which was released in 1996 and which implements Haskell 1.2. Some GHC releases include files containing generated ANSI C code, which require only a C compiler to build. For most purposes, generated code does not qualify as source code.

So I wondered: is it possible to construct a procedure to build a modern release of GHC from source without depending on any generated code or pre-built binaries of an older variant of GHC? The answer to this question depends on the answers to a number of related questions. One of them is: are there any alternative Haskell implementations that are still usable today and that can be built without GHC?

A short survey of Haskell implementations

Although nowadays hardly anyone uses any other Haskell compiler but GHC in production, there are some alternative Haskell implementations that were protected from bit rot and thus can still be built from source with today’s common toolchains.

One of the oldest implementations is Yale Haskell, a Haskell system embedded in Common Lisp. The last release of Yale Haskell was version 2.0.5 in the early 1990s.1 Yale Haskell runs on top of CMU Common Lisp, Lucid Common Lisp, Allegro Common Lisp, or Harlequin LispWorks, but since I do not have access to any of these proprietary Common Lisp implementations, I ported the Yale Haskell system to GNU CLISP. The code for the port is available here. Yale Haskell is not a compiler; it can only be used as an interpreter.

Another Haskell interpreter with a more recent release is Hugs. Hugs is written in C and implements almost all of the Haskell 98 standard. It also comes with a number of useful language extensions that GHC and other Haskell systems depend on. Unfortunately, it cannot deal with mutually recursive module dependencies, which is a feature that even the earliest versions of GHC rely on. This means that running a variant of GHC inside of Hugs is not going to work without major changes.

An alternative Haskell compiler that does not need to be built with GHC is nhc98. Its latest release was in 2010, which is much more recent than any of the other Haskell implementations mentioned so far. nhc98 is written in Haskell, so a Haskell compiler or interpreter is required to build it. Like GHC, the release of nhc98 comes with files containing generated C code, but depending on them for a clean bootstrap is almost as bad as depending on a third-party binary. Sadly, nhc98 has another shortcoming: it is restricted to 32-bit machine architectures.

An early bootstrapping idea

Since nhc98 is written in C (the runtime) and standard Haskell 98, we can run the Haskell parts of the compiler inside of a Haskell 98 interpreter. Luckily, we have an interpreter that fits the bill: Hugs! If we can interpret and run enough parts of nhc98 with Hugs we might be able to use nhc98 on Hugs to build a native version of nhc98 and related tools (such as cpphs, hmake, and cabal). Using the native compiler we can build a complete toolchain and just maybe that’s enough to build an early version of GHC. Once we have an early version of GHC we can go ahead and build later versions with ease. (Building GHC directly using nhc98 on Hugs might also work, but due to complexity in GHC modules it seems better to avoid depending on Hugs at runtime.)

At this point I have verified that (with minor modifications) nhc98 can indeed be run on top of Hugs and that (with enough care to pre-processing and dependency ordering) it can build a native library from the Haskell source files of the nhc98 prelude. It is not clear whether nhc98 would be capable of building a version of GHC and how close to a modern version of GHC we can get with just nhc98. There are also problems with the nhc98 runtime on modern x86_64 systems (more on that at the end).

Setting up the environment

Before we can start let’s prepare a suitable environment. Since nhc98 can only be used on 32-bit architectures we need a GCC toolchain for i686. With GNU Guix it’s easy to set up a temporary environment containing just the right tools: the GCC toolchain, make, and Hugs.

guix environment --system=i686-linux \
                 --ad-hoc gcc-toolchain@4 make hugs

Building the runtime

Now we can configure nhc98 and build the C runtime, which is needed to link the binary objects that nhc98 and GCC produce when compiling Haskell sources. Configuration is easy:

cd /path/to/nhc98-1.22
export NHCDIR=$PWD
./configure

Next, we build the C runtime:

cd src/runtime
make

This produces binary objects in an architecture-specific directory. In my case this is targets/x86_64-Linux.

Building the prelude?

The standard library in Haskell is called the prelude. Most source files of nhc98 depend on the prelude in one way or another. Although Hugs comes with its own prelude, it is of little use for our purposes as components of nhc98 must be linked with a static prelude library object. Hugs does not provide a suitable object that we could link to.

To build the prelude we need a Haskell compiler that has the same call interface as nhc98. Interestingly, much of the nhc98 compiler’s user interface is implemented as a shell script, which after extensive argument processing calls nhc98comp to translate Haskell source files into C code, and then runs GCC over the C files to create binary objects. Since we do not have nhc98comp at this point, we need to fake it with Hugs (more on that later).

Detour: cpphs and GreenCard

Unfortunately, this is not the only problem. Some of the prelude’s source files require pre-processing by a tool called GreenCard, which generates boilerplate FFI code. Of course, GreenCard is written in Haskell. Since we cannot build a native GreenCard binary without the native nhc98 prelude library, we need to make GreenCard run on Hugs. Some of the GreenCard sources require pre-processing with cpphs. Luckily, that’s just a really simple Haskell script, so running it in Hugs is trivial. We will need cpphs later again, so it makes sense to write a script for it. Let’s call it hugs-cpphs and drop it in ${NHCDIR}.

#!/bin/bash

runhugs ${NHCDIR}/src/cpphs/cpphs.hs --noline -D__HASKELL98__ "$@"

Make it executable:

chmod +x hugs-cpphs

Okay, let’s first pre-process the GreenCard sources with cpphs. To do that I ran the following commands:

cd ${NHCDIR}/src/greencard

CPPPRE="${NHCDIR}/hugs-cpphs -D__NHC__"

FILES="DIS.lhs \
 HandLex.hs \
 ParseLib.hs \
 HandParse.hs \
 FillIn.lhs \
 Proc.lhs \
 NHCBackend.hs \
 NameSupply.lhs \
 Process.lhs"

for file in $FILES; do
    cp $file $file.original && $CPPPRE $file.original > $file && rm $file.original
done

The result is a bunch of GreenCard source files without these pesky CPP pre-processor directives. Hugs can pre-process sources on the fly, but this makes evaluation orders of magnitude slower. Pre-processing the sources once before using them repeatedly seems like a better choice.

There is still a minor problem with GreenCard on Hugs. The GreenCard sources import the module NonStdTrace, which depends on built-in prelude functions from nhc98. Obviously, they are not available when running on Hugs (it has its own prelude implementation), so we need to provide an alternative using just the regular Hugs prelude. The following snippet creates a file named src/prelude/NonStd/NonStdTraceBootstrap.hs with the necessary changes.

cd ${NHCDIR}/src/prelude/NonStd/
sed -e 's|NonStdTrace|NonStdTraceBootstrap|' \
    -e 's|import PreludeBuiltin||' \
    -e 's|_||g' NonStdTrace.hs > NonStdTraceBootstrap.hs

Then we change a single line in src/greencard/NHCBackend.hs to make it import NonStdTraceBootstrap instead of NonStdTrace.

cd ${NHCDIR}/src/greencard
sed -i -e 's|NonStdTrace|NonStdTraceBootstrap|' NHCBackend.hs

To run GreenCard we still need a driver script. Let’s call this hugs-greencard and place it in ${NHCDIR}:

#!/bin/bash

HUGSDIR="$(dirname $(readlink -f $(which runhugs)))/../"
SEARCH_HUGS=$(printf "${NHCDIR}/src/%s/*:" compiler prelude libraries)

runhugs -98 \
        -P${HUGSDIR}/lib/hugs/packages/*:${NHCDIR}/include/*:${SEARCH_HUGS} \
        ${NHCDIR}/src/greencard/GreenCard.lhs \
        $@

Make it executable:

cd ${NHCDIR}
chmod +x hugs-greencard

Building the prelude!

Where were we? Ah, the prelude. As stated earlier, we need a working replacement for nhc98comp, which will be called by the driver script script/nhc98 (created by the configure script). Let’s call the replacement hugs-nhc, and again we’ll dump it in ${NHCDIR}. Here it is in all its glory:

#!/bin/bash

# Root directory of Hugs installation
HUGSDIR="$(dirname $(readlink -f $(which runhugs)))/../"

# TODO: "libraries" alone may be sufficient
SEARCH_HUGS=$(printf "${NHCDIR}/src/%s/*:" compiler prelude libraries)

# Filter everything from "+RTS" to "-RTS" from $@ because MainNhc98.hs
# does not know what to do with these flags.
ARGS=""
SKIP="false"
for arg in "$@"; do
    if [[ $arg == "+RTS" ]]; then
        SKIP="true"
    elif [[ $arg == "-RTS" ]]; then
        SKIP="false"
    elif [[ $SKIP == "false" ]]; then
        ARGS="${ARGS} $arg"
    fi
done

runhugs -98 \
        -P${HUGSDIR}/lib/hugs/packages/*:${SEARCH_HUGS} \
        ${NHCDIR}/src/compiler98/MainNhc98.hs \
        $ARGS

All this does is run Hugs (runhugs) with language extensions (-98), ensures that Hugs knows where to look for Hugs and nhc98 modules (-P), loads up the compiler’s main function, and then passes any arguments other than RTS flags ($ARGS) to it.

Let’s also make this executable:

cd ${NHCDIR}
chmod +x hugs-nhc

The compiler sources contain pre-processor directives, which need to be removed before running hugs-nhc. It would be foolish to let Hugs pre-process the sources at runtime with -F. In my tests it made hugs-nhc run slower by an order of magnitude. Let’s pre-process the sources of the compiler and the libraries it depends on with hugs-cpphs (see above):

cd ${NHCDIR}
CPPPRE="${NHCDIR}/hugs-cpphs -D__HUGS__"

FILES="src/compiler98/GcodeLowC.hs \
 src/libraries/filepath/System/FilePath.hs \
 src/libraries/filepath/System/FilePath/Posix.hs"

for file in $FILES; do
    cp $file $file.original && $CPPPRE $file.original > $file && rm $file.original
done

The compiler’s driver script script/nhc98 expects to find the executables of hmake-PRAGMA, greencard-nhc98, and cpphs in the architecture-specific lib directory (in my case that’s ${NHCDIR}/lib/x86_64-Linux/). They do not exist, obviously, but for two of them we already have scripts to run them on top of Hugs. hmake-PRAGMA does not seem to be very important; replacing it with cat appears to be fine. To pacify the compiler script it’s easiest to just replace a few definitions:

cd ${NHCDIR}
sed -i \
  -e '0,/^GREENCARD=.*$/s||GREENCARD="$NHC98BINDIR/../hugs-greencard"|' \
  -e '0,/^CPPHS=.*$/s||CPPHS="$NHC98BINDIR/../hugs-cpphs -D__NHC__"|' \
  -e '0,/^PRAGMA=.*$/s||PRAGMA=cat|' \
  script/nhc98

Initially, this looked like it would be enough, but half-way through building the prelude Hugs choked when interpreting nhc98 to build a certain module. After some experimentation it turned out that the NHC.FFI module in src/prelude/FFI/CTypes.hs is too big for Hugs. Running nhc98 on that module causes Hugs to abort with an overflow in the control stack. The fix here is to break up the module to make it easier for nhc98 to build it, which in turn prevents Hugs from doing too much work at once.

Apply this patch:

From 9eb2a2066eb9f93e60e447aab28479af6c8b9759 Mon Sep 17 00:00:00 2001
From: Ricardo Wurmus <rekado@elephly.net>
Date: Sat, 7 Jan 2017 22:31:41 +0100
Subject: [PATCH] Split up CTypes

This is necessary to avoid a control stack overflow in Hugs when
building the FFI library with nhc98 running on Hugs.
---
 src/prelude/FFI/CStrings.hs     |  2 ++
 src/prelude/FFI/CTypes.hs       | 14 --------------
 src/prelude/FFI/CTypes1.hs      | 20 ++++++++++++++++++++
 src/prelude/FFI/CTypes2.hs      | 22 ++++++++++++++++++++++
 src/prelude/FFI/CTypesExtra.hs  |  2 ++
 src/prelude/FFI/FFI.hs          |  2 ++
 src/prelude/FFI/Makefile        |  8 ++++----
 src/prelude/FFI/MarshalAlloc.hs |  2 ++
 src/prelude/FFI/MarshalUtils.hs |  2 ++
 9 files changed, 56 insertions(+), 18 deletions(-)
 create mode 100644 src/prelude/FFI/CTypes1.hs
 create mode 100644 src/prelude/FFI/CTypes2.hs

diff --git a/src/prelude/FFI/CStrings.hs b/src/prelude/FFI/CStrings.hs
index 18fdfa9..f1373cf 100644
--- a/src/prelude/FFI/CStrings.hs
+++ b/src/prelude/FFI/CStrings.hs
@@ -23,6 +23,8 @@ module NHC.FFI (
 
 import MarshalArray
 import CTypes
+import CTypes1
+import CTypes2
 import Ptr
 import Word
 import Char
diff --git a/src/prelude/FFI/CTypes.hs b/src/prelude/FFI/CTypes.hs
index 18e9d60..942e7a1 100644
--- a/src/prelude/FFI/CTypes.hs
+++ b/src/prelude/FFI/CTypes.hs
@@ -4,11 +4,6 @@ module NHC.FFI
 	  -- Typeable, Storable, Bounded, Real, Integral, Bits
 	  CChar(..),    CSChar(..),  CUChar(..)
 	, CShort(..),   CUShort(..), CInt(..),    CUInt(..)
-	, CLong(..),    CULong(..),  CLLong(..),  CULLong(..)
-
-	  -- Floating types, instances of: Eq, Ord, Num, Read, Show, Enum,
-	  -- Typeable, Storable, Real, Fractional, Floating, RealFrac, RealFloat
-	, CFloat(..),   CDouble(..), CLDouble(..)
 	) where
 
 import NonStdUnsafeCoerce
@@ -29,12 +24,3 @@ INTEGRAL_TYPE(CShort,Int16)
 INTEGRAL_TYPE(CUShort,Word16)
 INTEGRAL_TYPE(CInt,Int)
 INTEGRAL_TYPE(CUInt,Word32)
-INTEGRAL_TYPE(CLong,Int32)
-INTEGRAL_TYPE(CULong,Word32)
-INTEGRAL_TYPE(CLLong,Int64)
-INTEGRAL_TYPE(CULLong,Word64)
-
-FLOATING_TYPE(CFloat,Float)
-FLOATING_TYPE(CDouble,Double)
--- HACK: Currently no long double in the FFI, so we simply re-use double
-FLOATING_TYPE(CLDouble,Double)
diff --git a/src/prelude/FFI/CTypes1.hs b/src/prelude/FFI/CTypes1.hs
new file mode 100644
index 0000000..81ba0f5
--- /dev/null
+++ b/src/prelude/FFI/CTypes1.hs
@@ -0,0 +1,20 @@
+{-# OPTIONS_COMPILE -cpp #-}
+module NHC.FFI
+	( CLong(..),    CULong(..),  CLLong(..),  CULLong(..)
+	) where
+
+import NonStdUnsafeCoerce
+import Int	( Int8,  Int16,  Int32,  Int64  )
+import Word	( Word8, Word16, Word32, Word64 )
+import Storable	( Storable(..) )
+-- import Data.Bits( Bits(..) )
+-- import NHC.SizedTypes
+import Monad	( liftM )
+import Ptr	( castPtr )
+
+#include "CTypes.h"
+
+INTEGRAL_TYPE(CLong,Int32)
+INTEGRAL_TYPE(CULong,Word32)
+INTEGRAL_TYPE(CLLong,Int64)
+INTEGRAL_TYPE(CULLong,Word64)
diff --git a/src/prelude/FFI/CTypes2.hs b/src/prelude/FFI/CTypes2.hs
new file mode 100644
index 0000000..7d66242
--- /dev/null
+++ b/src/prelude/FFI/CTypes2.hs
@@ -0,0 +1,22 @@
+{-# OPTIONS_COMPILE -cpp #-}
+module NHC.FFI
+	( -- Floating types, instances of: Eq, Ord, Num, Read, Show, Enum,
+	  -- Typeable, Storable, Real, Fractional, Floating, RealFrac, RealFloat
+	CFloat(..), CDouble(..), CLDouble(..)
+	) where
+
+import NonStdUnsafeCoerce
+import Int	( Int8,  Int16,  Int32,  Int64  )
+import Word	( Word8, Word16, Word32, Word64 )
+import Storable	( Storable(..) )
+-- import Data.Bits( Bits(..) )
+-- import NHC.SizedTypes
+import Monad	( liftM )
+import Ptr	( castPtr )
+
+#include "CTypes.h"
+
+FLOATING_TYPE(CFloat,Float)
+FLOATING_TYPE(CDouble,Double)
+-- HACK: Currently no long double in the FFI, so we simply re-use double
+FLOATING_TYPE(CLDouble,Double)
diff --git a/src/prelude/FFI/CTypesExtra.hs b/src/prelude/FFI/CTypesExtra.hs
index ba3f15b..7cbdcbb 100644
--- a/src/prelude/FFI/CTypesExtra.hs
+++ b/src/prelude/FFI/CTypesExtra.hs
@@ -20,6 +20,8 @@ import Storable	( Storable(..) )
 import Monad	( liftM )
 import Ptr	( castPtr )
 import CTypes
+import CTypes1
+import CTypes2
 
 #include "CTypes.h"
 
diff --git a/src/prelude/FFI/FFI.hs b/src/prelude/FFI/FFI.hs
index 9d91e57..0c29394 100644
--- a/src/prelude/FFI/FFI.hs
+++ b/src/prelude/FFI/FFI.hs
@@ -217,6 +217,8 @@ import MarshalUtils	-- routines for basic marshalling
 import MarshalError	-- routines for basic error-handling
 
 import CTypes		-- newtypes for various C basic types
+import CTypes1
+import CTypes2
 import CTypesExtra	-- types for various extra C types
 import CStrings		-- C pointer to array of char
 import CString		-- nhc98-only
diff --git a/src/prelude/FFI/Makefile b/src/prelude/FFI/Makefile
index 99065f8..e229672 100644
--- a/src/prelude/FFI/Makefile
+++ b/src/prelude/FFI/Makefile
@@ -18,7 +18,7 @@ EXTRA_C_FLAGS	=
 SRCS = \
 	Addr.hs Ptr.hs FunPtr.hs Storable.hs \
 	ForeignObj.hs ForeignPtr.hs Int.hs Word.hs \
-	CError.hs CTypes.hs CTypesExtra.hs CStrings.hs \
+	CError.hs CTypes.hs CTypes1.hs CTypes2.hs CTypesExtra.hs CStrings.hs \
 	MarshalAlloc.hs MarshalArray.hs MarshalError.hs MarshalUtils.hs \
 	StablePtr.hs
 
@@ -38,12 +38,12 @@ Word.hs: Word.hs.cpp
 # dependencies generated by hmake -Md: (and hacked by MW)
 ${OBJDIR}/MarshalError.$O: ${OBJDIR}/Ptr.$O 
 ${OBJDIR}/MarshalUtils.$O: ${OBJDIR}/Ptr.$O ${OBJDIR}/Storable.$O \
-	${OBJDIR}/MarshalAlloc.$O ${OBJDIR}/CTypes.$O ${OBJDIR}/CTypesExtra.$O 
+	${OBJDIR}/MarshalAlloc.$O ${OBJDIR}/CTypes.$O ${OBJDIR}/CTypes1.$O ${OBJDIR}/CTypes2.$O ${OBJDIR}/CTypesExtra.$O
 ${OBJDIR}/MarshalArray.$O: ${OBJDIR}/Ptr.$O ${OBJDIR}/Storable.$O \
 	${OBJDIR}/MarshalAlloc.$O ${OBJDIR}/MarshalUtils.$O 
-${OBJDIR}/CTypesExtra.$O: ${OBJDIR}/Int.$O ${OBJDIR}/Word.$O ${OBJDIR}/CTypes.$O
+${OBJDIR}/CTypesExtra.$O: ${OBJDIR}/Int.$O ${OBJDIR}/Word.$O ${OBJDIR}/CTypes.$O ${OBJDIR}/CTypes1.$O ${OBJDIR}/CTypes2.$O
 ${OBJDIR}/CTypes.$O: ${OBJDIR}/Int.$O ${OBJDIR}/Word.$O ${OBJDIR}/Storable.$O \
-	${OBJDIR}/Ptr.$O 
+	${OBJDIR}/Ptr.$O ${OBJDIR}/CTypes1.$O ${OBJDIR}/CTypes2.$O
 ${OBJDIR}/CStrings.$O: ${OBJDIR}/MarshalArray.$O ${OBJDIR}/CTypes.$O \
 	${OBJDIR}/Ptr.$O ${OBJDIR}/Word.$O
 ${OBJDIR}/MarshalAlloc.$O: ${OBJDIR}/Ptr.$O ${OBJDIR}/Storable.$O \
diff --git a/src/prelude/FFI/MarshalAlloc.hs b/src/prelude/FFI/MarshalAlloc.hs
index 34ac7b3..5b43554 100644
--- a/src/prelude/FFI/MarshalAlloc.hs
+++ b/src/prelude/FFI/MarshalAlloc.hs
@@ -14,6 +14,8 @@ import ForeignPtr (FinalizerPtr(..))
 import Storable
 import CError
 import CTypes
+import CTypes1
+import CTypes2
 import CTypesExtra (CSize)
 import NHC.DErrNo
 
diff --git a/src/prelude/FFI/MarshalUtils.hs b/src/prelude/FFI/MarshalUtils.hs
index 312719b..bd9d149 100644
--- a/src/prelude/FFI/MarshalUtils.hs
+++ b/src/prelude/FFI/MarshalUtils.hs
@@ -29,6 +29,8 @@ import Ptr
 import Storable
 import MarshalAlloc
 import CTypes
+import CTypes1
+import CTypes2
 import CTypesExtra
 
 -- combined allocation and marshalling
-- 
2.11.0

After all this it’s time for a break. Run the following commands for a long break:

cd ${NHCDIR}/src/prelude
time make NHC98COMP=$NHCDIR/hugs-nhc

After the break—it took more than two hours on my laptop—you should see output like this:

ranlib /path/to/nhc98-1.22/lib/x86_64-Linux/Prelude.a

Congratulations! You now have a native nhc98 prelude library!

Building hmake

The compiler and additional Haskell libraries all require a tool called “hmake” to automatically order dependencies, so we’ll try to build it next. There’s just a small problem with one of the source files: src/hmake/FileName.hs contains the name “Niklas Röjemo” and the compiler really does not like the umlaut. With apologies to Niklas we change the copyright line to appease the compiler.

cd $NHCDIR/src/hmake
mv FileName.hs{,.broken}
tr '\366' 'o' < FileName.hs.broken > FileName.hs
rm FileName.hs.broken
NHC98COMP=$NHCDIR/hugs-nhc make HC=$NHCDIR/script/nhc98

To be continued

Unfortunately, the hmake tools are not working. All of the tools (e.g. MkConfig) fail with an early segmentation fault. There must be an error in the runtime, likely in src/runtime/Kernel/mutator.c where bytecode for heap and stack operations is interpreted. One thing that looks like a problem is statements like this:

*--sp = (NodePtr) constptr[-HEAPOFFSET(ip[0])];

constptr is NULL, so this seems to be just pointer arithmetic expressed in array notation. These errors can be fixed by rewriting the statement to use explicit pointer arithmetic:

*--sp = (NodePtr) (constptr + (-HEAPOFFSET(ip[0])));

Unfortunately, this doesn’t seem to be enough as there is another segfault in the handling of the EvalTOS label. IND_REMOVE is applied to the contents of the stack pointer, which turns out to be 0x10, which just doesn’t seem right. IND_REMOVE removes indirection by following pointer addresses until the value stored at the given address does not look like an address. This fails because 0x10 does look like an address—it’s just invalid. I have enabled a bunch of tracing and debugging features, but I don’t fully understand how the nhc98 runtime is supposed to work.

Judging from mails on the nhc-bugs and nhc-users lists I see that I’m not the only one experiencing segfaults. This email suggests that segfaults are “associated with changes in the way gcc lays out static arrays of bytecodes, e.g. by putting extra padding space between arrays that are supposed to be adjacent.” I may have to try different compiler flags or an older version of GCC; I only tried with GCC 4.9.4 but the Debian package for nhc98 used version 2.95 or 3.3.

For completeness’ sake, here’s the trace of the failing execution of MkProg:

(gdb) run
Starting program: /path/to/nhc98-1.22/lib/x86_64-Linux/MkProg 
ZAP_ARG_I1		hp=0x80c5010 sp=0x8136a10 fp=0x8136a10 ip=0x8085140
NEEDHEAP_I32	hp=0x80c5010 sp=0x8136a10 fp=0x8136a10 ip=0x8085141
HEAP_CVAL_N1	hp=0x80c5010 sp=0x8136a10 fp=0x8136a10 ip=0x8085142
HEAP_CVAL_I3	hp=0x80c5014 sp=0x8136a10 fp=0x8136a10 ip=0x8085144
HEAP_OFF_N1		hp=0x80c5018 sp=0x8136a10 fp=0x8136a10 ip=0x8085145
PUSH_HEAP		hp=0x80c501c sp=0x8136a10 fp=0x8136a10 ip=0x8085147
HEAP_CVAL_I4	hp=0x80c501c sp=0x8136a0c fp=0x8136a10 ip=0x8085148
HEAP_CVAL_I5	hp=0x80c5020 sp=0x8136a0c fp=0x8136a10 ip=0x8085149
HEAP_OFF_N1		hp=0x80c5024 sp=0x8136a0c fp=0x8136a10 ip=0x808514a
PUSH_CVAL_P1	hp=0x80c5028 sp=0x8136a0c fp=0x8136a10 ip=0x808514c
PUSH_I1		    hp=0x80c5028 sp=0x8136a08 fp=0x8136a10 ip=0x808514e
ZAP_STACK_P1	hp=0x80c5028 sp=0x8136a04 fp=0x8136a10 ip=0x808514f
EVAL		    hp=0x80c5028 sp=0x8136a04 fp=0x8136a10 ip=0x8085151
eval: evalToS

Program received signal SIGSEGV, Segmentation fault.
0x0804ac27 in run (toplevel=0x80c5008) at mutator.c:425
425		IND_REMOVE(nodeptr);

ip is the instruction pointer, which points at the current element in the bytecode stream. fp is probably the frame pointer, sp the stack pointer, and hp the heap pointer. The implementation notes for nhc98 will probably be helpful in solving this problem.

Anyway, that’s where I’m at so far. If you are interested in these kinds of problems or other bootstrapping projects, consider joining the efforts of the Bootstrappable Builds project!

Footnotes:

1. It is unclear when exactly the release was made, but any time between 1991 and 1993 seems likely.

Thursday, 05 January 2017

Review of 10 Vahdam’s Assam teas

Hook’s Humble Homepage | 14:15, Thursday, 05 January 2017

I had the pleasure of sampling 10 Assam teas from Vahdam (a very well chosen birthday gift from my fiancée).

First, a little bit about the company: it is Indian and, according to their website, deals directly with the plantations and tea growers for a fairer trade and better quality (plantation to shop 24-72h).

Once the teas were ordered, they arrived in a timely manner and were very carefully packed (even the cardboard box was hand-stitched into a cloth) and included two complimentary Darjeeling samples. I did have some issues with their (new) web shop, but found their support very helpful and quick.

All teas were from 2016 (I received them in late 2016 as well), AFAICR all were from the summer pickings / second flush.

The single-estates had the exact date of picking on them as well as the number of the invoice they were bought under, while the blends had the month of blending/packaging.

Now back to the important part – the teas. I found all ten to be of superior quality and was delighted to sample the surprisingly wide array of tastes the Assam region produces.

I was able to get two steeps (pearly boil, 4 minutes) out of all of them and found most of them perfectly enjoyable without either milk or sugar. Still, for most I prefer adding both as I like the rounded mellowness that (full) milk and (light brown or rock) sugar bring to Assam teas.

Here are my thoughts on them. If I had to pick out my favourites, it would be the Enigma and Royal Breakfast, but all of the single-estates brought something else to the table, so it is very likely they will be constantly rotating in my tea cupboard.

Single-estate

Assam Enigma Second Flush Black Tea

A brilliantly complex Assam

Among all the Assams I have ever tasted, this is one of the most interesting ones.

Initially you are greeted by the sweet smell of cinnamon of the dry leaves, which surprisingly disappears as soon as the leaves submerge in hot water.

With milk and just a small teaspoon of sugar, the tea produces a surprisingly complex aroma for an Assam – the predominant taste is of quality flower/berry honey with a hint of caramel, followed by an almost fruity and woody finish.

Like most of the ten Vahdam’s Assams I have had the pleasure of sampling, it is perfectly fine without milk and sugar, but I do enjoy it more with just a dash of both :)

…truly an enigma, yet a sweet one!

Bokel Assam Second Flush Black Tea

Very pleasant aroma, reminiscent of cocoa

I made the “mistake” of reading the description before sipping it and cannot but agree that vanilla and cocoa notes permeate the taste.

As I am used to strong tea, I would be willing to take this even in the evenings. With a good book and some chocolate confectionery, this should be a great match!

Gingia Premium Assam Second Flush Black Tea

Light-footed and reminiscent of pu-erh

It is a rare occasion when I enjoy an Assam more without milk than with it, but Gingia Premium is one such case.

What this tea reminds me of the most is the basic taste of a pu-erh (but without its typical complex misty aroma). The first sip also brought cold-brew coffee to mind, but the association faded with the idea of the pu-erh.

Nahorhabi Classic Assam Second Flush Black Tea

Great, somewhat fruity daily driver

I find it very enjoyable and surprisingly fruity for an Assam. I usually drink Assam with a bit of milk and one teaspoon of brown sugar, but as another reviewer noted, this tea is not too bitter, so it goes down fine without either as well.

I could get used to using this as my daily cuppa. Most likely I will come back to this Nahorhabi again.

Halmari Clonal Premium Assam Second Flush Black Tea

Very malty, but not my favourite

The Halmari Clonal Premium has a very round and malty body.

Whether with or without milk, you can also feel the chocolatey notes. But without milk (and sugar) its sweetness becomes a lot more apparent.

In the second steep, the malty-ness comes to the foreground even more. Without milk it might even come across a bit like a (Korean) barley tea.

In a way I really like it, but personally, I associate such malty-ness too much with non-caffeinated drinks such as barley coffee, barley tea, Ovomaltine and Horlicks, to truly enjoy it. As such, I will probably not be buying it often, but if malty is what you are after – this is a really good choice.

Blends

Assam Exotic Second Flush Black Tea

A good representative of its kind

I found this Assam to be predominantly malty, but paired up with foresty notes. Quite an enjoyable brew and what I would expect of a quality Assam.

Daily Assam Black Tea

Good daily driver

It is not super-strong either in taste or caffeine, but it does have a malty full body. At the very end it turns almost a bit watery, but not in a (too) displeasing way – depending on what you are after it might be either a positive or negative characteristic of this tea.

Great all-rounder and a daily driver, but if you are looking for something special, for the same money you can get nicer picks of Assam in this shop.

Personally I would pick almost any other Vahdam’s Assam over this one (apart from the Organic Breakfast), but solely because most of the time I am looking for something special in an Assam.

But if you are looking for a daily driver, this is a very fine choice.

Breakfast teas

Royal Breakfast Black Tea

One of my favourite breakfast teas

This is so far one of my favourite breakfast teas.

It is just robust enough, while displaying a nice earthy, woody flavour with a hint of chocolate. Quite enjoyable!

I usually enjoy mine with milk and sugar, but this one also goes very well without them (I will still usually drink it with both, though).

Classic English Breakfast Black Tea

A slightly classier spin on a classic breakfast tea

This spin of the classic breakfast tea is a bit less robust than usual, as this pure Assam version simply is not tart in taste. As such it is enjoyable even without milk or sugar.

Personally I prefer my breakfast teas to be even stronger, to pick me up in the morning, but this one just about meets that condition. I can very much see it as a daily driver.

Organic Breakfast Black Tea

For me personally, too weak

It is not a bad tea at all, but personally I found it too watery for a breakfast tea.

That being said, I do like my breakfast tea to pack a punch, so do take my review with that in mind.

Also, whoever reads this review, do take into account that I rated only for taste and feel. I did not assign any extra points for it being organic, as I do not think bio/eco/organic things should be of lesser quality than the stuff not carrying such certification.

hook out → sipping my last batch of Vahdam’s second flush Enigma and wondering how much of it to order


P.S. A copy of this review is on the /r/tea subreddit and the discussion is there.

Process API for NoFlo components

Henri Bergius | 00:00, Thursday, 05 January 2017

It has been a while since I’ve written about flow-based programming — but now that I’m putting most of my time into Flowhub, things are moving really quickly.

One example is the new component API in NoFlo that has been emerging over the last year or so.

Most of the work described here was done by Vladimir Sibirov from The Grid team.

Introducing the Process API

NoFlo programs consist of graphs where different nodes are connected together. These nodes can themselves be graphs, or they can be components written in JavaScript.

A NoFlo component is simply a JavaScript module that provides a certain interface that allows NoFlo to run it. In the early days there was little convention on how to write components, but over time some conventions emerged, and with them helpers to build well-behaved components more easily.

Now with the upcoming NoFlo 0.8 release we’ve taken the best ideas from those helpers and rolled them back into the noflo.Component base class.

So, what does a component written using the Process API look like?

// Load the NoFlo interface
var noflo = require('noflo');
// Also load any other dependencies you have
var fs = require('fs');

// Implement the getComponent function that NoFlo's component loader
// uses to instantiate components to the program
exports.getComponent = function () {
  // Start by instantiating a component
  var c = new noflo.Component();

  // Provide some metadata, including icon for visual editors
  c.description = 'Reads a file from the filesystem';
  c.icon = 'file';

  // Declare the ports you want your component to have, including
  // their data types
  c.inPorts.add('in', {
    datatype: 'string'
  });
  c.outPorts.add('out', {
    datatype: 'string'
  });
  c.outPorts.add('error', {
    datatype: 'object'
  });

  // Implement the processing function that gets called when the
  // inport buffers have packets available
  c.process(function (input, output) {
    // Precondition: check that the "in" port has a data packet.
    // Not necessary for single-inport components but added here
    // for the sake of demonstration
    if (!input.hasData('in')) {
      return;
    }

    // Since the preconditions matched, we can read from the inport
    // buffer and start processing
    var filePath = input.getData('in');
    fs.readFile(filePath, 'utf-8', function (err, contents) {
      // In case of errors we can just pass the error to the "error"
      // outport
      if (err) {
        output.done(err);
        return;
      }

      // Send the file contents to the "out" port
      output.send({
        out: contents
      });
      // Tell NoFlo we've finished processing
      output.done();
    });
  });

  // Finally, return the component to the loader
  return c;
}

Most of this is still the same component API we’ve had for quite a while: instantiation, component metadata, port declarations. What is new is the process function and that is what we’ll focus on.

When is process called?

NoFlo components call their processing function whenever they’ve received packets to any of their regular inports.

In general any new information packets received by the component cause the process function to trigger. However, there are some exceptions:

  • Non-triggering ports don’t cause the function to be called
  • Ports that have been set to forward brackets don’t cause the function to be called on bracket IPs, only on data (see the sketch below)
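
To make these two exceptions more concrete, here is a minimal sketch of a component declaring a non-triggering port and forwarding brackets. It is not taken from a real component: the port names are invented, and the triggering inport option and the component-level forwardBrackets map are used as I understand them from the NoFlo documentation of this period.

// Minimal sketch (assumptions noted above), not a definitive implementation
var noflo = require('noflo');

exports.getComponent = function () {
  var c = new noflo.Component();
  // Packets arriving here call the processing function as usual
  c.inPorts.add('in', { datatype: 'string' });
  // Packets arriving here are buffered but do not trigger process() on their own
  c.inPorts.add('options', {
    datatype: 'object',
    triggering: false
  });
  c.outPorts.add('out', { datatype: 'string' });
  // Brackets received on "in" are forwarded to "out" without calling
  // process(); only the data packets in between trigger it
  c.forwardBrackets = {
    in: ['out']
  };
  c.process(function (input, output) {
    if (!input.hasData('in')) {
      return;
    }
    output.sendDone({
      out: input.getData('in')
    });
  });
  return c;
};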

Handling preconditions

When the processing function is called, the first job is to determine if the component has received enough data to act. These “firing rules” can be used for checking things like:

  • When having multiple inports, do all of them contain data packets?
  • If multiple input packets are to be processed together, are all of them available?
  • If receiving a stream of packets, is the complete stream available?
  • Any input synchronization needs in general

The NoFlo component input handler provides methods for checking the contents of the input buffer. Each of these returns a boolean indicating whether the conditions are matched:

  • input.has('portname') whether an input buffer contains packets of any type
  • input.hasData('portname') whether an input buffer contains data packets
  • input.hasStream('portname') whether an input buffer contains at least one complete stream of packets

For convenience, has and hasData can be used to check multiple ports at the same time. For example:

// Fail precondition check unless both inports have a data packet
if (!input.hasData('in1', 'in2')) return;

For more complex checking it is also possible to pass a validation function to the has method. This function will get called for each information packet in the port(s) buffer:

// We want to process only when color is green
var validator = function (packet) {
  if (packet.data.color === 'green') {
    return true;
  }
  return false;
}
// Run all packets in in1 and in2 through the validator to
// check that our firing conditions are met
if (!input.has('in1', 'in2', validator)) return;

The firing rules should be checked in the beginning of the processing function before we start actually reading packets from the buffer. At that stage you can simply finish the run with a return.

Processing packets

Once your preconditions have been met, it is time to read packets from the buffers and start doing work with them.

For reading packets there are equivalent get functions to the has functions used above:

  • input.get('portname') read the first packet from the port’s buffer
  • input.getData('portname') read the first data packet, discarding preceding bracket IPs if any
  • input.getStream('portname') read a whole stream of packets from the port’s buffer

For get and getStream you receive whole IP objects. For convenience, getData returns just the data payload of the data packet.
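
To make the difference concrete, here is a small, hypothetical sketch of a processing function that reads a complete stream from one port and a single data payload from another. The port names ("items", "config", "out") are made up for illustration.

c.process(function (input, output) {
  // Preconditions: a full stream on "items" and a data packet on "config"
  if (!input.hasStream('items') || !input.hasData('config')) {
    return;
  }
  // getStream returns the buffered stream as an array of IP objects,
  // including the surrounding bracket IPs
  var stream = input.getStream('items');
  // getData returns only the payload of the first data packet
  var config = input.getData('config');
  // Keep just the data packets and collect their payloads
  var items = [];
  stream.forEach(function (ip) {
    if (ip.type === 'data') {
      items.push(ip.data);
    }
  });
  output.sendDone({
    out: { config: config, items: items }
  });
});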

When you have read the packets you want to work with, the next step is to do whatever your component is supposed to do. Do some simple data processing, call some remote API function, or whatever. NoFlo doesn’t really care whether this is done synchronously or asynchronously.

Note: once you read packets from an inport, the component activates. After this it is necessary to finish the process by calling output.done() when you’re done.

Sending packets

While the component is active, it can send packets to any number of outports using the output.send method. This method accepts a map of port names and information packets.

output.send({
  out1: new noflo.IP('data', "some data"),
  out2: new noflo.IP('data', [1, 2, 3])
});

For data packets you can also just send the data as-is, and NoFlo will wrap it to an information packet.
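
So the earlier example could, presumably, also be written with plain values instead of explicitly constructed IP objects:

output.send({
  out1: 'some data',  // wrapped into a data IP by NoFlo
  out2: [1, 2, 3]
});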

Once you’ve finished processing, simply call output.done() to deactivate the component. There is also a convenience method that is a combination of send and done. This is useful for simple components:

c.process(function (input, output) {
  var data = input.getData('in');
  // We just add one to the number we received and send it out
  output.sendDone({
    out: data + 1
  });
});

In normal situations the packets are transmitted immediately. However, when working on individual packets that are part of a stream, NoFlo components keep an output buffer to ensure that packets from the stream are transmitted in the original order.

Component lifecycle

In addition to making input processing easier, the other big aspect of the Process API is to help formalize NoFlo’s component and program lifecycle.

NoFlo program lifecycle

The component lifecycle is quite similar to the program lifecycle shown above. There are three states:

  • Initialized: the component has been instantiated in a NoFlo graph
  • Activated: the component has read some data from inport buffers and is processing it
  • Deactivated: all processing has finished

Once all components in a NoFlo network have deactivated, the whole program is finished.

Components are only allowed to do work and send packets when they’re activated. They shouldn’t do any work before receiving input packets, and should not send anything after deactivating.

Generator components

Regular NoFlo components only send data associated with input packets they’ve received. One exception is generators, a class of components that can send packets whenever something happens.

Some examples of generators include:

  • Network servers that listen to requests
  • Components that wait for user input like mouse clicks or text entry
  • Timer loops

The same rules of “only send when activated” apply also to generators. However, they can utilize the processing context to self-activate as needed:

exports.getComponent = function () {
 var c = new noflo.Component();
 c.inPorts.add('start', { datatype: 'bang' });
 c.inPorts.add('stop', { datatype: 'bang' });
 c.outPorts.add('out', { datatype: 'bang' });
 // Generators generally want to send data immediately and
 // not buffer
 c.autoOrdering = false;

 // Helper function for clearing a running timer loop
 var cleanup = function () {
   // Clear the timer
   clearInterval(c.timer.interval);
   // Then deactivate the long-running context
   c.timer.deactivate();
   c.timer = null;
 }

 // Receive the context together with input and output
 c.process(function (input, output, context) {
   if (input.hasData('start')) {
     // We've received a packet to the "start" port
     // Stop the previous interval and deactivate it, if any
     if (c.timer) {
       cleanup();
     }
     // Activate the context by reading the packet
     input.getData('start');
     // Set the activated context to component so it can
     // be deactivated from the outside
     c.timer = context
     // Start generating packets
     c.timer.interval = setInterval(function () {
       // Send a packet
       output.send({
         out: true
       });
     }, 100);
     // Since we keep the generator running we don't
     // call done here
   }

   if (input.hasData('stop')) {
     // We've received a packet to the "stop" port
     input.getData('stop');
     if (!c.timer) {
       // No timers running, we can just finish here
       output.done();
       return;
     }
     // Stop the interval and deactivate
     cleanup();
     // Also call done for this one
     output.done();
   }
 });

 // We also may need to clear the timer at network shutdown
 c.shutdown = function () {
   if (c.timer) {
     // Stop the interval and deactivate
     cleanup();
   }
   c.emit('end');
   c.started = false;
 }
}

Time to prepare

NoFlo 0.7 included a preview version of the Process API. However, last week during the 33C3 conference we finished some tricky bits related to process lifecycle and automatic bracket forwarding that make it more useful for real-life NoFlo applications.

These improvements will land in NoFlo 0.8, due out soon.

So, if you’re maintaining a NoFlo application, now is a good time to give the git version a spin and look at porting your components to the new API. Make sure to report any issues you encounter!

We’re currently migrating all the hundred-plus NoFlo open source modules to the latest build and testing process so that they can be easily updated to the new APIs when they land.

Friday, 30 December 2016

My goal for the new year: more Good News

Wunderbar Emporium | 11:01, Friday, 30 December 2016

I have worked on many Free Software and Freedom related projects in the past years. From that I have learned that we tend to write news about things that go wrong and need to be fixed. That is of course very important, but we also need to learn to communicate our successes better.

Let’s come up with an example. In Switzerland we have been working hard on a case concerning the release of software developed by a public entity under a Free License. In this case the software is called ‘OpenJustitia’ and is developed by the Swiss Federal Supreme Court. We also came up with an update where we outlined the steps that have been taken to improve the situation.

This autumn the situation has finally been resolved, and local media (Berner Zeitung) even published a small (German) article on that.

Even more, there is now officially a legal right to publish software developed by government institutions under a Free License.

I am sorry that we have not informed you earlier and my personal goal for the next year is to come up with more Good News.
