Planet Fellowship (en)

Friday, 23 June 2017

Visiting ProgressBar HackerSpace in Bratislava

Evaggelos Balaskas - System Engineer | 11:34, Friday, 23 June 2017

When traveling, I make an effort to visit the local hackerspace. I understand that this is not normal behavior for many people, but for us (free/open source advocates) it is always a must.

This was my 4th week in Bratislava, and for the first time I had a couple of free hours to visit ProgressBar HackerSpace.

For now, they are located in the middle of the historical city, on the 2nd floor. The entrance is on a covered walkway (gallery) between two buildings. There is a bell to ring, and when members are already inside, the door opens automatically for any visitor. No need to wait or explain why you are there!

Entering ProgressBar there is no doubt that you are entering a hackerspace.


You can view a few photos by clicking here: ProgressBar - Photos

And you can find ProgressBar on OpenStreetMap

Some cool and notable projects:

  • bitcoin vending machine
  • robot arm to fetch clubmate
  • magic wood to switch on/off lights
  • blinkwall
  • Cool T-shirts

Their lab is full of almost anything you need to play/hack with.

I was really glad to make time and visit them.

On brokenness, the live installer and being nice to people

Elena ``of Valhalla'' | 14:57, Friday, 23 June 2017


This morning I've read this

I understand that somebody on the internet will always be trolling, but I just wanted to point out:

* that the installer in the old live images has been broken (for international users) for years
* that nobody cared enough to fix it, not even the people affected by it (the issue was reported as known in various forums, but for a long time nobody even opened an issue to let the *developers* know).

Compare this with the current situation, with people doing multiple tests as the (quite big number of) images were being built, and a fix released soon after for the issues found.

I'd say that this situation is great, and that instead of trolling around we should thank the people involved in this release for their great job.

Tuesday, 20 June 2017

No Place To Hide

Evaggelos Balaskas - System Engineer | 22:10, Tuesday, 20 June 2017


An Amazing Book!!!

Must Read !!

I’ve listened to the audiobook in something like two days.
Couldn’t put it down.

Then organize a CryptoParty at your local hackerspace

Tag(s): books

Third Week of GSoC

vanitasvitae's blog » englisch | 20:05, Tuesday, 20 June 2017

Another week has passed and the first evaluation phase is slowly approaching. While I have already fulfilled my goals (Jingle File Transfer using InBandBytestreams and SOCKS5Bytestreams), I still have a lot of work to do. The first working implementation I did is only so much: working. Barely. Now it’s time to learn from the mistakes I made while constructing my prototype and to find better ways to do things in the next iteration. This is what I was up to in the past week and what will keep me from my usual sleep cycle in the coming week(s).

I spent the past week doing groundwork and writing utility classes which will later allow me to send Jingle actions in a clean way. The prototype implementation had all construction of Jingle elements inside the control flow, which made the code very hard to read. This will change in the next iteration.

While I worked on my implementation(s), I discovered some errors in the XEPs involved and created pull requests against the xsf/xeps repository. In other spots I found some ambiguities, but unfortunately my questions in the XSF chat went unanswered. In some cases I found the solution myself, though.

Also I began upstreaming some changes and additions to the Smack repository. Parsers and elements of IBB have already been merged, as well as some more additions to the HashManager (XEP-0300) I created earlier, and some tests and fixes for the existing Jingle framework. Still open are my PR for SOCKS5 parsers and the first parts of the Jingle file transfer package.

I also dedicated a tiny little bit of my spare time to a non-GSoC project: a blog post on how to create an OMEMO-capable chat client using Smack in less than 200 lines of code. The source code of the example application can be found in the FSFE’s brand new git repository. Unfortunately I also found a small bug in my OMEMO code that I have to fix sometime in the next few weeks (nothing crucial, just some annoying faulty behavior).

I plan to spend the coming week working on my Jingle code, so that I have a mostly working framework when the evaluation phase begins.

That’s all for now. Happy Hacking :)

Monday, 19 June 2017

Two hackathons in a week: thoughts on NoFlo and MsgFlo

Henri Bergius | 00:00, Monday, 19 June 2017

Last week I participated in two hackathons, events where a group of strangers forms a team for two or three days and builds a product prototype. In the end all teams pitch their prototypes, and the best ones are awarded prizes.

Hackathons are typically organized to get feedback from developers on some new API or platform. Sometimes they’re also organized as a recruitment opportunity.

Apart from the free beer and camaraderie, I like going to hackathons since they’re a great way to battle-test the developer tools I build. The time from idea to running prototype is short, and people are used to different ways of working and different toolkits.

If our tools and flow-based programming work as intended, they should be ideal for these kinds of situations.

Minds + Machines hackathon and Electrocute

Minds + Machines hackathon was held on a boat and focused on decarbonizing power and manufacturing industries. The main platform to work with was Predix, GE’s PaaS service.

Team Electrocute

Our project was Electrocute, a machine learning system for forecasting power consumption in a changing climate.

1.5°C is the global warming target set by the Paris Agreement. How will this affect energy consumption? What kind of generator assets should utilities deploy to meet these targets? How much renewable energy can be utilized, and when?

The changing climate poses many questions to utilities. With Electrocute’s forecasting suite power companies can have accurate answers, on-demand.

Electrocute forecasts

The system was built with a NoFlo web API server talking over MsgFlo with a Python machine learning backend. We also built a frontend where users could see the energy usage forecasts on a heatmap.

NoFlo-Xpress in action

Unfortunately we didn’t win this one.

Recoding Aviation and Skillport

Recoding Aviation was held at hub:raum and focused on improving the air travel experience through usage of open APIs offered by the various participating airports.

Team Skillport

Skillport was our project to make long layovers more bearable by connecting people who’re stuck at the airport at the same time.

Long layovers suck. But there is ONE thing amazing about them: You are surrounded by highly skilled people with interesting stories from all over the world. It sometimes happens that you meet someone randomly - we all have a story like that. But usually we are too shy and lazy to communicate and see how we could create a valuable interaction. You never know if the other person feels the same.

We built a mobile app that turns airports into a networking, cultural exchange and knowledge sharing hub. Users tell each other through the app that they are available to meet and what value they can bring to an interaction.

The app connected with a J2EE API service that then communicated over MsgFlo with NoFlo microservices doing all the interactions with social and airport APIs. We also did some data enrichment in NoFlo to make smart recommendations on meeting venues.

MsgFlo in action

This time our project went well with the judges and we were selected as the winner of the Life in between airports challenge. I’m looking forward to the helicopter ride over Berlin!

Category winners

Skillport also won a space at hub:raum, so this might not be the last you’ll hear of the project…

Lessons learned

Benefits of a message queue architecture

I’ve written before on why to use message queues for microservices, but that post focused more on the benefits for real-life production usage.

The problems and tasks for a system architecture in a hackathon are different. Since time is short, you want to enable people to work in parallel as much as possible without stepping on each other’s toes. Since the people in the team come from different backgrounds, you want to enable a heterogeneous, polyglot architecture where each developer can use the tools they’re most productive with.

MsgFlo is by its nature very suitable for this. Components can be written in any language that supports the message queue used, and we have convenience libraries for many of them. The discovery mechanism makes new microservices appear on the Flowhub graph as soon as they start, enabling services to be wired together quickly.

Mock early, mock often

Mocks are a useful way to provide a microservice to the other team members even before the real implementation is ready.

For example, in the GE Predix hackathon we knew the machine learning team would need quite a bit of time to build their model. Until that point we ran their microservice as a simple msgflo-python component that just returned random() as the forecast.

This way everybody else was able to work with the real interface from the get-go. When the learning model was ready we just replaced that Python service, and everything was live.

Mocks can also be useful in situations where you have a misbehaving third-party API.

Don’t forget tests

While shooting for full test coverage is probably not realistic within the time constraints of a hackathon, it still makes sense to have at least some “happy path” tests. When you’re working with multiple developers, each building a different part of the service, interface tests serve a dual purpose:

  • They show the other team members how to use your service
  • They verify that your service actually does what it is supposed to

And if you’re using a continuous integration tool like Travis, the tests will help you catch any breakages quickly, and also ensure the services work on a clean installation.

For a message queue architecture, fbp-spec is a great tool for writing and running these interface tests.
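Even without fbp-spec, a “happy path” interface test can be very small: one function that asserts on the reply shape and doubles as documentation for the other team members. This is an illustrative Python sketch with invented field names, not a real Electrocute or fbp-spec schema:

```python
import json

def check_forecast_reply(raw: bytes) -> dict:
    """Happy-path interface check for a forecast reply: valid JSON,
    a numeric forecast, and an echo of the requested region."""
    reply = json.loads(raw)
    assert isinstance(reply.get("forecast_mwh"), (int, float)), \
        "reply must carry a numeric forecast"
    assert "region" in reply, "reply must echo the requested region"
    return reply

# The check doubles as documentation: this is how callers read a reply.
example = check_forecast_reply(b'{"region": "DE", "forecast_mwh": 0.42}')
```

Run against whatever the service puts on its out-queue, this catches interface drift between teammates long before a demo.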

Talk with the API providers

The reason API and platform providers organize these events is to get feedback. For a developer who works with tons of different APIs, this is a great opportunity to make sure your ideas for improvement are heard.

On the flip side, this usually also means the APIs are in a pretty early stage, and you may be the first one using them in a real-world project. When the inevitable bugs arise, it is good to have a channel of communication open with the API provider on site so you can get them resolved or worked around quickly.

Room for improvement

The downside of the NoFlo and MsgFlo stack is that there is still quite a bit of a learning curve. NoFlo documentation is now in a reasonable place, but with Flowhub and MsgFlo we have tons of work ahead on improving the onboarding experience.

Right now it is easy to work with if somebody sets it up properly first, but getting there is a bit tricky. Fixing this will be crucial for enabling others to benefit from these tools as well.

Friday, 16 June 2017

Travel piecepack v0.1

Elena ``of Valhalla'' | 16:06, Friday, 16 June 2017



A set of generic board game pieces is nice to have around in case of a sudden spontaneous need of gaming, but carrying my full set takes some room, and is not going to fit in my daily bag.

I've been thinking for a while that a half-size set could be useful, and between yesterday and today I've actually managed to make the first version.

It's (2d) printed on both sides of a single sheet of heavy paper, laminated and then cut; it comes with both the basic suits and the playing card expansion, and fits in a mint tin divided by origami boxes.

It's just version 0.1 because there are a few issues. First of all, I'm not happy with the manual way I drew the page: ideally it would have been generated programmatically from the same svg files as the 3d piecepack (with the ability to generate other expansions), but reading paths from one svg and writing them into another is apparently not supported in an easy way by the libraries I could find, and looking for one was starting to take much more time than just doing it by hand.

I also still have to assemble the dice; in the picture above I'm just using the ones from the 3d-printed set, but they are a bit too big and only four of them fit in the mint tin. I already have the faces printed, so this is going to be fixed in the next few days.

Source files are available in the same git repository as the 3d-printable piecepack, with the big limitation mentioned above; updates will also be pushed there, just don't hold your breath for it :)

Thursday, 15 June 2017

KDE Applications 17.08 Schedule finalized

TSDgeos' blog | 21:50, Thursday, 15 June 2017

It is available at the usual place

Dependency freeze is in 4 weeks and Feature Freeze in 5 weeks, so hurry up!

Wednesday, 14 June 2017

Tutorial: Home-made OMEMO client

vanitasvitae's blog » englisch | 22:19, Wednesday, 14 June 2017

The German interior ministers' conference recently decided that the best way to fight terrorism is to pass new laws that allow the government to demand access to communication in messengers like WhatsApp and co. Very important: messengers like WhatsApp. Will even free software developers one day see requests to change their messengers to allow government access to communications? And if it comes to that, how will we still be able to protect our communications?

The answer could be: build your own messenger. I want to demonstrate how simple it is to create a very basic messenger that allows you to send and receive end-to-end encrypted text messages via XMPP using Smack. We will use Smack's latest new feature, OMEMO support, to create a very simple XMPP-based command line chat application that uses state-of-the-art encryption. I assume that you all know what XMPP is; if not, read up on it on Wikipedia. Smack is a Java library that makes it easy to use XMPP in an application. OMEMO is basically the Signal protocol for XMPP.

So let's hop straight into it.
In my example, I import Smack as a Gradle dependency. That looks like this:

apply plugin: 'java'
apply plugin: 'idea'

repositories {
    maven {
        url ''   //URL of the Smack snapshot repository, omitted here
    }
}

ext {
    smackVersion = '4.2.1-SNAPSHOT'   //the Smack version providing the OMEMO API
}

dependencies {
    compile "org.igniterealtime.smack:smack-java7:$smackVersion"
    compile "org.igniterealtime.smack:smack-omemo-signal:$smackVersion"
    compile "org.igniterealtime.smack:smack-resolver-dnsjava:$smackVersion"
    compile "org.igniterealtime.smack:smack-tcp:$smackVersion"
}

//Pack dependencies into the jar
jar {
    from(configurations.compile.collect { it.isDirectory() ? it : zipTree(it) }) {
        exclude "META-INF/*.SF"
        exclude "META-INF/LICENSE"
    }
    manifest {
        attributes 'Main-Class': 'Messenger'
    }
}
Now we can write the main function of our client. We need to create a connection to a server and log in to go online. Let's assume that the user passes username and password as arguments to our main function. For the sake of simplicity, we won't catch any errors like a wrong number of parameters. We also want to be notified of incoming chat messages and to be able to send messages to others.

import java.util.Scanner;

import org.jivesoftware.smack.AbstractXMPPConnection;
import org.jivesoftware.smack.chat2.ChatManager;
import org.jivesoftware.smack.packet.Message;
import org.jivesoftware.smack.tcp.XMPPTCPConnection;
import org.jxmpp.jid.EntityBareJid;
import org.jxmpp.jid.impl.JidCreate;

public class Messenger {

    private AbstractXMPPConnection connection;
    private static Scanner scanner;

    public static void main(String[] args) throws Exception {
        String username = args[0];
        String password = args[1];
        Messenger messenger = new Messenger(username, password);

        scanner = new Scanner(System.in);
        while (true) {
            String input = scanner.nextLine();

            if (input.startsWith("/quit")) {
                break;
            }
            if (input.isEmpty()) {
                continue;
            }
            messenger.handleInput(input);
        }
    }

    public Messenger(String username, String password) throws Exception {
        connection = new XMPPTCPConnection(username, password);
        connection = connection.connect();
        connection.login();

        //Print incoming chat messages
        ChatManager.getInstanceFor(connection).addIncomingListener(
                (from, message, chat) -> System.out.println(from.asBareJid() + ": " + message.getBody())
        );

        System.out.println("Logged in");
    }

    private void handleInput(String input) throws Exception {
        String[] split = input.split(" ");
        String command = split[0];

        switch (command) {
            case "/say":
                if (split.length > 2) {
                    String recipient = split[1];
                    EntityBareJid recipientJid = JidCreate.entityBareFrom(recipient);

                    StringBuilder message = new StringBuilder();
                    for (int i = 2; i < split.length; i++) message.append(split[i]).append(' ');

                    connection.sendStanza(new Message(recipientJid, message.toString().trim()));
                }
                break;
        }
    }
}

If we now compile this code and execute it using the credentials of an existing account, we can already log in and start chatting with others using the /say command (e.g. /say Hi Bob!). But our communication is unencrypted right now (aside from TLS transport encryption). Let's change that next. We want to use OMEMO encryption to secure our messages, so we utilize Smack's new OmemoManager, which handles OMEMO encryption. For that purpose we need a new private variable to hold our OmemoManager. We also make some changes to the constructor.

private OmemoManager omemoManager;

public Messenger(String username, String password) throws Exception {
    connection = new XMPPTCPConnection(username, password);
    connection = connection.connect();
    connection.login();

    //additions begin here
    //path where keys get stored
    OmemoConfiguration.setFileBasedOmemoStoreDefaultPath(new File("path"));
    omemoManager = OmemoManager.getInstanceFor(connection);

    //Listener for incoming OMEMO messages
    omemoManager.addOmemoMessageListener(new OmemoMessageListener() {
        @Override
        public void onOmemoMessageReceived(String decryptedBody, Message encryptedMessage,
                        Message wrappingMessage, OmemoMessageInformation omemoInformation) {
            System.out.println("(O) " + encryptedMessage.getFrom() + ": " + decryptedBody);
        }

        @Override
        public void onOmemoKeyTransportReceived(CipherAndAuthTag cipherAndAuthTag, Message message,
                        Message wrappingMessage, OmemoMessageInformation omemoInformation) {
            //Not needed
        }
    });

    //Listener for plain chat messages, as before
    ChatManager.getInstanceFor(connection).addIncomingListener(
            (from, message, chat) -> System.out.println(from.asBareJid() + ": " + message.getBody())
    );
    //additions end here.
    System.out.println("Logged in");
}
We must also add two new commands that are needed to control OMEMO. /omemo is similar to /say, but encrypts the message via OMEMO. /trust is used to trust an identity. Before you can send a message, you have to decide whether you want to trust or distrust an identity. When you call the trust command, the client presents you with a fingerprint which you have to compare with your chat partner's. Only if the fingerprints match should you trust it. We add the following two cases to handleInput's switch statement:

case "/omemo":
    if (split.length > 2) {
        String recipient = split[1];
        EntityBareJid recipientJid = JidCreate.entityBareFrom(recipient);

        StringBuilder message = new StringBuilder();
        for (int i = 2; i < split.length; i++) message.append(split[i]).append(' ');

        Message encrypted = null;
        try {
            encrypted = OmemoManager.getInstanceFor(connection).encrypt(recipientJid, message.toString().trim());
        }
        //In case of undecided devices
        catch (UndecidedOmemoIdentityException e) {
            System.out.println("Undecided Identities: ");
            for (OmemoDevice device : e.getUntrustedDevices()) {
                System.out.println(device);
            }
        }
        //In case we cannot establish a session with some devices
        catch (CannotEstablishOmemoSessionException e) {
            encrypted = omemoManager.encryptForExistingSessions(e, message.toString().trim());
        }

        if (encrypted != null) {
            encrypted.setTo(recipientJid);
            connection.sendStanza(encrypted);
        }
    }
    break;

case "/trust":
    if (split.length == 2) {
        BareJid contact = JidCreate.bareFrom(split[1]);
        HashMap<OmemoDevice, OmemoFingerprint> fingerprints =
                omemoManager.getActiveFingerprints(contact);

        //Let the user decide for every device
        for (OmemoDevice d : fingerprints.keySet()) {
            System.out.println(d + ": " + fingerprints.get(d));
            System.out.println("Trust (1), or distrust (2)?");
            int decision = Integer.parseInt(scanner.nextLine());
            if (decision == 1) {
                omemoManager.trustOmemoIdentity(d, fingerprints.get(d));
            } else {
                omemoManager.distrustOmemoIdentity(d, fingerprints.get(d));
            }
        }
    }
    break;
Now we can trust our contacts' OMEMO identities using /trust and send them encrypted messages using /omemo Hi Bob!. When we receive OMEMO messages, they are indicated by a “(O)” in front of the sender.
If we want to go really fancy, we can let our messenger display whether received messages are encrypted with a trusted key. Unfortunately there is no convenience method for this available yet, so we have to resort to a small, dirty workaround. We modify the onOmemoMessageReceived method of the OmemoMessageListener like this:

public void onOmemoMessageReceived(String decryptedBody, Message encryptedMessage,
            Message wrappingMessage, OmemoMessageInformation omemoInformation) {
    //Get the identityKey of the sender
    IdentityKey senderKey = (IdentityKey) omemoInformation.getSenderIdentityKey().getIdentityKey();
    OmemoService<?,IdentityKey,?,?,?,?,?,?,?> service = (OmemoService<?,IdentityKey,?,?,?,?,?,?,?>) OmemoService.getInstance();

    //Get the fingerprint of the key
    OmemoFingerprint fingerprint = service.getOmemoStoreBackend().keyUtil().getFingerprint(senderKey);
    //Look up the trust status
    boolean trusted = omemoManager.isTrustedOmemoIdentity(omemoInformation.getSenderDevice(), fingerprint);

    System.out.println("(O) " + (trusted ? "T" : "D") + " " + encryptedMessage.getFrom() + ": " + decryptedBody);
}

Now when we receive a message from a trusted identity, there will be a “T” before the message; otherwise there is a “D”.
I hope I could give a brief introduction on how to use Smack's OMEMO support. You now have a basic chat client that is capable of exchanging multi-end-to-multi-end encrypted messages with other XMPP clients that support OMEMO. All of this took less than 200 lines of code! Now it's up to you to add additional features like support for message carbons, offline messages and so on. Spoiler: it's not hard at all :)
You can find the source code of this tutorial in the FSFE’s git repository.

When the government is unable or simply not willing to preserve your privacy, you’ll have to do it yourself.

Happy Hacking :)

Croissants, Qatar and a Food Computer Meetup in Zurich - fsfe | 19:53, Wednesday, 14 June 2017

In my last blog, I described the plan to hold a meeting in Zurich about the OpenAg Food Computer.

The Meetup page has been gathering momentum but we are still well within the capacity of the room and catering budget so if you are in Zurich, please join us.

Thanks to our supporters

The meeting now has sponsorship from three organizations, Project 21 at ETH, the Debian Project and Free Software Foundation of Europe.

Sponsorship funds help with travel expenses and refreshments.

Food is always in the news

In my previous blog, I referred to a number of food supply problems that have occurred recently. There have been more in the news this week: a potential croissant shortage in France due to the rising cost of butter and Qatar's efforts to air-lift 4,000 cows from the US and Australia, among other things, due to the Saudi Arabia embargo.

The food computer isn't an immediate solution to these problems but it appears to be a helpful step in the right direction.

Technoshamanism and Wasted Ontologies

agger's Free Software blog | 14:11, Wednesday, 14 June 2017

Interview with Fabiane M. Borges, published on May 21, 2017

By Bia Martins and Reynaldo Carvalho – translated by Carsten Agger

Fabiane M. Borges, writer and researcher


Also available in PDF format

In a state of permanent warfare and fierce disputes over visions of the future, technoshamanism emerges as a resistance and as an endeavour to influence contemporary thinking, technological production, scientific questions, and everyday practices. This is how the Brazilian Ph.D. in clinical psychology, researcher and essayist Fabiane M. Borges presents this international network of collaboration which unites academics, activists, indigenous people and many more people who are interested in a search for ideas and practices which go beyond the instrumental logic of capital. In this interview with Em Rede, she elaborates her reflections on technoshamanism as platform for producing knowledge and indicates some of the experiences that were made in this context.

At first, technology and shamanism seem like contradictory notions or at least difficult to combine. The first refers to the instrumental rationalism that underlies an unstoppable developmentalist project. The second makes you think of indigenous worldviews, healing rituals and altered states of consciousness. What is the result of this combination?

In a text that I wrote for the magazine Geni in 2015, I said this: that techno + shamanism has three quite evident meanings:

  1. The technology of shamanism (shamanism seen as a technology for the production of knowledge);
  2. The shamanism of technology (the pursuit of shamanic powers through the use of technology);
  3. The combination of these two fields of knowledge historically obstructed by the Church and later by science, especially in the transition from the Middle Ages to the Renaissance.

Each of these meanings unfolds into many others, but here is an attempt to discuss each one:

1) When we perceive shamanism not as tribal religions or as the beliefs of archaic people (as is still very common) but as a technology of knowledge production, we radically change the perception of its meaning. The studies of e.g. ayahuasca show that intensified states of consciousness produce a kind of experience which reshapes the state of the body, broadening the spectrum of sensation, affection, and perception. These “plants of power” are probably that which brings us closest to the “magical thinking” of native communities and consequently to the shamanic consciousness – that is, to that alternative ontology, as Eduardo Viveiros de Castro alerts us when he refers to the Amerindian ontology in his book Cannibal Metaphysics3, or Davi Kopenawa with his shamanic education with yakoana, as described in The Falling Sky4. It is obviously not only through plants of power that we can access this ontology, but they are a portal which draws us singularly near this way of seeing the world, life itself. Here, we should consider the hypotheses of Jeremy Narby in his The Cosmic Serpent: DNA and origins of knowledge where he explains that the indigenous knowledge of herbs, roots and medicine arises partly from dreams and from the effects of entheogens.

When I say that shamanism is a technology of knowledge production, it is because it has its own methods for constructing narratives, mythologies, medicine and healing as well as for collecting data and creating artifacts and modes of existence, among other things. So this is neither ancient history nor obsolete – it lives on, pervading our technological and mass media controlled societies and becoming gradually more appreciated, especially since the 1960s where ecological movements, contact with traditional communities and ways of life as well as with psychoactive substances all became popular, sometimes because of the struggles of these communities and sometimes because of an increased interest in mainstream society. A question arose: If we were to recuperate these wasted ontologies with the help of these surviving communities and of our own ruins of narratives and experiences, would we not be broadening the spectrum of technology itself to other issues and questions?

2) The shamanism of technology. It is said that such theories as parallel universes, string theory and quantum physics, among others, bring us closer to the shamanic ontology than to the theological/capitalist ontology which guides current technological production. But although this current technology is geared towards war, pervasive control and towards over-exploitation of human, terrestrial and extra-terrestrial resources, we still possess a speculative, curious and procedural technology which seeks to construct hypotheses and open interpretations which are not necessarily committed to the logic of capital (this is the meaning of the free software, DIY and open source movements in the late 20th and early 21st century).

We are very interested in this speculative technology, since in some ways it represents a link to the lost ancestral knowledge. This leads us directly to point 3) which is the conjunction of technology with shamanism. And here I am thinking of an archeology or anarcheology, since in the search for a historical connection between the two, many things may also be freely invented (hyperstition). As I have explained in other texts, such as the Seminal Thoughts for a Possible Technoshamanism or Ancestrofuturism – Free Cosmogony – Rituals DIY, there was a Catholic theological effort against these ancestral knowledges, a historical inhibition that became more evident during the transition from the Middle Ages to the Renaissance with its inquisitions, bonfires, prisons, torture and demands for retraction. The technology which was originally a part of popular tradition and needs passed through a purification, a monotheist Christian refinement, and adhered to these precepts in order to survive.

In his book La comunidad de los espectros5, Fabián Ludueña Romandini discusses this link between science and Catholicism, culminating in a science that was structurally oriented towards becoming God, hence its tendency to omnipresence, omnipotence and omniscience. Its link to capital is widely discussed by Silvia Federici in her book Caliban and the Witch6, who states that the massacre against witches, healers, sorcerers, heretics and all who did not conform to the precepts of the church was performed in order to clear the way for the introduction of industrial society and capitalism. So two things must be taken into account here: first, that there has been a violent decimation of ancestral knowledge throughout Europe and its colonial extensions and secondly, that the relationship between science/technology and the wasted ontologies was sundered in favor of a Christian theological metaphysics.

Faced with this, techno + shamanism is an articulation which tries to consider this historical trauma, these lost yet not annihilated leftovers, and to recover (and reinvent) points of connection between technology and wasted ontologies, which in our case we call shamanism since it represents something preceding the construction of the monotheisms and because it is more connected to the processes of planet Earth, at least according to the readings that interest us. But there are several other networks and groups that use similar terms and allow other readings such as techno + magic, cyber + spirituality, techno + animism and gnoise (gnosis + noise), among others, all talking about more or less the same issues.

The result of this mixture is improbable. It functions as a resistance, an awakening, an attempt to influence contemporary thinking, technological practices, scientific questions as well as everyday practices. These are tension vectors that drive a change in the modes of existence and of relation to the Earth and the Cosmos, applied to the point where people are currently, causing them to associate with other communities with similar aspirations or desiring to expand their knowledge. These changes are gradually taking shape, whether with clay or silicium technology. But the thing is crazy, the process is slow and the enemy is enormous. Given the current level of political contention that we are currently experiencing in Brazil, associations and partnerships with traditional communities, be they indigenous, afro-Brazilian, Roma, aboriginal or activist settlements (the MST7 and its mystique), seems to make perfect sense. It is a political renewal mixed with ancestorfuturist worldviews.

You’ve pointed out that conceptually technoshamanism functions as a utopian, dystopian and entropic network of collaboration. What does this mean in practice?

Fundamentally, we find ourselves in a state of constant war, a fierce dispute between different visions of the future, between social and political ontologies and between nature and technology. In this sense, technoshamanism manifests itself as yet another contemporary network which tries to analyze, position itself with respect to and intervene in this context. It is configured as a utopian network because it harbors visionary germs of liberty, autonomy, equality of gender, ethnicity, class and people and of balance between the environment and society that have hitherto characterized revolutionary movements. It is dystopian because at the same time it includes a nihilistic and depressive vision which sees no way out of capitalism, is disillusioned by neoliberalism and feels itself trapped by the project of total, global control launched by the world’s owners. It sees a nebulous future without freedom, with all of nature destroyed, more competition and poverty, privation and social oppression. And it is entropic because it inhabits this paradoxical set of forces and maintains an improbable noise – its perpetual noisecracy, its state of disorganization and insecurity is continuous and is constantly recombining itself. Its improbability is its dynamism. It is within this regime of utopia, dystopia and entropy that it promotes its ideas and practices, which are sometimes convergent and sometimes divergent.

In practice, this manifests itself in individual and collective projects, be they virtual or face-to-face and in the tendencies that are generated from these. Nobody is a network, people are in it from time to time according to necessities, desires, possibilities, etc.

This network’s meetings take place in different countries, mainly in South America and Europe. Can you give some examples of experiences and knowledge which were transferred between these territories?

Some examples: Tech people who come from European countries to the technoshamanism festivals and return doing permaculture and uniting with groups in their own countries in order to create collective rituals very close to the indigenous ones, or collective mobilization for construction, inspired by the indigenous mutirão. Installation of agroforestry in a basically extractivist indigenous territory, organized by foreigners or non-indigenous Brazilians working together with indigenous people. The implementation of an intranet system (peer-to-peer network) within indigenous territory (Baobáxia). Confluence of various types of healing practices in healing tents created during encounters and festivals, ranging from indigenous to oriental practices, from afro-Brazilian to electronic rituals, from Buddhist meditation to the herb bath of Brazilian healers, all of this creating generative spontaneous states where knowledge is exchanged and subsequently transferred to different places or countries. Indigenous and non-indigenous bioconstructors’ knowledge of adobe, converging in collective construction work on the MST’s squatted lands (this project is for the next steps). Artistic media practices, performance, live cinema, projection, music, and so on, that are passed on to groups that know nothing about them. In the end, technoshamanism is an immersive and experiential platform for exchanging knowledge. All of this is very much derived from the experiences of other networks and movements such as tactical media, digital liberty, homeless movements, submediology, metareciclagem, LGBTQ, Bricolabs, and many others. In the technoshamanism book, published in 2016, there are several practices that can serve as a reference.

Technoshamanism arose from networks linked to collaborative movements such as Free Software and Do It Yourself with the same demands for freedom and autonomy in relation to science and technology. To what extent has it proposed new interventions or new kinds of production in these fields? Can you give an example?

First, it is important to say that these movements of free software and DIY have changed. They have been mixed up with the neoliberal program, whether we’re talking about corporate software or about the makers, even though both movements remain active and are still spaces of invention. In the encounters and festivals we go as far as possible, considering our precarious nature and our lack of dedicated funding or support from economically stronger institutions; we rely mainly on the knowledge of the participants of the network, which comes into action in the places. I also know of cases where the festivals inspired the formation of groups of people who returned to their cities and continued to do work related to technological issues, whether in the countryside, in computer technology, or in art. Technoshamanism serves to inspire and perhaps empower projects that already function, but which technoshamanism endorses and excites.

I think that a fairly representative example is the agroforest, the Baobáxia system and the web radio Aratu that we implemented with the Pataxó in the Pará village. It is an exchange and simultaneously a resistance that points to the question of collaboration and autonomy, remembering that all the processes of this planet are interdependent and that autonomy is really a path, an ideal which only works pragmatically and to the extent that it’s possible to practice it. So we’re crawling in that direction. There are networks and processes that are much more advanced.

What we’d like to see is the Pataxó village Pará (home of the II International Festival of Technoshamanism), to take one example, with food autonomy and exuberant agroforests and wellsprings, with media and technological autonomy and very soon with autonomous energy. We’d like to see that not just for the Pataxó, but for all the groups in the network (at least). But that depends a lot on time, investment and financing, because these things may seem cheap, but they aren’t. We should remember that corporations, entrepreneurs and land-owners are concentrating their forces on these indigenous villages and encouraging projects that go totally against all of this, that is, applying pressure in order to take their land, incorporate them in the corporate productive system and turn them into low-paid workers, etc.

In May 2017 we met with the Terra Vista Settlement in Arataca (Bahia, Brazil). They invited the leaders of the Pataxó village to become part of the Web of Peoples8 which has this exact project of technological and alimentary autonomy and I see this as a kind of continuation of the proposals which were generated in community meetings in the Pará village during the preparations for the II International Festival of Technoshamanism. Everything depends on an insistent and frequent change in the more structural strata of desire. And when we understand that TV channels like the Globo network reach all these territories, we see the necessity of opening other channels of information and education.

Do you believe that insurgent knowledge and anti-hegemonic epistemologies should gradually take up more space in the universities or is it better for them to remain in the margin?

Fabiane M. Borges: In a conversation with Joelson, leader of the MST in the Terra Vista settlement, he gave the following hint, which was decisive for me: “Technoshamanism is neither the beginning nor the end, it is a medium.” His suggestion is that, as a medium, technoshamanism possesses a space of articulation which, rather than answering questions of genesis and purpose, functions as a space of interlocution, for making connections, uniting focal points, leveraging movements, expanding concepts and practices concerning itself and other movements – that is, it plays in the middle of the field and facilitates processes.

As yet another network in the “middle”, it negotiates sometimes within institutions and sometimes outside them, sometimes inside academia and sometimes outside it. Since it consists of people from the most diverse areas, it manifests itself in the day-to-day life of its members. Some work in academia, some in healing, others in a pizzeria. That is, the network is everywhere its participants are. I particularly like it when we do the festivals autonomously, deciding what to do and how to do it with the people who invite us, without having to do favours or anything in return for the institutions. But this is not to say that it will always be like that. In fact, the expenses of those who organize the meetings are large and unsustainable. Sometimes the network will be more independent, sometimes more dependent. What it cannot do is stagnate for lack of possibilities. Crowdfunding has been an interesting way out, but it’s not enough. It is sometimes necessary to form partnerships with organizations such as universities so the thing can continue moving in a more consistent and prolonged form, because it’s difficult to rely on people’s good will alone – projects stagnate because they lack the resources.


4 Davi Kopenawa and Bruce Albert, The Falling Sky, Belknap Press (2013).

5 Fabián Ludueña, La comunidad de los espectros: Antropotecnia, Mino y Davila (2010).

6 Silvia Federici, Caliban and the Witch: Women, the Body and Primitive Accumulation. Brooklyn, NY: Autonomedia (2004).

7 MST, the “landless workers’ movement”, is a social movement in Brazil that fights for workers’ access to land through demands for land reform and direct actions such as establishing settlements on occupied land.

GSoC – Second week of coding

vanitasvitae's blog » englisch | 10:06, Wednesday, 14 June 2017

The second week of GSoC is over! My Jingle implementation progresses.

Most of my efforts went into designing the state machine behind the Jingle and Jingle File Transfer protocol. Because I never really worked with asynchronous communication, let alone network code before, it takes some time to get my head around that.

I’m heavily utilizing the waterfall development model – I code until I get stuck at some point I did not consider at all, then I create a new class and start over again. This is very tedious, but I am making slow progress towards working Jingle Socks5 Bytestream transports!

All in all, I predict that it’ll take its time to fully complete the Jingle implementation so that it covers every corner case.

Introducing JET!

While working on my Jingle code, I also started writing down my plans for Jingle Encrypted Transfers (JET). My goal is to keep that specification as simple as possible while providing a reasonable way to exchange encrypted data. I decided that hiding metadata is not in the scope of this document for now, but it can later be specified in a separate document. Contributions and thoughts regarding encrypted Jingle file transfer are welcome :)

Happy Hacking!

Tuesday, 13 June 2017

Failures will occur, even with ansible and version control systems!

Evaggelos Balaskas - System Engineer | 19:57, Tuesday, 13 June 2017


Every SysAdmin, DevOp, SRE, Computer Engineer or even Developer knows that failures WILL occur. So you need to plan with that constant in mind. Failure causes can be present in hardware, power, the operating system, networking, memory or even bugs in software. We often call them system failures, but it is possible that a human can also be the cause of such a failure!

Listening to the stories on the latest episode of the Stack Overflow podcast, I felt compelled to share my own oh-shit moment in recent history.

I am writing this article so others can learn from this story, as I did in the process.

Rolling upgrades

I am a really big fan of rolling upgrades.

I am used to working with server farms. In a nutshell, that means a lot of servers connected to their own switch behind routers/load balancers. This architecture gives me a great opportunity when it is time to perform operations, like scheduling service updates during working hours.

e.g. Update software version 1.2.3 to 1.2.4 on serverfarm001

The procedure is really easy:

  • From the load balancers, stop any new incoming traffic to one of the servers.
  • Monitor all processes on the server and wait for them to terminate.
  • When all connections hit zero, stop the service you want to upgrade.
  • Perform the service update
  • Testing
  • Monitor logs and possible alarms
  • Be ready to rollback if necessary
  • Send some low traffic and try to test service with internal users
  • When everything is OK, tell the load balancers to send more traffic
  • Wait, monitor everything, test, be sure
  • Revert changes on the load balancers so that the specific server can take equal traffic/connection as the others.


This procedure is well established in such environments, and gives us the benefit of working with the whole team in working hours without the need of scheduling a maintenance window in the middle of the night, when low customer traffic is reaching us. During the day, if something is not going as planned, we can also reach other departments and work with them, figuring out what is happening.
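The steps above can be sketched as a small script. Note that the load balancer commands here are hypothetical placeholders (every load balancer has its own API or CLI), so this shows only the shape of the per-server loop, not a real implementation:

```shell
#!/bin/sh
# Rolling-upgrade sketch for one server at a time. The LB commands
# are illustrative stand-ins, not a real load balancer CLI.
set -e

drain()   { echo "LB: stop new traffic to $1"; }
restore() { echo "LB: restore full traffic to $1"; }

upgrade_one() {
  server="$1"
  drain "$server"
  echo "waiting for connections on $server to reach zero"
  echo "stopping, updating and testing the service on $server"
  echo "sending low traffic to $server for internal testing"
  restore "$server"
}

upgrade_one server01
```

The important property is that only one server leaves the pool at a time, so a bad update never takes the whole farm down.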

Configuration Management

We are using ansible as our main configuration management tool. Every file, playbook, role and task of ansible is under a version control system, so that we can review changes before applying them to production. Viewing diffs in a modern web tool can be a lifesaver these days.


We also use docker images or virtual machines as development machines, so that we can perform any new configuration, update or upgrade on those machines and test it there.

Ansible Inventory

To perform service updates with ansible on servers, we are using the ansible inventory to host some metadata (aka variables) for every machine in a serverfarm. Let me give you an example:

server01 version=1.2.3
server02 version=1.2.3
server03 version=1.2.3
server04 version=1.2.4

And we perform the update action via ansible limits:


~> ansible-playbook serverfarm001.yml -t update -C -D -l server04


When something is not going as planned, we revert the changes in ansible (version control) and re-push the previous configuration to the system. Remember, the system is not getting any traffic from the front-end routers.
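The per-host version variable can then drive the playbook tasks. A minimal sketch (the task and package names are illustrative, not the author’s actual playbook):

```yaml
# serverfarm001.yml -- hypothetical fragment; the "version" variable
# comes from the inventory shown above, per host.
- hosts: serverfarm001
  tasks:
    - name: Install the requested service version on this host
      package:
        name: "myservice-{{ version }}"
        state: present
      tags: update
```

Because each host carries its own version, running with `-l server04` touches only that machine while the rest of the farm stays on the old release.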

The Update

I was ready to do the update. Nagios was open, logs were tailed -f

and then:

~> ansible-playbook serverfarm001.yml -t update

The Mistake

I ran the ansible-playbook without limiting the server I wanted to run the update on!!!

So all new changes passed through all servers, at once!

On top of that, the new configuration broke the running software of the previous version. When the service restart notify occurred, every server simply stopped!!!

Funny thing: the updated machine, server04, worked perfectly, but no traffic was reaching it through the load balancers.

Activate Rollback

It was time to run the rollback procedure.

Reverting the changes in version control is easy. It took me a moment or so.
Running again:

~> ansible-playbook serverfarm001.yml

and …

Waiting for Nagios

In 2.5 minutes I had fixed the error and was waiting for Nagios to be green again.

Then … Nothing! Red alerts everywhere!

Oh-Shit Moment

It was time for me to inform everyone what I had done,
explaining to my colleagues and manager the mistake and trying to figure out what went wrong with the rollback procedure.


At this crucial moment, everything else worked like clockwork.

My colleagues took every action to:

  • inform helpdesk
  • look for errors
  • tail logs
  • monitor graphs
  • view Nagios
  • talk to other people
  • do the leg-work in general

and leaving me in peace, with the calm to figure out what went wrong.

I felt so proud to be part of the team at that specific moment.

If any of you are reading this article: truly, thank you to all the guys and gals.


I bypassed ansible and copied the correct configuration to all servers via ssh.
My colleagues kept telling me the good news while I was going through the ~xx servers one by one.
In 20 minutes everything was back to normal.
And finally Nagios was green again.

Blameless Post-Mortem

It was time for post-mortem and of course drafting the company’s incident report.

We already knew what happened and how, but nevertheless we needed to write everything down and try to keep a good timeline of all the steps.
This is not only for reporting but also for us. We need to figure out exactly what happened: do we need more monitoring tools?
Can we place any failsafes in our procedures? And why didn’t the rollback procedure work?

Fixing Rollback

I am writing this paragraph first, but to be honest with you, it took me some time to get to the bottom of this!

The rollback procedure was actually working as planned. I had made a mistake with the version control system.

What we have done is wrap ansible in another script, so that we can select the version control revision number at runtime.
This is actually pretty neat, because it gives us the ability to run ansible with previous versions of our configuration, without reverting the master branch.

The ansible wrapper asks for revision and by default we run it with [tip].

So the correct way to do rollbacks is:


~> ansible-playbook serverfarm001.yml -rev 238

At the time of the problem, I didn’t do that. I thought it was better to revert all changes and re-run ansible.
But ansible was running in default mode, with the tip revision!!
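A wrapper like the one described might look roughly like this. This is a sketch only: the author’s actual script isn’t shown, and the version-control and ansible calls are commented out so that only the revision handling is illustrated:

```shell
#!/bin/sh
# Hypothetical ansible wrapper: pick a version control revision at
# runtime, defaulting to tip -- exactly the default that bit the
# author during the rollback.
rev="${1:-tip}"
echo "running ansible with repository revision: $rev"
# hg update -r "$rev"                  # check out that revision first
# ansible-playbook serverfarm001.yml   # then run the playbook from it
```

Run with an explicit revision (e.g. `./wrapper.sh 238`) for a rollback; run with no argument and you get tip, which is the trap described above.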

Although I manage pretty well in panic mode, that day my brain was frozen!

Re-Design Ansible

I wrapped my head around it and tried to find a better solution for performing service updates. I needed to change something so that updates could run safely without the need of a limit in ansible.

The answer was obvious less than five minutes later:


I needed to keep a separate configuration folder per version and run my ansible playbooks with a variable instead of absolute paths.


- copy: src=files/serverfarm001/{{version}} dest=/etc/service/configuration

That will suffice next time (and actually did!). When the service upgrade is finished, we can simply remove the previous configuration folder without changing anything else in ansible.
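With the variable-based path in place, selecting (or rolling back) a version can be done from the command line via ansible’s `--extra-vars`/`-e` flag. A sketch, reusing the playbook and variable names from the example above:

```shell
# upgrade one server to the new configuration folder
~> ansible-playbook serverfarm001.yml -t update -l server04 -e version=1.2.4

# rollback: re-run pointing at the previous configuration folder
~> ansible-playbook serverfarm001.yml -t update -l server04 -e version=1.2.3
```

The old configuration folder stays on disk until the upgrade is declared done, so a rollback never depends on the state of the version control working copy.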

Ansible Groups

Another (more simplistic) approach is to create a new group in the ansible inventory,
like you do with your staging vs. production environments.


server01 version=1.2.3
server02 version=1.2.3
server03 version=1.2.3

server04 version=1.2.4

and create a new yml file

- hosts: serverfarm001_new

and run the ansible-playbook against the new serverfarm001_new group.


A lot of services nowadays have syntax check commands for their configuration.

You can use this validation process in ansible!

Here is an example from the ansible docs:

# Update sshd configuration safely, avoid locking yourself out
- template:
    src: etc/ssh/sshd_config.j2
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: '0600'
    validate: /usr/sbin/sshd -t -f %s
    backup: yes

or you can use registers like this:

  - name: Check named
    shell: /usr/sbin/named-checkconf -t /var/named/chroot
    register: named_checkconf
    changed_when: "named_checkconf.rc == 0"
    notify: anycast rndc reconfig


Everyone makes mistakes. I know I have had some oh-shit moments in my career for sure. Try to learn from these failures and educate others. Keep notes and write everything down in a wiki or in whatever documentation tool you are using internally. Always keep your calm. Do not hide any errors from your team or your manager. Be the first person to inform everyone. If the working environment doesn’t make you feel safe making mistakes, perhaps you should think about changing scenery. You will make a mistake, failures will occur. It is a well-known fact and you have to be ready when the time comes. Do a blameless post-mortem. The only way a team can get better is via responsibility, not blame. You need to perform disaster-recovery scenarios from time to time and test your backups. And always -ALWAYS- use a proper configuration management tool for all changes on your infrastructure.

post scriptum

After writing this draft, I had a talk with some friends about the cloud industry and how this experience applies to such environments. The quick answer is: it SHOULD NOT.

Working with the cloud means you are mostly using virtualization. Docker images or even virtual machines should be ephemeral. When it’s time to perform upgrades (system patching or software upgrades) you should create new virtual machines that will replace the old ones. There is no need to do it any other way. You can do a rolling replacement of the virtual machines (or docker images) without the need to stop the service on a machine, do the upgrade, test, and put it back. Those ephemeral machines should not have any data or logs in the first place. Cloud means that you can (auto)scale as needed without thinking about where the data are.

thanks for reading.

Tag(s): failures, ansible

Friday, 09 June 2017

My Debian Application (anno 1998)

free software - Bits of Freedom | 19:43, Friday, 09 June 2017

Digging around in some files, in a dark corner of my hard drive, I found my application as a Debian Developer, sent on the 18th of April 1998:


this is a request for a registration as an official Debian developer. I'm currently working with the Sparc port of Debian and expect to make sure that all Debian packages compiles and is released for Sparc. Attached below is my PGP key. I have read the Debian Social Contract and have also retrieved and looked at the Debian Policy Manual and the Debian Packaging Manual. I'd like to be subscribed to debian-private as, which is my primary mail address. My preferred login name on master is 'jonas', but 'oberg' will work aswell if the preferred login name is already taken. Also attached below is a /signed/ version of my PGP Public key by debian developer Björn Brenander.

Thank you,


Is this the end of decentralisation (Revisited)?

free software - Bits of Freedom | 12:50, Friday, 09 June 2017

In May last year, I wrote an article in which I argued that the need for interoperability would trump the need for centralisation over time. The background to the article was Signal, a secure messaging app built on a centralised platform, and the critique which it had to endure for its decision to stay centralised.

My bet, one year ago, was that by this time the world would have moved a little bit more towards a federated structure. Are we any closer to it? I would be hesitant to say so. The messaging protocols in use continue to be dominated by WhatsApp, Telegram, Facebook and Signal, none of which is showing signs of moving towards a federated structure yet.

What might be some indication of convergence is that WhatsApp and Facebook, as well as Google Allo, are using the Signal Protocol, even if in a non-federated way using their own servers. So out of the four most commonly used messaging platforms, three are using the same protocol, with only Telegram being the outlier.

That doesn't mean federation, but I'll give myself a yellow light for my prediction a year ago. We haven't moved as far as I would have hoped towards federation, but there's been some convergence in the market which may give some indication for what is to come.

Let's set a new reminder for June 2018!

Wednesday, 07 June 2017

Reprogramming the hackerspace: MsgFlo IoT workshops at c-base and Bitraf

Henri Bergius | 00:00, Wednesday, 07 June 2017

This July we’re organizing two hack weekends around MsgFlo and Flowhub:

Both of these will focus on reprogramming the Internet of Things setup of the hackerspace. The aim is to connect more devices at the spaces to the MsgFlo environment, and come up with new connections between systems, and ways to visualize them.

If you’re interested, feel welcome to join! Bring your own laptop. For the Bitraf event, please register on the Meetup page. For c-base, add yourself to the Facebook event.

c-base disco mode

Focus areas

  • MsgFlo IoT and new functionality — connecting internet services with MsgFlo, adding new smarts to the hackerspace IoT setup. Skills: Python, Node.js, Rust
  • Hardware hacking — connecting more devices with MsgFlo. Skills: microcontroller programming, electronics
  • Information displays — new infoscreen designs, data visualizations. Skills: web design, React.js, Django
  • Mobile app — bringing the hackerspace IoT functionality to mobile. Skills: Android
  • Woodworking — new cases, mounts, decorations for various systems. Skills: woodworking, painting

You don’t have to be an expert to participate. We’ll be there to help you get up to speed!

More information

c-flo station at c-base

Tuesday, 06 June 2017

Some Thoughts on Python-Like Languages

Paul Boddie's Free Software-related blog » English | 21:04, Tuesday, 06 June 2017

A few different things have happened recently that got me thinking about writing something about Python, its future, and Python-like languages. I don’t follow the different Python implementations as closely as I used to, but certain things did catch my attention over the last few months. But let us start with things closer to the present day.

I was neither at the North American PyCon event, nor at the invitation-only Python Language Summit that occurred as part of that gathering, but the proceedings have been reported to subscribers. One of the presentations of particular interest was covered under the title “Keeping Python competitive”, apparently discussing efforts to “make Python faster”, the challenges faced by different Python implementations, and the limitations imposed by the “canonical” CPython implementation that can frustrate performance improvement efforts.

Here is where this more recent coverage intersects with things I have noticed over the past few months. Every now and again, an attempt is made to speed Python up, sometimes building on the CPython code base and bolting on additional technology to boost performance, sometimes reimplementing the virtual machine whilst introducing similar performance-enhancing technology. When such projects emerge, especially when a large company is behind them in some way, expectations of a much faster Python are considerable.

Thus, when the Pyston reimplementation of Python became more widely known, undertaken by people working at Dropbox (who also happen to employ Python’s creator Guido van Rossum), people were understandably excited. Three years after that initial announcement, however, and those ambitious employees now have to continue that work on their own initiative. One might be reminded of an earlier project, Unladen Swallow, which also sought to perform just-in-time compilation of Python code, undertaken by people working at Google (who also happened to employ Python’s creator Guido van Rossum at the time), which was then abandoned as those people were needed to go and work on other things. Meanwhile, another apparently-broadly-similar project, Pyjion, is being undertaken by people working at Microsoft, albeit as a “side project at work”.

As things stand, perhaps the most dependable alternative implementation of Python, at least if you want one with a just-in-time compiler that is actively developed and supported for “production use”, appears to be PyPy. And this is only because of sustained investment of both time and funding over the past decade and a half into developing the technology and tracking changes in the Python language. Heroically, the developers even try and support both Python 2 and Python 3.

Motivations for Change

Of course, Google, Dropbox and Microsoft presumably have good reasons to try and get their Python code running faster and more efficiently. Certainly, the first two companies will be running plenty of Python to support their services; reducing the hardware demands of delivering those services is definitely a motivation for investigating Python implementation improvements. I guess that there’s enough Python being run at Microsoft to make it worth their while, too. But then again, none of these organisations appear to be resourcing these efforts at anything close to what would be marshalled for their actual products, and I imagine that even similar infrastructure projects originating from such companies (things like Go, for example) have had many more people assigned to them on a permanent basis.

And now, noting the existence of projects like Grumpy – a Python to Go translator – one has to wonder whether there isn’t some kind of strategy change afoot: that it now might be considered easier for the likes of Google to migrate gradually to Go and steadily reduce their dependency on Python than it is to remedy identified deficiencies with Python. Of course, the significant problem remains of translating Python code to Go and still have it interface with code written in C against Python’s extension interfaces, maintaining reliability and performance in the result.

Indeed, the matter of Python’s “C API”, used by extensions written in C for Python programs to use, is covered in the article. As people have sought to improve the performance of their software, they have been driven to rewrite parts of it in C, interfacing these performance-critical parts with the rest of their programs. Although such optimisation techniques make sense and have been a constant presence in software engineering more generally for many decades, it has almost become the path of least resistance when encountering performance difficulties in Python, even amongst the maintainers of the CPython implementation.

And so, alternative implementations need to either extract C-coded functionality and offer it in another form (maybe even written in Python, can you imagine?!), or they need to have a way of interfacing with it, one that could produce difficulties and impair their own efforts to deliver a robust and better-performing solution. Thus, attempts to mitigate CPython’s shortcomings have actually thwarted the efforts of other implementations to mitigate the shortcomings of Python as a whole.

Is “Python” Worth It?

You may well be wondering, if I didn’t manage to lose you already, whether all of these ambitious and brave efforts are really worth it. Might there be something with Python that just makes it too awkward to target with a revised and supposedly better implementation? Again, the article describes sentiments that simpler, Python-like languages might be worth considering, mentioning the Hack language in the context of PHP, although I might also suggest Crystal in the context of Ruby, even though the latter is possibly closer to various functional languages and maybe only bears syntactic similarities to Ruby (although I haven’t actually looked too closely).

One has to be careful with languages that look dynamic but are really rather strict in how types are assigned, propagated and checked. And, should one choose to accept static typing, even with type inference, it could be said that there are plenty of mature languages – OCaml, for instance – that are worth considering instead. As people have experimented with Python-like languages, others have been quick to criticise them for not being “Pythonic”, even if the code one writes is valid Python. But I accept that the challenge for such languages and their implementations is to offer a Python-like experience without frustrating the programmer too much about things which look valid but which are forbidden.

My tuning of a Python program to work with Shedskin needed to be informed about what Shedskin was likely to allow and to reject. As far as I am concerned, as long as this is not too restrictive, and as long as guidance is available, I don’t see a reason why such a Python-like language couldn’t be as valid as “proper” Python. Python itself has changed over the years, and the version I first used probably wouldn’t measure up to what today’s newcomers would accept as Python at all, but I don’t accept that the language I used back in 1995 was not Python: that would somehow be a denial of history and of my own experiences.

Could I actually use something closer to Python 1.4 (or even 1.3) now? Which parts of more recent versions would I miss? And which parts of such ancient Pythons might even be superfluous? In pursuing my interests in source code analysis, I decided to consider such questions in more detail, partly motivated by the need to keep the investigation simple, partly motivated by laziness (that something might be amenable to analysis but more effort than I considered worthwhile), and partly motivated by my own experiences developing Python-based solutions.

A Leaner Python

Usually, after a title like that, one might expect to read about how I made everything in Python statically typed, or that I decided to remove classes and exceptions from the language, or do something else that would seem fairly drastic and change the character of the language. But I rather like the way Python behaves in a fundamental sense, with its classes, inheritance, dynamic typing and indentation-based syntax.

Other languages inspired by Python have had a tendency to diverge noticeably from the general form of Python: Boo, Cobra, Delight, Genie and Nim introduce static typing and (arguably needlessly) change core syntactic constructs; Converge and Mython focus on meta-programming; MyPy is the basis of efforts to add type annotations and “optional static typing” to Python itself. Meanwhile, Serpentine is a project being developed by my brother, David, and is worth looking at if you want to write software for Android, have some familiarity with user interface frameworks like PyQt, and can accept the somewhat moderated type discipline imposed by the Android APIs and the Dalvik runtime environment.

In any case, having already made a few rounds trying to perform analysis on Python source code, I am more interested in keeping the foundations of Python intact and focusing on the less visible characteristics of programs: effectively reading between the lines of the source code by considering how it behaves during execution. Solutions like Shedskin take advantage of restrictions on programs to be able to make deductions about program behaviour. These deductions can be sufficient in helping us understand what a program might actually do when run, as well as helping the compiler make more robust or efficient programs.

And the right kind of restrictions might even help us avoid introducing more disruptive restrictions such as having to annotate all the types in a program in order to tell us similar things (which appears to be one of the main directions of Python in the current era, unfortunately). I would rather lose exotic functionality that I have never really been convinced by, than retain such functionality and then have to tell the compiler about things it would otherwise have a chance of figuring out for itself.

Rocking the Boat

Certainly, being confronted with any list of restrictions, despite the potential benefits, can seem like someone is taking all the toys away. And it can be difficult to deliver the benefits to make up for this loss of functionality, no matter how frivolous some of it is, especially if there are considerable expectations in terms of things like performance. Plenty of people writing alternative Python implementations can attest to that. But there are other reasons to consider a leaner, more minimal, Python-like language and accompanying implementation.

For me, one rather basic reason is merely to inform myself about program analysis, figure out how difficult it is, and hopefully produce a working solution. But beyond that is the need to be able to exercise some level of control over the tools I depend on. Python 2 will in time no longer be maintained by the Python core development community; a degree of agitation has existed for some time to replace it with Python 3 in Free Software operating system distributions. Yet I remain unconvinced about Python 3, particularly as it evolves towards a language that offers “optional” static typing that will inevitably become mandatory (despite assertions that it will always officially be optional) as everyone sprinkles their code with annotations and hopes for the magic fairies and pixies to come along and speed it up, that latter eventuality being somewhat less certain.

There are reasons to consider alternative histories for Python in the form of Python-like languages. People argue about whether Python 3’s Unicode support makes it as suitable for certain kinds of programs as Python 2 has been, with the Mercurial project being notable in its refusal to hurry along behind the Python 3 adoption bandwagon. Indeed, PyPy was devised as a platform for such investigations, being only somewhat impaired in some respects by its rather intensive interpreter generation process (but I imagine there are ways to mitigate this).

Making a language implementation that is adaptable is also important. I like the ability to be able to cross-compile programs, and my own work attempts to make this convenient. Meanwhile, cross-building CPython has been a struggle for many years, and I feel that it says rather a lot about Python core development priorities that even now, with the need to cross-build CPython if it is to be available on mobile platforms like Android, the lack of a coherent cross-building strategy has left those interested in doing this kind of thing maintaining their own extensive patch sets. (Serpentine gets around this problem, as well as the architectural limitations of dropping CPython on an Android-based device and trying to hook it up with the different Android application frameworks, by targeting the Dalvik runtime environment instead.)

No Need for Another Language?

I found it depressingly familiar when David announced his Android work on the Python mobile-sig mailing list and got the following response:

In case you weren't aware, you can just write Android apps and services
in Python, using Kivy.  No need to invent another language.

Fortunately, various other people were more open-minded about having a new toolchain to target Android. Personally, the kind of “just use …” rhetoric reminds me of the era when everyone writing Web applications in Python was exhorted to “just use Zope”, which was a complicated (but admittedly powerful and interesting) framework whose shortcomings were largely obscured and downplayed until enough people had experienced them and felt that progress had to be made by working around Zope altogether and developing other solutions instead. Such zero-sum games – that there is one favoured approach to be promoted, with all others to be terminated or hidden – perhaps inspired by an overly-parroted “only one way to do it” mantra in the Python scene, have been rather damaging to both the community and to the adoption of Python itself.

Not being Python, not supporting every aspect of Python, has traditionally been seen as a weakness when people have announced their own implementations of Python or of Python-like languages. People steer clear of impressive works like PyPy or Nuitka because they feel that these things might not deliver everything CPython does, exactly like CPython does. Which is pretty terrible if you consider the heroic effort that the developer of Nuitka puts in to make his software work as similarly to CPython as possible, even going as far as to support Python 2 and Python 3, just as the PyPy team do.

Solutions like MicroPython have got away with certain incompatibilities with the justification that the target environment is rather constrained. But I imagine that even that project’s custodians get asked whether it can run Django, or whatever the arbitrarily-set threshold for technological validity might be. Never mind whether you would really want to run Django on a microcontroller or even on a phone. And never mind whether large parts of the mountain of code propping up such supposedly essential solutions could actually do with an audit and, in some cases, benefit from being retired and rewritten.

I am not fond of change for change’s sake, but new opportunities often bring new priorities and challenges with them. What then if Python as people insist on it today, with all the extra features added over the years to satisfy various petitioners and trends, is actually the weakness itself? What if the Python-like languages can adapt to these changes, and by having to confront their incompatibilities with hastily-written code from the 1990s and code employing “because it’s there” programming techniques, they can adapt to the changing environment while delivering much of what people like about Python in the first place? What if Python itself cannot?

“Why don’t you go and use something else if you don’t like what Python is?” some might ask. Certainly, Free Software itself is far more important to me than any adherence to Python. But I can also choose to make that other language something that carries forward the things I like about Python, not something that looks and behaves completely differently. And in doing so, at least I might gain a deeper understanding of what matters to me in Python, even if others refuse the lessons and the opportunities such Python-like languages can provide.

Smack v4.2 Introduces OMEMO Support!

vanitasvitae's blog » englisch | 20:13, Tuesday, 06 June 2017

This blogpost doubles as a GSoC update, as well as a version release blog post.

OMEMO Clownfish logo.


I have the honour to announce the latest release of Smack! Besides bug fixes and additional features like Explicit Message Encryption (XEP-0380) and Message Processing Hints (XEP-0334), version 4.2 brings support for OMEMO Multi-End Message and Object encryption (XEP-0384). OMEMO was developed by Andreas Straub for the Conversations messenger (also as a Google Summer of Code project) in 2015. Since then it has become quite popular and drawn a lot of media attention to XMPP. My hope is that my effort to develop an easy-to-use Smack module will result in even broader adoption.

OMEMO is a protocol for multi-end to multi-end encrypted communication which utilizes the so-called Double Ratchet algorithm. Besides the basic requirements of encrypted communication (confidentiality, authenticity and integrity), it also provides deniability, forward secrecy and future secrecy. Smack’s implementation brings support for encrypted single and group chats, including identity management and session renegotiation.

Current implementations (as well as this one) are based upon the libsignal library developed by OpenWhisperSystems for their popular Signal (formerly TextSecure) messenger. Smack’s OMEMO support is structured in two modules. There is smack-omemo (APL licensed), which contains the logic specified in the XEP as well as some basic cryptographic code. The other module, smack-omemo-signal (GPLv3 licensed), implements some abstract methods defined by smack-omemo and encapsulates all function calls to libsignal.

Currently smack-omemo-signal is the only available module that implements the double ratchet functionality, but there has been a lot of discussion on the XMPP Standards Foundation’s mailing list regarding the use of alternative (more permissively licensed) libraries for OMEMO (for example Olm, a double ratchet implementation from our friends over at the [matrix] project). So once there is a new specification that enables the use of other libraries, it should be pretty easy to write another module for smack-omemo, enabling OMEMO support for clients that are not GPLv3-compatible as well.

Smack’s OMEMO modules are my first bigger contribution to a free software project and started as part of my bachelor’s thesis. I’m quite happy with the outcome :)

Smack Logo

Also Smack has a new Logo!

That was a lot of talking about OMEMO. Now comes the second function of this blog post, my GSoC update.

My project of implementing Jingle File Transfer (XEP-0234) for Smack is going relatively well. I’m stuck at some points where there are ambiguities in the XEP or things I don’t know yet, but most of the time I find another construction site where I can continue my work. Currently I’m implementing the stanza providers and elements needed for file transfer. Along the way I steadily create JUnit tests to keep the code coverage at a high level. This already pays off whenever there are fiddly changes in the element structure.

It’s a real pleasure to learn all the tools I never used before like code coverage reports or mocking and I think Flow does a good job introducing me to them one by one.

That’s all for now. Happy hacking :)

Monday, 05 June 2017

NoFlo: six years of JavaScript dataflow

Henri Bergius | 00:00, Monday, 05 June 2017

Quite a bit of time has passed since my two years of NoFlo post, and it is time to take another look at the state of the NoFlo ecosystem. To start with the basics, NoFlo is a JavaScript implementation of Flow-Based Programming:

In computer programming, flow-based programming (FBP) is a programming paradigm that defines applications as networks of “black box” processes, which exchange data across predefined connections by message passing, where the connections are specified externally to the processes. These black box processes can be reconnected endlessly to form different applications without having to be changed internally. FBP is thus naturally component-oriented.

With NoFlo software is built by creating graphs that contain reusable components and define the program logic by determining how these components talk to each other.

I started the NoFlo open source project six years ago in Mountain View, California. My aim was to improve the JavaScript programming experience by bringing the FBP paradigm to the ecosystem. At the time the focus was largely on web API servers and extract, transform, load (ETL) programs, but the scope has since expanded quite a bit:

NoFlo is not a web framework or a UI toolkit. It is a way to coordinate and reorganize data flow in any JavaScript application. As such, it can be used for whatever purpose JavaScript can be used for. We know of NoFlo being used for anything from building web servers and build tools, to coordinating events inside GUI applications, driving robots, or building Internet-connected art installations.


Four years ago I wrote how UI was the missing part of NoFlo. Later the same year we launched a Kickstarter campaign to fix this.

Our promise was to design a new way to build software & manage complexity - a visual development environment for all.


This was wildly successful, being at the time the 5th highest funded software crowdfunding campaign. The result — Flowhub — was released to the public in 2014. Big thanks to all of our backers!

Here is how Fast Company wrote about NoFlo:

If NoFlo succeeds, it could herald a new paradigm of web programming. Imagine a world where anyone can understand and build web applications, and developers can focus on programming efficient components to be put to work by this new class of application architects. In a way, this is the same promise as the “learn to code” movement, which wants to teach everyone to be a programmer. Just without the programming.

With Flowhub you can manage full NoFlo projects in your browser. This includes writing components in JavaScript or CoffeeScript, editing graphs and subgraphs, running and introspecting the software, and creating unit tests. You can keep your project in sync with the GitHub integration.

Live programming NoFlo in Flowhub

Celebrating six years of NoFlo

Earlier this year we incorporated Flowhub in Germany. Now, to celebrate six years of NoFlo we’re offering a perpetual 30% discount on Flowhub plans. To lock in the discount, subscribe to a Flowhub plan before June 12th 2017 using the code noflo6.


While NoFlo itself has by no means taken over the world yet, the overall ecosystem it is part of is looking very healthy. Sure, JavaScript fatigue is real, but at the same time it has gone through a pretty dramatic expansion.


As I wrote around the time I started NoFlo, JavaScript has indeed become a universal runtime. It is used on web browsers, server-side, as well as for building mobile and desktop applications. And with NoFlo you can target all those platforms with a single programming model and toolchain.

NPM, the de-facto standard for sharing JavaScript libraries, has become the most popular software repository for open source modules. Apart from the hundreds of thousands of other packages, you can also get prebuilt NoFlo components from NPM to cover almost any use case.


After a long period of semi-obscurity, our Kickstarter campaign greatly increased awareness of FBP and dataflow programming. Several open source projects expanded the reach of FBP to other platforms, like MicroFlo for microcontroller programming, or PhpFlo for data conversion pipelines in PHP. To support more of these with common tooling, we standardized the FBP protocol, which allows IDEs like Flowhub to manage flow-based programs across different runtimes.

Dataflow also saw uptake in the bigger industry. Facebook’s Flux architecture brought flow-based programming to reactive web applications. Google’s TensorFlow made dataflow the way to build machine learning applications. And Google’s Cloud Dataflow uses these techniques for stream processing.

Tooling for flow-based programming

One big area of focus for us has been improving the tooling around NoFlo, as well as the other FBP systems. The FBP protocol has been a big enabler for both building better tools, and for collaboration between different FBP and dataflow systems.

FBP protocol

Here are some of the tools currently available for NoFlo developers:

  • Flowhub — browser-based visual programming IDE for NoFlo and other flow-based systems
  • noflo-nodejs — command-line interface for running NoFlo programs on Node.js
  • noflo-browser-app — template for building browser applications in NoFlo
  • MsgFlo — for running NoFlo and other FBP runtimes as a distributed system
  • fbp-spec — data-driven tests for NoFlo and other FBP environments
  • flowtrace — tool for retroactive debugging of NoFlo programs. Supports visual replay with Flowhub

NoFlo 0.8

NoFlo 0.8, released in March this year, is probably our most important release so far. It introduced a new component API and greatly clarified the component and network lifecycle.

NoFlo 0.8 program lifecycle

With this release, it is easier than ever to build well-behaved NoFlo components and to deal with the mixture of asynchronous and synchronous data processing. It also brings NoFlo a lot closer to the classical FBP concepts.

As part of the release process, we also fully overhauled the NoFlo documentation and wrote a new data transformations tutorial project.

To find out more about NoFlo 0.8, watch my recent NoFlo talk from the Berlin Node.js meetup.

Road to 1.0

In addition to providing lots of new APIs and functionality, NoFlo 0.8 also acts as the transitional release as we head towards the big 1.0 release. In this version we marked many old APIs as deprecated.

NoFlo 1.0 will essentially be 0.8, but with all the deprecated APIs removed. If you haven’t done so already, now is a good time to upgrade your NoFlo projects to 0.8 and make sure everything runs without deprecation warnings.

We intend to release NoFlo 1.0 later this summer once more of our open source component libraries have been updated to utilize the new features.

Sunday, 04 June 2017

DNS Certification Authority Authorization

Evaggelos Balaskas - System Engineer | 14:39, Sunday, 04 June 2017


Reading RFC 6844 you will find the definition of “DNS Certification Authority Authorization (CAA) Resource Record”.

You can read everything here: RFC 6844

So, what is CAA anyhow?

Certificate Authority

In a nutshell, you are declaring which Certificate Authority is authorised to issue certificates for your domain.

It is another way to verify that the certificate your site presents is in fact signed by the issuer you have declared.

So let’s see what my certificate is showing:



Now, let’s find out what my DNS is telling us:

# dig caa 

;; ANSWER SECTION:        5938    IN  CAA 1 issue ""
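In zone-file syntax, a CAA record for a hypothetical domain that only allows Let’s Encrypt to issue certificates would look like this (example.com and the reporting address are placeholders):

```
example.com.  3600  IN  CAA  0 issue ""
example.com.  3600  IN  CAA  0 iodef ""
```

The issue property names the permitted CA, while the optional iodef property tells CAs where to report certificate requests that violate the policy.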


You can also use the Qualys SSL Server Test:


Tag(s): dns, CAA, letsencrypt

FSFE information booth at Linuxwochen Wien and Veganmania MQ

FSFE Fellowship Vienna » English | 11:18, Sunday, 04 June 2017

Conversations at the information booth
Ongoing in-depth consultations at the information booth

We organised an FSFE information booth at Linuxwochen Wien from 4 to 6 May and at Veganmania at the MQ in Vienna from 24 to 27 May. Like every year it went very well, and especially at Veganmania we could reach many people not yet familiar with free software. Since a Wikipedia event took place in Vienna at the same time as Veganmania, we even encountered people from all over the world, for example an FSF activist from Boston in the US.

We had re-stocked our leaflets with new versions of some well-received handouts from past events, and we put the Munich group’s new leaflets on free software programs for specific tasks to good use.

Even though we didn’t have many different volunteers, we managed to keep our information desk open to visitors with questions during the whole opening hours of both events. In some cases we were the last booth to close because consultations were still going on.

In addition, at Linuxwochen Wien a local volunteer hosted a well-attended three-hour workshop on image editing with GIMP, and another one on creating new maps for Trigger Rally.

The GIMP workshop in particular attracted many people, and there is clear demand for follow-ups, not only on GIMP but on other free design programs as well.

It is noticeable that more and more people are aware of free software and use it deliberately. Whether this slight and slow shift is related to our outreach work is uncertain, but it is certainly a welcome observation.

From our point of view, the most important reason why free software is still an exotic exception rather than the default is that it almost never comes pre-installed on new hardware – at least not on laptops or desktop machines. Many people understand this instantly once they are told about common business practices, where big corporations offer resellers better conditions if they ship their software exclusively on all products. This is almost never an advantage for customers, but profits are usually more important than customer satisfaction, and most people are simply unaware of the tight grip in which corporations hold them. Sticking with certain products is rarely about satisfaction. Most of the time it would just be too burdensome to try something else – and these obstacles are by design. Unfortunately, it is hard to convey what people are missing out on if they are not even prepared to try something different. Most people are not very happy with the situation, but because all their friends and colleagues share the same frustrations, they have the impression that there is no better alternative.

Maybe it would be a promising approach to make testimonials from non-technical people satisfied with free software more available to the public …

postfix TLS & ipv6

Evaggelos Balaskas - System Engineer | 11:15, Sunday, 04 June 2017


smtp Vs smtpd


  • postfix/smtp
    • The SMTP client, used for sending emails to the Internet (outgoing mail).
  • postfix/smtpd
    • The SMTP daemon, used for receiving emails from the Internet (incoming mail).


Encryption of mail transport is what we call opportunistic. If both parties (the sender’s outgoing mail server and the recipient’s incoming mail server) agree to exchange encryption keys, then a secure connection may be used. Otherwise a plain connection will be established. Plain as in non-encrypted, i.e. cleartext over the wire.

SMTP - Outgoing Traffic

In the beginning there were only three options in postfix:

  • none
  • may
  • encrypt

The default option on CentOS 6.x is none:

# postconf -d | grep smtp_tls_security_level
smtp_tls_security_level =

Nowadays, postfix supports more options, like:

  • dane
  • verify
  • secure

Here is the basic setup, to enable TLS on your outgoing mail server:

smtp_tls_security_level = may
smtp_tls_loglevel = 1

From postfix v2.6 onwards, you can disable weak encryption by selecting the cipher suites and protocols you prefer to use:

smtp_tls_ciphers = high
smtp_tls_protocols = !SSLv2, !SSLv3

You can also tell postfix where the file that holds all the root certificates on your Linux server is, so that it can verify the certificate an incoming mail server presents:

smtp_tls_CAfile = /etc/pki/tls/certs/ca-bundle.crt

I don’t recommend going higher than may with your setup, because (unfortunately) not everyone is using TLS on their incoming mail server!

SMTPD - Incoming Traffic

To enable TLS in your incoming mail server, you need to provide some encryption keys aka certificates!

I use letsencrypt on my server and the below notes are based on that.

Let’s Encrypt

A quick explanation of what exists in your letsencrypt folder:

# ls -1 /etc/letsencrypt/live/

privkey.pem    ===>  Your Private Key
cert.pem       ===>  Your Certificate
chain.pem      ===>  The Intermediate Certificate
fullchain.pem  ===>  Your Certificate plus the Intermediate


Below you can find the most basic configuration setup you need for your incoming mail server.

smtpd_tls_ask_ccert = yes
smtpd_tls_security_level = may
smtpd_tls_loglevel = 1

Your mail server asks for a certificate so that a trusted TLS connection can be established between the outgoing and incoming mail servers.
The servers must exchange certificates and, of course, verify them!

Now, it’s time to present your own domain certificate to the world. Offering only your public certificate cert.pem isn’t enough. You have to offer both your certificate and the intermediate certificate, so that the sender’s mail server can verify you by checking the digital signatures on those certificates.

smtpd_tls_cert_file = /etc/letsencrypt/live/
smtpd_tls_key_file = /etc/letsencrypt/live/

smtpd_tls_CAfile = /etc/pki/tls/certs/ca-bundle.crt
smtpd_tls_CApath = /etc/pki/tls/certs

CAfile & CApath help postfix to verify the sender’s certificate against your Linux distribution’s file that holds all the root certificates.

And you can also disable weak ciphers and protocols:

smtpd_tls_ciphers = high
smtpd_tls_exclude_ciphers = aNULL, MD5, EXPORT
smtpd_tls_protocols = !SSLv2, !SSLv3
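Putting the incoming-mail pieces from this post together, the smtpd-side TLS settings collected into one main.cf fragment would look like this ( is a hypothetical placeholder, and the CA paths are the CentOS/Red Hat defaults used above):

```
# Opportunistic TLS for incoming mail, with Let's Encrypt certificates
smtpd_tls_security_level  = may
smtpd_tls_ask_ccert       = yes
smtpd_tls_loglevel        = 1
smtpd_tls_cert_file       = /etc/letsencrypt/live/
smtpd_tls_key_file        = /etc/letsencrypt/live/
smtpd_tls_CAfile          = /etc/pki/tls/certs/ca-bundle.crt
smtpd_tls_CApath          = /etc/pki/tls/certs
smtpd_tls_ciphers         = high
smtpd_tls_exclude_ciphers = aNULL, MD5, EXPORT
smtpd_tls_protocols       = !SSLv2, !SSLv3
```

After editing main.cf, reload postfix (postfix reload) and watch your mail log for “Trusted TLS connection established” lines.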


Here is an example from gmail:

SMTPD - Incoming Mail from Gmail

You can see that there is a trusted TLS connection established from Google:

Jun  4 11:52:07 kvm postfix/smtpd[14150]:
        connect from[2607:f8b0:4003:c06::236]
Jun  4 11:52:08 kvm postfix/smtpd[14150]:
        Trusted TLS connection established from[2607:f8b0:4003:c06::236]:
        TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)
Jun  4 11:52:09 kvm postfix/smtpd[14150]:
Jun  4 11:52:10 kvm postfix/smtpd[14150]:
        disconnect from[2607:f8b0:4003:c06::236]

SMTP - Outgoing Mail from Gmail

And this is the response to Gmail:

Jun  4 12:01:32 kvm postfix/smtpd[14808]:
        initializing the server-side TLS engine
Jun  4 12:01:32 kvm postfix/smtpd[14808]:
        connect from[2a00:1838:20:1::XXXX:XXXX]
Jun  4 12:01:33 kvm postfix/smtpd[14808]:
        setting up TLS connection from[2a00:1838:20:1::XXXX:XXXX]
Jun  4 12:01:33 kvm postfix/smtpd[14808]:[2a00:1838:20:1::XXXX:XXXX]: TLS cipher list "aNULL:-aNULL:ALL:!EXPORT:!LOW:!MEDIUM:+RC4:@STRENGTH:!aNULL:!MD5:!EXPORT:!aNULL"
Jun  4 12:01:33 kvm postfix/smtpd[14808]:
        Anonymous TLS connection established from[2a00:1838:20:1::XXXX:XXXX]:
        TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)
Jun  4 12:01:35 kvm postfix/smtpd[14808]:
        disconnect from[2a00:1838:20:1::XXXX:XXXX]

As you can see, in both cases (sending and receiving) the mail servers have established a trusted, secure TLSv1.2 connection.
The preferred cipher in both scenarios is ECDHE-RSA-AES128-GCM-SHA256.


Tell postfix to prefer IPv6 over IPv4, and to use TLS if both mail servers support it!

smtp_address_preference = ipv6
Tag(s): postfix, tls, ipv6

Is doing nothing evil?

Free Software – Frank Karlitschek_ | 02:17, Sunday, 04 June 2017

Last weekend I attended the openSUSE conference in Nürnberg. This is a really nice conference. Awesome location, great people, and an overall very relaxed atmosphere. I gave a talk about Nextcloud Security and what we plan to do in the future to make hosting a Nextcloud instance even easier and more secure.

I attended the Saturday keynote, which triggered a reaction on my side that I wanted to share. This is only my personal opinion and I’m sure a lot of people think differently. But I can’t help myself.

The talk was about management of infrastructure and automation. It was a really good talk from a technical perspective. Very informative and detailed. But in the middle of the talk, the presenter mentioned that he was involved in building autonomous military submarines. This is of course controversial. I personally wouldn’t get involved in actual weapon development, building things whose sole purpose is to kill people. But I understand that people have different opinions here and I can live with such a disagreement.

However, a bit later the presenter mentioned that he also worked for the US intelligence community, building surveillance systems to spy on people on a mass scale. Global, mass-scale surveillance, which obviously involves all the people in the room. He presented this as some kind of joke, noting that he might have helped spy on the people in the room.

I’m sorry but I don’t think this is funny at all. The global surveillance systems are undemocratic, in a lot of cases illegal and an attack on the basic human rights of people.

I understand that playing and working with cool technology is fun. And working for secret services or the military offers plenty of opportunity to do so and earn money. But I think we as software developers have a responsibility here. We are building the technology of the future. So we as developers are partly in control of what the world will look like in a few years.

We can’t just say: I close my eyes because this is only a job. Or I don’t want to know how this technology is used. I didn’t ask and no one told me so I’m innocent and not involved. Let me quote a well known saying here: “All that is necessary for the triumph of evil is that good men do nothing.”

I really have a hard time accepting that some people think that building mass surveillance systems is somehow funny or cool. And it is even more troubling to say this to the faces of the people you helped put under surveillance and to think that it is funny.

Sorry for the rant. But technology matters. Developers matter. Software matters and can be used in good ways and in bad ways. We as developers and as a free software community have a responsibility and should not close our eyes.

Friday, 02 June 2017

Automating the software toolchain

free software - Bits of Freedom | 10:37, Friday, 02 June 2017

Automating the software toolchain

This month, the FSFE is starting a project to facilitate automation in the software toolchain. We’re looking for best practices in free and open source software development that facilitate use and increase the level of automation possible.

When you create a new project on Github, Gitlab, Gitea or many other websites for software development, you're asked for a default license file to be included in the software repository. These files end up looking rather similar, in many cases to the point of being identical, which is a good thing. When things look similar, it's easier for a computer to understand them and find commonalities between them.

For instance, if I write a piece of software which knows what an MIT license looks like, it can likely tell me which repositories on GitHub have an MIT license, without me needing to read each repository's license text manually.

Most software also includes some information about the author, either in a copyright header, in a separate file, or in version control metadata. It becomes a bit more difficult for a computer to tell who the authors are, since the information is potentially spread out, and there's often no direct link between the authorship and the actual code authored.

The same is true for a lot of licenses too: not every license is conveyed in a top-level LICENSE file. Some license texts are in the copyright header of a file, or in a separate subdirectory. Some contain the full license text, others just references to the full text elsewhere. Some are explicit about the code each license covers; others don't say anything about it, or feel it's implied.

All of this makes it more difficult for a computer to understand the licensing and authorship of software code. Which is unfortunate.

As free and open source software becomes entrenched as the foundation and default for all new software, we come to rely more and more on automation. I recently wrote about how the FSFE has automated our use of Let's Encrypt certificates for new services, but it actually goes well beyond that. With little more than the click of a button, I can pull down a few hundred software projects, compile and install them, and start up a web service.

But there are some inherent risks in this: what if the license of one of the projects changes, to one which I'm no longer allowed to use in the way I do? My automated scripts won't warn me about this; they'll carry on as they always have, assuming the project still builds.

And if I look at the top level LICENSE file, and tell my script to make sure it doesn't change, what if there's another license introduced for parts of the project which my script wouldn't know anything about? Perhaps some of the licenses also ask me to name the software and author in the service. A reasonable ask, but I might never know.

There's surely a legal risk here, but more importantly, there's a social risk. The licenses are just mechanisms for conveying expected behavior in our community. It's the behavior which is important; not the license. Still, having my computer be able to say something about the license of a software would certainly help. It would make it possible to create automated tools that actually warned if a license changes, that could create a list of licenses and software used, that could help me give credit to those whose software I rely on.

Over the next months, the FSFE will look at some of the best practices for conveying provenance information in software, in a way that computers can understand it. We will work to collate these practices and work with our community to understand what's most important and relevant.

And, we will make this information available, so software developers can benefit from it, and help us gradually increase the ease with which software can be used. As of today, we're starting to fill out information about the sources of best practices we can find on our WeKan board. You're welcome to look at what we have, and get in touch with me if you have additions to the board or would like to work with us on this!

Thursday, 01 June 2017

Free Software at Church Days

Matthias Kirschner's Web log - fsfe | 00:10, Thursday, 01 June 2017

"Omnis enim res, quae dando non deficit, dum habetur et non datur, nondum habetur, quomodo habenda est."

("For if a thing is not diminished by being shared with others, it is not rightly owned if it is only owned and not shared.")

Saint Augustinus, 397 AD

Conference place for the Churchday in Magdeburg

This Latin quote was written on the FSFE's first t-shirt (see the last picture in this post about Reinhard Müller's work for the FSFE), and was quoted by the moderator Ralf Peter Reimann at a panel discussion during the "Church Days" in Magdeburg. Ralf Peter Reimann is responsible for internet and digital topics at the Protestant Church in the German Rheinland and organised this panel. He guided Harald Geywitz (Telefonica Germany) and myself through a discussion about "Free data, free sharing, being free". The recording of the discussion is now online (in German):

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="" width="560"></iframe>

Afterwards we had an interesting discussion with the audience, and I would like to thank everybody who was involved there. Self-critically, I have to say that we probably went too fast into the details, so it was difficult for some people in the audience to follow. Definitely something to improve in future discussions.

Furthermore, the video is now also listed, amongst others, on the FSFE's video wiki. Feel free to browse it, watch and share the videos, give feedback about them, and add videos with the FSFE's participation which are not mentioned there.

Tuesday, 30 May 2017

Recording for "Limux the loss of a lighthouse"

Matthias Kirschner's Web log - fsfe | 23:52, Tuesday, 30 May 2017

On 26 May I had the honour to give the keynote at the openSUSE conference. They asked me to talk about the Limux project in Munich. This talk was a special one for me, as in 1999 SUSE 6.0 was my first GNU/Linux distribution and therefore also my start into the Free Software movement. Below you will find the abstract and the recordings of the talk.

Entrance to the venue of the openSUSE conference 2017

Started in 200X, Limux was often cited as the lighthouse project for Free Software in public administration. Since then we have regularly heard rumours about it. Have they now switched back to proprietary software again, or not? Didn't they already migrate back last year? Is it a trend that public administrations aren't using Free Software anymore? Have we failed, and is it time to get depressed and stop what we are doing? Do we need new strategies? Those are questions people in our community are confronted with.

We will shed some light on those questions, raise some more, and figure out what we -- as the Free Software community -- can learn from it.

You can either watch or download the video on the CCC's media server or on youtube:

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="" width="560"></iframe>

As I mention in the talk, I would be very interested in discussions what we can learn from the Limux project and how we – as the Free Software movement – can improve. If you have comments, please feel free to start the discussion on the FSFE's general English speaking discussion list.

Sunday, 28 May 2017

No Time To Waste!

Blog – Think. Innovation. | 10:15, Sunday, 28 May 2017

How are we going to feed 9+ billion people by 2050? Last week this question was at the heart of the Thought For Food Summit in Amsterdam and I had the honour and pleasure to be giving a Clinic on Opening Business Models and being invited to co-judge on the special Open Business Model Prize in the TFF Challenge Finals.

In a thrilling finals award ceremony team WasteBuster won the special Open Business Model Prize. Michael Kock, head of IP of main sponsor Syngenta, awarded the prize mainly for the potential the WasteBuster concept has for opening up their business model and open licensing their technology.

As the team put forward in their Finals pitch, they want to share the relatively simple and straightforward process of creating high-quality food powder from fruit and vegetable waste under a Creative Commons license, but they left out exactly how they want to do this and what they hope to accomplish with it.

In this post I will be sharing my ideas on what WasteBuster could do with open licensing and opening up their business model in order to get a bigger impact faster. Because with mountains of perfectly good fruit and vegetables being thrown away every day, there is no time to waste!

Note: this is an initial thought experiment; I got only a glimpse of what team WasteBuster is up to by talking to them a few minutes and hearing their 5-minute pitch.

The problem that WasteBuster is solving, is this:

“Roughly one third of the food produced in the world for human consumption every year — approximately 1.3 billion tonnes — gets lost or wasted.”

Of this food 40% to 50% is root crops, fruits and vegetables, which is 585 million tons a year, equivalent to 1.6 million tons per day, or 18.5 tons every second (that is 18,500 kilos of wasted fruit, vegetables and roots every second).

One of the main reasons that this food is wasted is that there is not enough time for it to reach the people who need it: it goes bad in a couple of days.

What team WasteBuster did is find a way to increase the shelf life of this food from 2 days to 2 years by steaming, drying and pulverizing it using a relatively easy process and machinery. The food powder is then put into portion size bags and labeled and can be sold.

Given this great and proven technology, how do we make sure that it is available to anyone who can benefit from it in the shortest possible time, in a way that is beneficial to all stakeholders involved?

I believe a traditional centralized closed operation where one company is in control of designing, developing, manufacturing and distributing/selling the product is not going to lead to the optimal outcome. This process will take too long, marketing will be an enormous expense and manufacturing, finding resellers and shipping will all make the product overly expensive.

The open alternative could be to make all documentation, descriptions, tutorials, building plans and designs available under a Creative Commons Attribution-ShareAlike license (CC BY-SA). And the product should be designed in such a way that it can be built relatively easily and reliably using locally sourced materials.

The process of prototyping, testing, improving the design could also be done in an open way, initiating and building a community of people solving the same problem. The driving force of this community will be the WasteBuster team, tirelessly improving the process and machinery and communicating with, involving and interacting with the community in every step.

Local food-waste problem-solvers are then empowered to build their own mini-factory and create a viable system of stakeholders around their practice: farmers delivering the food, local companies supplying machine parts and usables (the bags, labels etc.) and local resellers buying the food powder. These problem-solvers can be supported by the community and are encouraged to share their ideas, findings and improvements on-line.

Of course WasteBuster should also be able to earn revenue to keep investing in improving the process and machinery, disseminating the knowledge and supporting the community (as far as it does not support itself). They can do this in the following ways. If a specialized piece of equipment or part is necessary, they can create and sell that directly online, through partner channels and/or local resellers. If special training is necessary to build, maintain, repair and/or operate the machinery, they can provide online and/or local courses, or train-the-trainer programs with certifications, possibly under a franchise model.

The training materials do not need to be under a CC license, but they could be (I am not sure yet about this one). WasteBuster can charge for licensing and certifications using their brand, which should then be protected under a trademark. The brand then stands for guaranteed quality as it comes from the highly-trusted initiators and WasteBuster can also then give access to their channel network for added value. This is partly how Arduino has become successful using a trademarking model with their Open Source Technology.

WasteBuster can also sell additional items that are needed by local enterprises under their own brand, even the complete machinery, assembled or as a kit, for problem-solvers who do not have the time to construct it, but do have the money to buy it. If WasteBuster is successful, it is likely that ‘clone’ products will come on the market, but this is fine: this is actually the ‘bigger faster impact’ at work and is how amongst others Arduino has become so successful in such a short time.

An open model like this, with the fundamental technology openly licensed for the benefit of everyone, the value-added materials not openly licensed, and the brand protected with a trademark, could scale up really fast, making the desired impact in just a few years, without having to deal with the traditional linear growth problems of production, marketing, sales and distribution, while still creating a profitable, sustainable, healthy company.

The challenge is to make an open system where all stakeholders can benefit: fruit and vegetable growers, waste collectors, waste processors, suppliers of equipment, distributors and buyers of the food powder and last but not least: the WasteBuster co-founders and company.

What do you think? Let’s open up the conversation and support the WasteBuster team in their idea process!

– Diderik


Friday, 26 May 2017

Last week of GSoC Community Bonding

vanitasvitae's blog » englisch | 19:12, Friday, 26 May 2017

This is my report for the last week of community bonding. Next Tuesday the coding phase officially begins \o/.

I spent my week like I did the one before: writing tests and increasing the code coverage of my Smack OMEMO module. Today the coverage finally reached the same level as the main codebase, meaning my PR wouldn’t decrease the percentage anymore. Apart from that, I’ve read some tips on java.nio usage for file transfer and made myself more familiar with its non-blocking IO concepts.

Throughout the week I took a chance here and there to distract myself from work by participating in OMEMO-related discussions on the standards mailing list. I’m quite happy that there’s a lively discussion on the topic, which seems to have a solution in prospect that is acceptable for everyone. Specifying the OMEMO XEP in a way that enables implementations using different crypto libraries is definitely a huge step forward, which might bring broader adoption without leaving those who pioneered and developed the standard standing in the rain (all my subjective opinion). I was really surprised to see developers of the Matrix project participating in the discussion. That reminded me of what the spirit of FLOSS really is :)

I plan to spend the last days before the coding phase sketching out my project’s structure and relaxing a little before the hard work begins. One of my goals is to plan ahead, and I really hope to fulfill this goal.

Happy Hacking :)


Monday, 22 May 2017

VGA Signal Generation with the PIC32

Paul Boddie's Free Software-related blog » English | 15:08, Monday, 22 May 2017

It all started after I had designed – and received from fabrication – a circuit board for prototyping cartridges for the Acorn Electron microcomputer. Although some prototyping had already taken place with an existing cartridge, with pins intended for ROM components being routed to drive other things, this board effectively “breaks out” all connections available to a cartridge that has been inserted into the computer’s Plus 1 expansion unit.

Acorn Electron cartridge breakout board

The Acorn Electron cartridge breakout board being used to drive an external circuit

One thing led to another, and soon my brother, David, was interfacing a microcontroller to the Electron in order to act as a peripheral being driven directly by the system’s bus signals. His approach involved having a program that would run and continuously scan the signals for read and write conditions and then interpret the address signals, sending and receiving data on the bus when appropriate.

Having acquired some PIC32 devices out of curiosity, with the idea of potentially interfacing them with the Electron, I finally took the trouble of looking at the datasheet to see whether some of the hard work done by David’s program might be handled by the peripheral hardware in the PIC32. The presence of something called “Parallel Master Port” was particularly interesting.

Operating this function in the somewhat insensitively-named “slave” mode, the device would be able to act like a memory device, with the signalling required by read and write operations mostly being dealt with by the hardware. Software running on the PIC32 would be able to read and write data through this port and be able to get notifications about new data while getting on with doing other things.

So began my journey into PIC32 experimentation, but this article isn’t about any of that, mostly because I put that particular investigation to one side after a degree of experience gave me perhaps a bit too much confidence, and I ended up being distracted by something far more glamorous: generating a video signal using the PIC32!

The Precedents’ Hall of Fame

There are plenty of people who have written up their experiments generating VGA and other video signals with microcontrollers. Here are some interesting examples:

And there are presumably many more pages on the Web with details of people sending pixel data over a cable to a display of some sort, often trying to squeeze every last cycle out of their microcontroller’s instruction processing unit. But, given an awareness of how microcontrollers should be able to take the burden off the programs running on them, employing peripheral hardware to do the grunt work of switching pins on and off at certain frequencies, maybe it would be useful to find examples of projects where such advantages of microcontrollers had been brought to bear on the problem.

In fact, I was already aware of the Maximite “single chip computer” partly through having seen the cloned version of the original being sold by Olimex – something rather resented by the developer of the Maximite for reasons largely rooted in an unfortunate misunderstanding of Free Software licensing on his part – and I was aware that this computer could generate a VGA signal. Indeed, the method used to achieve this had apparently been written up in a textbook for the PIC32 platform, albeit generating a composite video signal using one of the on-chip SPI peripherals. The Colour Maximite uses three SPI channels to generate one red, one green, and one blue channel of colour information, thus supporting eight-colour graphical output.

But I had been made aware of the Parallel Master Port (PMP) and its “master” mode, used to drive LCD panels with eight bits of colour information per pixel (or, using devices with many more pins than those I had acquired, with sixteen bits of colour information per pixel). Surely it would be possible to generate 256-colour graphical output at the very least?

Information from people trying to use PMP for this purpose was thin on the ground. Indeed, reading again one article that mentioned an abandoned attempt to get PMP working in this way, using the peripheral to emit pixel data for display on a screen instead of a panel, I now see that it actually mentions an essential component of the solution that I finally arrived at. But the author had unfortunately moved away from that successful component in an attempt to get the data to the display at a rate regarded as satisfactory.

Direct Savings

It is one thing to have the means to output data to be sent over a cable to a display. It is another to actually send the data efficiently from the microcontroller. Having contemplated such issues in the past, it was not a surprise that the Maximite and other video-generating solutions use direct memory access (DMA) to get the hardware, as opposed to programs, to read through memory and to write its contents to a destination, which in most cases seemed to be the memory address holding output data to be emitted via a data pin using the SPI mechanism.

I had also envisaged using DMA and was still fixated on using PMP to emit the different data bits to the output circuit producing the analogue signals for the display. Indeed, Microchip promotes the PMP and DMA combination as a way of doing “low-cost controllerless graphics solutions” involving LCD panels, so I felt that there surely couldn’t be much difference between that and getting an image on my monitor via a few resistors on the breadboard.

And so, a tour of different PIC32 features began, trying to understand the DMA documentation, the PMP documentation, all the while trying to get a grasp of what the VGA signal actually looks like, the timing constraints of the various synchronisation pulses, and battle various aspects of the MIPS architecture and the PIC32 implementation of it, constantly refining my own perceptions and understanding and learning perhaps too often that there may have been things I didn’t know quite enough about before trying them out!

Using VGA to Build a Picture

Before we really start to look at a VGA signal, let us first look at how a picture is generated by the signal on a screen:

VGA Picture Structure

The structure of a display image or picture produced by a VGA signal

The most important detail at this point is the central area of the diagram, filled with horizontal lines representing the colour information that builds up a picture on the display, with the actual limits of the screen being represented here by the bold rectangle outline. But it is also important to recognise that even though there are a number of visible “display lines” within which the colour information appears, the entire “frame” sent to the display actually contains yet more lines, even though they will not be used to produce an image.

Above and below – really before and after – the visible display lines are the vertical back and front porches whose lines are blank because they do not appear on the screen or are used to provide a border at the top and bottom of the screen. Such extra lines contribute to the total frame period and to the total number of lines dividing up the total frame period.

Figuring out how many lines a display will have seems to involve messing around with something called the “generalised timing formula”, and if you have an X server like Xorg installed on your system, you may even have a tool called “gtf” that will attempt to calculate numbers of lines and pixels based on desired screen resolutions and frame rates. Alternatively, you can look up some common sets of figures on sites providing such information.

What a VGA Signal Looks Like

Some sources show diagrams attempting to describe the VGA signal, but many of these diagrams are open to interpretation (in some cases, very much so). They perhaps show the signal for horizontal (display) lines, then other signals for the entire image, but they either do not attempt to combine them, or they instead combine these details ambiguously.

For instance, should the horizontal sync (synchronisation) pulse be produced when the vertical sync pulse is active or during the “blanking” period when no pixel information is being transmitted? This could be deduced from some diagrams but only if you share their authors’ unstated assumptions and do not consider other assertions about the signal structure. Other diagrams do explicitly show the horizontal sync active during vertical sync pulses, but this contradicts statements elsewhere such as “during the vertical sync period the horizontal sync must also be held low”, for instance.

After a lot of experimentation, I found that the following signal structure was compatible with the monitor I use with my computer:

VGA Signal Structure

The basic structure of a VGA signal, or at least a signal that my monitor can recognise

There are three principal components to the signal:

  • Colour information for the pixel or line data forms the image on the display and it is transferred within display lines during what I call the visible display period in every frame
  • The horizontal sync pulse tells the display when each horizontal display line ends, or at least the frequency of the lines being sent
  • The vertical sync pulse tells the display when each frame (or picture) ends, or at least the refresh rate of the picture

The voltage levels appear to be as follows:

  • Colour information should be at 0.7V (although some people seem to think that 1V is acceptable as “the specified peak voltage for a VGA signal”)
  • Sync pulses are supposed to be at “TTL” levels, which apparently can be from 0V to 0.5V for the low state and from 2.7V to 5V for the high state

Meanwhile, the polarity of the sync pulses is also worth noting. In the above diagram, they have negative polarity, meaning that an active pulse is at the low logic level. Some people claim that “modern VGA monitors don’t care about sync polarity”, but since it isn’t clear to me what determines the polarity, and since most descriptions and demonstrations of VGA signal generation seem to use negative polarity, I chose to go with the flow. As far as I can tell, the gtf tool always outputs the same polarity details, whereas certain resources provide signal characteristics with differing polarities.

It is possible, and arguably advisable, to start out trying to generate sync pulses and just grounding the colour outputs until your monitor (or other VGA-capable display) can be persuaded that it is receiving a picture at a certain refresh rate and resolution. Such confirmation can be obtained on a modern display by seeing a blank picture without any “no signal” or “input not supported” messages and by being able to activate the on-screen menu built into the device, in which an option is likely to exist to show the picture details.

How the sync and colour signals are actually produced will be explained later on. This section was merely intended to provide some background and gather some fairly useful details into one place.

Counting Lines and Generating Vertical Sync Pulses

The horizontal and vertical sync pulses are each driven at their own frequency. However, given that there are a fixed number of lines in every frame, it becomes apparent that the frequency of vertical sync pulse occurrences is related to the frequency of horizontal sync pulses, the latter occurring once per line, of course.

With, say, 622 lines forming a frame, the vertical sync will occur once for every 622 horizontal sync pulses, or at a rate that is 1/622 of the horizontal sync frequency or “line rate”. So, if we can find a way of generating the line rate, we can not only generate horizontal sync pulses, but we can also count cycles at this frequency, and every 622 cycles we can produce a vertical sync pulse.

But how do we calculate the line rate in the first place? First, we decide what our refresh rate should be. The “classic” rate for VGA output is 60Hz. Then, we decide how many lines there are in the display including those extra non-visible lines. We multiply the refresh rate by the number of lines to get the line rate:

60Hz * 622 = 37320Hz = 37.320kHz

On a microcontroller, the obvious way to obtain periodic events is to use a timer. Given a particular frequency at which the timer is updated, a quick calculation can be performed to discover how many times a timer needs to be incremented before we need to generate an event. So, let us say that we have a clock frequency of 24MHz, and a line rate of 37.320kHz, we calculate the number of timer increments required to produce the latter from the former:

24MHz / 37.320kHz = 24000000Hz / 37320Hz ≈ 643

So, if we set up a timer that counts up to 642 and then upon incrementing again to 643 actually starts again at zero, with the timer sending a signal when this “wraparound” occurs, we can have a mechanism providing a suitable frequency and then make things happen at that frequency. And this includes counting cycles at this particular frequency, meaning that we can increment our own counter by 1 to keep track of display lines. Every 622 display lines, we can initiate a vertical sync pulse.

One aspect of vertical sync pulses that has not yet been mentioned is their duration. Various sources suggest that they should last for only two display lines, although the “gtf” tool specifies three lines instead. Our line-counting logic therefore needs to know that it should enable the vertical sync pulse by bringing it low at a particular starting line and then disable it by bringing it high again after two whole lines.

Generating Horizontal Sync Pulses

Horizontal sync pulses take place within each display line, have a specific duration, and they must start at the same time relative to the start of each line. Some video output demonstrations seem to use lots of precisely-timed instructions to achieve such things, but we want to use the peripherals of the microcontroller as much as possible to avoid wasting CPU time. Having considered various tricks involving specially formulated data that might be transferred from memory to act as a pulse, I was looking for examples of DMA usage when I found a mention of something called the Output Compare unit on the PIC32.

What the Output Compare (OC) units do is to take a timer as input and produce an output signal dependent on the current value of the timer relative to certain parameters. In clearer terms, you can indicate a timer value at which the OC unit will cause the output to go high, and you can indicate another timer value at which the OC unit will cause the output to go low. It doesn’t take much imagination to realise that this sounds almost perfect for generating the horizontal sync pulse:

  1. We take the timer previously set up which counts up to 643 and thus divides the display line period into units of 1/643.
  2. We identify where the pulse should be brought low and present that as the parameter for taking the output low.
  3. We identify where the pulse should be brought high and present that as the parameter for taking the output high.

Upon combining the timer and the OC unit, then configuring the output pin appropriately, we end up with a low pulse occurring at the line rate, but at a suitable offset from the start of each line.

VGA Display Line Structure

The structure of each visible display line in the VGA signal

In fact, the OC unit also proves useful in actually generating the vertical sync pulses, too. Although we have a timer that can tell us when it has wrapped around, we really need a mechanism to act upon this signal promptly, at least if we are to generate a clean signal. Unfortunately, handling an interrupt will introduce a delay between the timer wrapping around and the CPU being able to do something about it, and it is not inconceivable that this delay may vary depending on what the CPU has been doing.

So, what seems to be a reasonable solution to this problem is to count the lines and upon seeing that the vertical sync pulse should be initiated at the start of the next line, we can enable another OC unit configured to act as soon as the timer value is zero. Thus, upon wraparound, the OC unit will spring into action and bring the vertical sync output low immediately. Similarly, upon realising that the next line will see the sync pulse brought high again, we can reconfigure the OC unit to do so as soon as the timer value again wraps around to zero.

Inserting the Colour Information

At this point, we can test the basic structure of the signal and see if our monitor likes it. But none of this is very interesting without being able to generate a picture, and so we need a way of getting pixel information from the microcontroller’s memory to its outputs. We previously concluded that Direct Memory Access (DMA) was the way to go in reading the pixel data from what is usually known as a framebuffer, sending it to another place for output.

As previously noted, I thought that the Parallel Master Port (PMP) might be the right peripheral to use. It provides an output register, confusingly called the PMDIN (parallel master data in) register, that lives at a particular address and whose value is exposed on output pins. On the PIC32MX270, only the least significant eight bits of this register are employed in emitting data to the outside world, and so a DMA destination having a one-byte size, located at the address of PMDIN, is chosen.

The source data is the framebuffer, of course. For various retrocomputing reasons hinted at above, I had decided to generate a picture 160 pixels in width, 256 lines in height, and with each byte providing eight bits of colour depth (specifying how many distinct colours are encoded for each pixel). This requires 40 kilobytes and can therefore reside in the 64 kilobytes of RAM provided by the PIC32MX270. It was at this point that I learned a few things about the DMA mechanisms of the PIC32 that didn’t seem completely clear from the documentation.

Now, the documentation talks of “transactions”, “cells” and “blocks”, but I don’t think it describes them as clearly as it could do. Each “transaction” is just a transfer of a four-byte word. Each “cell transfer” is a collection of transactions that the DMA mechanism performs in a kind of batch, proceeding with these as quickly as it can until it either finishes the batch or is told to stop the transfer. Each “block transfer” is a collection of cell transfers. But what really matters is that if you want to transfer a certain amount of data and not have to keep telling the DMA mechanism to keep going, you need to choose a cell size that defines this amount. (When describing this, it is hard not to use the term “block” rather than “cell”, and I do wonder why they assigned these terms in this way because it seems counter-intuitive.)

You can perhaps use the following template to phrase your intentions:

I want to transfer <cell size> bytes at a time from a total of <block size> bytes, reading data starting from <source address>, having <source size>, and writing data starting at <destination address>, having <destination size>.

The total number of bytes to be transferred – the block size – is calculated from the source and destination sizes, with the larger chosen to be the block size. If we choose a destination size less than the source size, the transfers will not go beyond the area of memory defined by the specified destination address and the destination size. What actually happens to the “destination pointer” is not immediately obvious from the documentation, but for our purposes, where we will use a destination size of one byte, the DMA mechanism will just keep writing source bytes to the same destination address over and over again. (One might imagine the pointer starting again at the initial start address, or perhaps stopping at the end address instead.)

So, for our purposes, we define a “cell” as 160 bytes, being the amount of data in a single display line, and we only transfer one cell in a block. Thus, the DMA source is 160 bytes long, and even though the destination size is only a single byte, the DMA mechanism will transfer each of the source bytes into the destination. There is a rather unhelpful diagram in the documentation that perhaps tries to communicate too much at once, leading one to believe that the cell size is a factor in how the destination gets populated by source data, but the purpose of the cell size seems only to be to define how much data is transferred at once when a transfer is requested.
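In register terms, the intentions above might be phrased like this (the channel number and the use of PORTB as the destination are assumptions for illustration; KVA_TO_PA is the usual PIC32 macro from sys/kmem.h for converting kernel virtual addresses to the physical addresses the DMA controller requires):

```c
#include <xc.h>
#include <sys/kmem.h>

/* Sketch: DMA channel 0 copies one 160-byte display line to the
   output register each time Timer 2 wraps around. */

DMACONbits.ON = 1;                  /* enable the DMA controller */

DCH0SSA = KVA_TO_PA(framebuffer);   /* source: first display line */
DCH0DSA = KVA_TO_PA(&PORTB);        /* destination: output register */
DCH0SSIZ = 160;                     /* source size: one line of pixels */
DCH0DSIZ = 1;                       /* destination size: one byte */
DCH0CSIZ = 160;                     /* cell size: one whole line at once */

DCH0ECONbits.CHSIRQ = _TIMER_2_IRQ; /* start a cell transfer when... */
DCH0ECONbits.SIRQEN = 1;            /* ...the timer interrupt is raised */
DCH0CONbits.CHEN = 1;               /* enable the channel */
```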

DMA Transfer Mechanism

The transfer of framebuffer data to PORTB using DMA cell transfers (noting that this hints at the eventual approach which uses PORTB and not PMDIN)

In the matter of requesting a transfer, we have already described the mechanism that will allow us to make this happen: when the timer signals the start of a new line, we can use the wraparound event to initiate a DMA transfer. It would appear that the transfer will happen as fast as both the source and the destination will allow, at least as far as I can tell, and so it is probably unlikely that the data will be sent to the destination too quickly. Once the transfer of a line’s pixel data is complete, we can do some things to set up the transfer for the next line, like changing the source data address to point to the next 160 bytes representing the next display line.

(We could actually set the block size to the length of the entire framebuffer – by setting the source size – and have the DMA mechanism automatically transfer each line in turn, updating its own address for the current line. However, I intend to support hardware scrolling, where the address of the first line of the screen can be adjusted so that the display starts part way through the framebuffer, reaches the end of the framebuffer part way down the screen, and then starts again at the beginning of the framebuffer in order to finish displaying the data at the bottom of the screen. The DMA mechanism doesn’t seem to support the necessary address wraparound required to manage this all by itself.)
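The wraparound arithmetic needed for hardware scrolling can be modelled independently of the hardware. The helper below (the names are mine, not from the original code) computes the framebuffer byte offset for each display line given a scroll start line:

```c
#define LINE_BYTES    160   /* bytes per display line */
#define TOTAL_LINES   256   /* lines in the framebuffer */

/* Byte offset into the framebuffer of the data for a given display
   line, when the visible picture starts at start_line.  The offset
   wraps back to the beginning of the framebuffer, which is the part
   the DMA mechanism cannot do by itself. */
unsigned int line_offset(unsigned int start_line, unsigned int display_line)
{
    return ((start_line + display_line) % TOTAL_LINES) * LINE_BYTES;
}
```

For example, with a scroll start of line 250, display line 10 wraps around to framebuffer line 4, at byte offset 640; the interrupt handler would feed such offsets into the channel's source address register before each line.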

Output Complications

Having assumed that the PMP peripheral would be an appropriate choice, I soon discovered some problems with the generated output. Although the data that I had stored in the RAM seemed to be emitted as pixels in appropriate colours, there were gaps between the pixels on the screen. Yet the documentation seemed to vaguely indicate that the PMDIN register was more or less continuously updated. That meant that the actual output signals were being driven low between each pixel, causing black-level gaps and ruining the result.

I wondered if anything could be done about this issue. PMP is really intended as some kind of memory interface, and it isn’t unreasonable for it to only maintain valid data for certain periods of time, modifying control signals to define this valid data period. That PMP can be used to drive LCD panels is merely a result of those panels themselves upholding this kind of interface. For those of you familiar with microcontrollers, the solution to my problem was probably obvious several paragraphs ago, but it needed me to reconsider my assumptions and requirements before I realised what I should have been doing all along.

Unlike SPI, which concerns itself with the bit-by-bit serial output of data, PMP concerns itself with the multiple-bits-at-once parallel output of data, and all I wanted to do was to present multiple bits to a memory location and have them translated to a collection of separate signals. But, of course, this is exactly how normal I/O (input/output) pins are provided on microcontrollers! They all seem to provide “PORT” registers whose bits correspond to output pins, and if you write a value to those registers, all the pins can be changed simultaneously. (This feature is obscured by platforms like Arduino where functions are offered to manipulate only a single pin at once.)

And so, I changed the DMA destination to be the PORTB register, which on the PIC32MX270 is the only PORT register with enough bits corresponding to I/O pins to be useful enough for this application. Even then, PORTB does not have a complete mapping from bits to pins: some pins that are available in other devices have been dedicated to specific functions on the PIC32MX270F256B and cannot be used for I/O. So, it turns out that we can only employ at most seven bits of our pixel data in generating signal data:

PORTB Pin Availability on the PIC32MX270F256B
Bit:  15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
Pin:   ✓  ✓  ✓  –  ✓  ✓  ✓  ✓  ✓  –  ✓  ✓  ✓  ✓  ✓  ✓

(– indicates a bit with no corresponding I/O pin on this device)

We could target the first byte of PORTB (bits 0 to 7) or the second byte (bits 8 to 15), but either way we will encounter an unmapped bit. So, instead of choosing a colour representation making use of eight bits, we have to make do with only seven.

Initially, not noticing that RPB6 was not available, I was using an “RRRGGGBB” or “332” representation. But persuaded by others in a similar predicament, I decided to choose a representation where each colour channel gets two bits, and then a separate intensity bit is used to adjust the final intensity of the basic colour result. This also means that greyscale output is possible because it is possible to balance the channels.
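One possible packing of such a pixel value is sketched below, under the assumption (mine, for illustration, and not necessarily the layout the project actually uses) that the intensity bit occupies bit 7 and the three channels occupy bits 0 to 5, leaving the unavailable bit 6 permanently clear:

```c
/* Pack two bits per channel plus an intensity bit into the low byte
   of PORTB, skipping bit 6 (RPB6 is not available on this device).
   The channel-to-bit assignment here is illustrative only. */
unsigned char pack_pixel(unsigned r, unsigned g, unsigned b, unsigned i)
{
    return (unsigned char)(((i & 3 & 1) << 7) | ((r & 3) << 4)
                           | ((g & 3) << 2) | (b & 3));
}
```

Full-intensity white, for instance, packs to 0xBF, with bit 6 always zero so that the unmapped pin is never relied upon.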

The 2-bit-per-channel plus intensity colours

The colours employing two bits per channel plus one intensity bit, perhaps not shown completely accurately due to circuit inadequacies and the usual white balance issues when taking photographs

It is worth noting at this point that since we have left the 8-bit limitations of the PMP peripheral far behind us now, we could choose to populate two bytes of PORTB at once, aiming for sixteen bits per pixel but actually getting fourteen bits per pixel once the unmapped bits have been taken into account. However, this would double our framebuffer memory requirements for the same resolution, and we don’t have that much memory. There may be devices with more than sixteen bits mapped in the 32-bit PORTB register (or in one of the other PORT registers), but they had better have more memory to be useful for greater colour depths.

Back in Black

One other matter presented itself as a problem. It is all very well generating a colour signal for the pixels in the framebuffer, but what happens at the end of each DMA transfer once a line of pixels has been transmitted? For the portions of the display not providing any colour information, the channel signals should be held at zero, yet it is likely that the last pixel on any given line is not at the lowest possible (black) level. And so the DMA transfer will have left a stray value in PORTB that could then confuse the monitor, producing streaks of colour in the border areas of the display, making the monitor unsure about the black level in the signal, and also potentially confusing some monitors about the validity of the picture, too.

As with the horizontal sync pulses, we need a prompt way of switching off colour information as soon as the pixel data has been transferred. We cannot really use an Output Compare unit because that only affects the value of a single output pin, and although we could wire up some kind of blanking in our external circuit, it is simpler to look for a quick solution within the capabilities of the microcontroller. Fortunately, such a quick solution exists: we can “chain” another DMA channel to the one providing the pixel data, thereby having this new channel perform a transfer as soon as the pixel data has been sent in its entirety. This new channel has one simple purpose: to transfer a single byte of black pixel data. By doing this, the monitor will see black in any borders and beyond the visible regions of the display.
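The chained channel might be configured along these lines (again a non-runnable sketch with an assumed channel number; the polarity of the chain-direction bit CHCHNS should be checked against the datasheet for the device):

```c
#include <xc.h>
#include <sys/kmem.h>

/* Sketch: DMA channel 1 chained to channel 0 so that a single byte of
   black is written to the output register as soon as the line's pixel
   data has been transferred in its entirety. */

static const unsigned char black = 0;

DCH1SSA = KVA_TO_PA(&black);    /* source: one byte of black pixel data */
DCH1DSA = KVA_TO_PA(&PORTB);    /* destination: output register */
DCH1SSIZ = 1;
DCH1DSIZ = 1;
DCH1CSIZ = 1;

DCH1CONbits.CHCHN = 1;          /* chain this channel... */
DCH1CONbits.CHCHNS = 1;         /* ...to channel 0's completion (verify) */
DCH1CONbits.CHAEN = 1;          /* re-arm automatically for every line */
DCH1CONbits.CHEN = 1;
```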

Wiring Up

Of course, the microcontroller still has to be connected to the monitor somehow. First of all, we need a way of accessing the pins of a VGA socket or cable. One reasonable approach is to obtain something that acts as a socket and that breaks out the different signals from a cable, connecting the microcontroller to these broken-out signals.

Wanting to get something for this task quickly and relatively conveniently, I found a product at a local retailer that provides a “male” VGA connector and screw-adjustable terminals to break out the different pins. But since the VGA cable also has a male connector, I also needed to get a “gender changer” for VGA that acts as a “female” connector in both directions, thus accommodating the VGA cable and the male breakout board connector.

Wiring up to the broken-out VGA connector pins is mostly a matter of following diagrams and the pin numbering scheme, illustrated well enough in various resources (albeit with colour signal transposition errors in some resources). Pins 1, 2 and 3 need some special consideration for the red, green and blue signals, and we will look at them in a moment. However, pins 13 and 14 are the horizontal and vertical sync pins, respectively, and these can be connected directly to the PIC32 output pins in this case, since the 3.3V output from the microcontroller is supposedly compatible with the “TTL” levels. Pins 5 through 10 can be connected to ground.

We have seen mentions of colour signals with magnitudes of up to 0.7V, but no explicit mention of how they are formed has been presented in this article. Fortunately, everyone is willing to show how they converted their digital signals to an analogue output, with most of them electing to use a resistor network to combine each output pin within a channel to produce a hopefully suitable output voltage.

Here, with two bits per channel, I take the most significant bit for a channel and send it through a 470 ohm resistor. Meanwhile, the least significant bit for the channel is sent through a 1000 ohm resistor. Thus, the former contributes more to the magnitude of the signal than the latter. If we were only dealing with channel information, this would be as much as we need to do, but here we also employ an intensity bit whose job it is to boost the channels by a small amount, making sure not to allow the channels to pollute each other via this intensity sub-circuit. Here, I feed the intensity output through a 2200 ohm resistor and then to each of the channel outputs via signal diodes.
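The resulting signal level for a given bit pattern can be estimated by treating the two resistors and the monitor's 75 ohm termination as a simple divider. The model below is an approximation of my own: it treats each pin as an ideal 0V/3.3V source and ignores the diodes' forward drop in the intensity path.

```c
/* Approximate channel voltage at the monitor's 75 ohm termination,
   driven through a 470 ohm resistor (MSB) and a 1000 ohm resistor
   (LSB) from 3.3V outputs.  Node voltage = sum of injected currents
   divided by total conductance at the node. */
double channel_voltage(int msb, int lsb)
{
    double i = 0.0, g = 1.0 / 75.0;     /* current in, conductance at node */
    i += (msb ? 3.3 : 0.0) / 470.0;  g += 1.0 / 470.0;
    i += (lsb ? 3.3 : 0.0) / 1000.0; g += 1.0 / 1000.0;
    return i / g;
}
```

With both bits high this comes out at roughly 0.63V, reasonably close to the nominal 0.7V full level; the MSB alone contributes about 0.43V and the LSB alone about 0.2V.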

VGA Output Circuit

The circuit showing connections relevant to VGA output (generic connections are not shown)

The Final Picture

I could probably go on and cover other aspects of the solution, but the fundamental aspects are probably dealt with sufficiently above to help others reproduce this experiment themselves. Populating memory with usable image data, at least in this solution, involves copying data to RAM, and I did experience problems with accessing RAM that are probably related to CPU initialisation (as covered in my previous article) and to synchronising the memory contents with what the CPU has written via its cache.

As for the actual picture data, the RGB-plus-intensity representation is not likely to be the format of most images these days. So, to prepare data for output, some image processing is needed. A while ago, I made a program to perform palette optimisation and dithering on images for the Acorn Electron, and I felt that it was going to be easier to adapt the dithering code than it was to figure out the necessary techniques required for software like ImageMagick or the Python Imaging Library. The pixel data is then converted to assembly language data definition statements and incorporated into my PIC32 program.

VGA output from a PIC32 microcontroller

VGA output from a PIC32 microcontroller, featuring a picture showing some Oslo architecture, with the PIC32MX270 being powered (and programmed) by the Arduino Duemilanove, and with the breadboards holding the necessary resistors and diodes to supply the VGA breakout and, beyond that, the cable to the monitor

To demonstrate control over the visible region, I deliberately adjusted the display frequencies so that the monitor would consider the signal to be carrying an image 800 pixels by 600 pixels at a refresh rate of 60Hz. Since my framebuffer is only 256 lines high, I double the lines to produce 512 lines for the display. It would seem that choosing a line rate to try and produce 512 lines has the monitor trying to show something compatible with the traditional 640×480 resolution and thus lines are lost off the screen. I suppose I could settle for 480 lines or aim for 300 lines instead, but I actually don’t mind having a border around the picture.
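The line doubling itself is a trivial mapping: the DMA source address only advances on every second display line, which can be expressed as follows (a minimal sketch of the idea, not the project's actual code):

```c
/* With a 256-line framebuffer shown as 512 display lines, each
   framebuffer line is simply repeated on two successive display
   lines. */
unsigned int framebuffer_line(unsigned int display_line)
{
    return display_line / 2;
}
```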

The on-screen menu showing the monitor's interpretation of the signal

It is worth noting that I haven’t really mentioned a “pixel clock” or “dot clock” so far. As far as the display receiving the VGA signal is concerned, there is no pixel clock in that signal. And as far as we are concerned, the pixel clock is only important when deciding how quickly we can get our data into the signal, not in actually generating the signal. We can generate new colour values as slowly (or as quickly) as we might like, and the result will be wider (or narrower) pixels, but it shouldn’t make the actual signal invalid in any way.

Of course, it is important to consider how quickly we can generate pixels. Previously, I mentioned a 24MHz clock being used within the PIC32, and it is this clock that is used to drive peripherals and this clock’s frequency that will limit the transfer speed. As noted elsewhere, a pixel clock frequency of 25MHz is used to support the traditional VGA resolution of 640×480 at 60Hz. With the possibilities of running the “peripheral clock” in the PIC32MX270 considerably faster than this, it becomes a matter of experimentation as to how many pixels can be supported horizontally.
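As a rough upper-bound calculation (my own arithmetic, assuming at most one byte transferred per peripheral clock cycle): in the traditional mode the visible part of a line lasts 640 pixel periods at 25MHz, i.e. 25.6 microseconds, so a 24MHz peripheral clock could emit at most around 614 pixels in that window, and emitting them more slowly simply yields proportionally wider pixels.

```c
/* Upper bound on pixels per visible line if one pixel is emitted per
   peripheral clock cycle.  For 640 visible pixels at a 25MHz pixel
   clock and a 24MHz peripheral clock this gives 614. */
unsigned int max_pixels(double pixel_clock_hz, unsigned int visible_pixels,
                        double peripheral_clock_hz)
{
    double visible_seconds = visible_pixels / pixel_clock_hz;
    return (unsigned int)(visible_seconds * peripheral_clock_hz);
}
```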

Some Oslo street art being displayed by the PIC32

For my own purposes, I have at least reached the initial goal of generating a stable and usable video signal. Further work is likely to involve attempting to write routines to modify the framebuffer, maybe support things like scrolling and sprites, and even consider interfacing with other devices.

Naturally, this project is available as Free Software from its own repository. Maybe it will inspire or encourage you to pursue something similar, knowing that you absolutely do not need to be any kind of “expert” to stubbornly persist and to eventually get results!

The development of Global Scale

Free Software – Frank Karlitschek_ | 10:07, Monday, 22 May 2017

The architecture of Nextcloud is a classic web application architecture. I picked this architecture 7.5 years ago because it is very well known and has proven relatively easy to scale. This usually works with off-the-shelf technologies like HTTP load balancers, clusters of Linux webservers and clustered databases.

But for many years users and customers asked for ways to distribute a single instance over several datacenters. A lot of users and customers run organizations that are not in one office or sometimes not even in one country or on one continent. So how can the service run distributed over different hosting centers on different continents?

Until now there was no good answer for this requirement.

Over the years I talked with users who experimented with different approaches. Unfortunately, none of them worked.

If you ask storage people how to solve this challenge, they say: no problem, just use a distributed storage system like Gluster, Ceph, Hadoop or others.

If you ask database people how to do this, they say: no problem, just use one of the clustering and replication systems for Oracle, MySQL, MariaDB or others.

The challenge is how this all works together. If a user changes a file in Nextcloud, then the file has to be changed, potential encryption keys updated, log files written, database tables changed, email notifications sent, external workflow scripts executed, and a lot of other things besides. All these operations triggered by a single file change have to happen in an atomic way: completely or not at all. Using database replication and storage replication independently will lead to a broken state and data loss. An additional problem is that you need the full bandwidth and storage in all data centers, so there is a lot of overhead.

The Global Scale architecture that we designed is, as far as I know, currently the only solution that solves this challenge.

An additional benefit is that the storage, database and overall operational costs decrease, because simpler, more standard commodity components can be used.

Another benefit is that the locality of data can be controlled: if it is a legal requirement that certain files never leave a certain jurisdiction, then this can be guaranteed with the right GS Balancer and File Access Control settings.

So far I have only talked about the benefit that GS breaks a service down into several data centers. This is only half the story. Nextcloud Global Scale can be used in an even more radical way: you could stop using clustered Nextcloud instances altogether, killing all central storage boxes and databases and moving completely to commodity hardware, using only local storage, local databases and local caching. This changes the world completely and makes big storage, SAN, NFS and object store boxes completely obsolete.

The Global Scale architecture idea and implementation were developed over the last year with feedback and ideas from many different people: storage experts, database vendors, Linux distribution people, container and orchestration experts for easy automatic deployment, and several big customers and users of Nextcloud.

The main inspiration came out of long discussions with the people at DeiC, who run the National Research and Education Network of Denmark. DeiC had already been doing successful experiments with a similar architecture for a while.

At Nextcloud we are committed to developing everything we do as open source, and this includes this feature. Also, if you want to contribute to this architecture, no Contributor License Agreement is needed and you don’t need to transfer any rights to the Nextcloud company.

More information about Nextcloud Global Scale can be found here:

Saturday, 20 May 2017

GSoC: Second, third week of community bonding

vanitasvitae's blog » englisch | 21:56, Saturday, 20 May 2017

Hi all!

This is my report for the second week, as well as the first half of the third week, of GSoC community bonding, which I spent again working on finalizing my OMEMO code.

I dug deeper into writing test cases, mainly integration tests, and I found quite a lot of small, previously undetected bugs this way (yay!). This strengthens my plan to work test-driven during my GSoC project. The OMEMO code I currently work on was started as part of my bachelor thesis about 4 to 5 months ago, and at that time I was more concerned about having working code in the end, so I wrote no tests at all. Writing test cases AFTER the code is already written is not only a tedious task, it’s also often very difficult (because the code is not structured properly). So I learned my lesson the hard way :D

During testing I also found another bug in an XMPP server software, which prevents Smack from creating accounts on the server on the fly. Unfortunately this bug will not get fixed anymore for the version I use (installed from the Debian testing repository, which I thought was *reasonably* new), which keeps me from doing proper testing the way it’s meant to be done. I don’t have the time to compile the server software myself. Instead, I work around this issue by creating the accounts manually every time I run the test suite, using a small bash script.

I also had to deal with a really strange bug with file writing and reading. smack-omemo has a set of 4 integration tests, which all write data into a temporary directory. After each test, the directory is deleted to prevent tests from influencing each other. The issue was that only the first test could read/write the test directory; all subsequent tests failed for some reason. It took me a long time to notice that two folders were being created (one in the working directory, another in the subdirectory of the integration test framework). I am still not really sure what happened: the first folder was logged in all debug output, while files were written (by the first test) to the second folder. I guess it was caused by the temp directory being specified using a relative path, which messed up the tests, which were instantiated by the test framework using reflection. But I’m really not sure about this. Specifying the directory using an absolute path fixed the issue in the end.

Last but not least, Flow and I worked out some more details about my GSoC project (implementing (encrypted) Jingle file transfer for Smack). The module will most likely be based upon java.nio to be scalable in the future. Flow also emphasized that the API should be as easy to use as possible, but at the same time powerful and extensible, which is a nice challenge (and probably a common one within the XMPP community). My initial plan was to create an XEP for OMEMO-encrypted Jingle file transfer. We decided that it would be of more value to specify the XEP in a way which allows arbitrary encryption techniques instead of being OMEMO-exclusive.

Currently there is a little bit of tension in the community regarding the OMEMO specification. I really hope (and believe) there is a solution capable of making everybody happy, and I’m looking forward to participating in an open discussion :)

Happy hacking!
