Planet Fellowship (en)

Wednesday, 26 July 2017

The Mobile Web

Paul Boddie's Free Software-related blog » English | 10:31, Wednesday, 26 July 2017

I was tempted to reply to a comment on a news article, “The end of Flash”, where the following observation was made:

So they create a mobile site with a bit fewer graphics and fewer scripts loading up to try to speed it up.

But I found that I had enough to say that I might as well put it here.

A recent experience I had with one airline’s booking Web site involved an obvious pandering to “mobile” users. But to the designers this seemed to mean oversized widgets on any non-mobile device coupled with a frustratingly sequential mode of interaction, as if Fisher-Price had an enterprise computing division and had been contracted to do the work. A minimal amount of information was displayed at any given time, and even normal widget navigation failed to function correctly. (Maybe this is completely unfair to Fisher-Price as some of their products appear to encourage far more sophisticated interaction.)

And yet, despite all the apparent simplification, the site ran abominably slow. Every – single – keypress – took – ages – to – process. Even in normal text boxes. My desktop machine is ancient and mostly skipped the needless opening and closing animations on widgets because it just isn’t fast enough to notice that it should have been doing them before the time limit for doing them runs out. And despite fewer graphics and scripts, it was still heavy on the CPU.

After fighting my way through the booking process, I was pointed to the completely adequate (and actually steadily improving) conventional site that I’d used before but which was now hidden by the new and shiny default experience. And then I noticed a message about customer feedback and the continued availability of the old site: many of their other customers were presumably so appalled by the new “made for mobile” experience that they had let the airline know what they thought, with some of them undoubtedly having to use the site for their job, booking travel for their colleagues or customers. I imagine that some of the conversations were pretty frank.

I suppose that when companies manage to decouple themselves from fads and trends and actually listen to their customers (and not via Twitter), they can be reminded to deliver usable services after all. And I am thankful for the “professional customers” who are presumably all that stand in the way of everyone being obliged to download an “app” to book their flights. Maybe that corporate urge will lead to the next reality check for the airline’s “digital strategists”.

GSoC Week 8: Reworking

vanitasvitae's blog » englisch | 09:57, Wednesday, 26 July 2017

The 8th week of GSoC is here. The second evaluation phase has begun.

This week I was not as productive as the weeks before, because I had my last (hopefully :D ) bachelor’s exam. But that is now done, so I can finally give 100% to coding :) .

I made some more progress on my Jingle code rework. Most of the transports code is now finished. I started to rework the descriptions code, which includes the base classes as well as the JingleFileTransfer code. I’m very happy with the new design, since it is way more modular and less interlocked than the last iteration. Below you can find a UML-like diagram of the current structure.

UML diagram of the jingle code

UML-like diagram of the current jingle implementation

During the rework I stumbled across a slight ambiguity in the Jingle XEP(s), which made me wonder. There are multiple Jingle actions which denote the purpose of a Jingle stanza (e.g. transport-replace to replace the transport method in use, session-initiate to initiate a session (duh), and so forth). Then there is the session-info Jingle action, which is used to announce session-specific events. It is used, for example, in RTP sessions to let the peer’s client ring during a call, or to send the checksum of a file in a file transfer session. My problem with this is that such use cases are, in my opinion, highly description-related, so the description-info action should be used instead. The description part of a Jingle session is the part that represents the actual purpose of the session (e.g. file transfer, video calls etc.).

The session itself is description-agnostic, since it only bundles together a set of contents. One content is composed of one description, one transport and optionally one security element. In a content you should be able to combine different description, transport and security components in an arbitrary way. That’s the whole purpose of the Jingle protocol. In my opinion it does not make much sense to denote description-related informational stanzas with the session-info action.

My proposal to make more use of the description-info element is also consistent with other uses of *-info actions. The transport-info action for example is used to denote transport related stanzas, while the security-info action is used for security related information.

But why do I even care?

Let’s get back to my implementation for that :) . As you can see in the diagram above, I split the different Jingle components into different classes like JingleTransport, JingleSecurity, JingleDescription and so on. Now I’d like to pass component-related stanzas down to the respective classes (a transport-info usually only contains information valuable to the transport component). I’d like to do the same for the JingleDescription. At the moment I have no real “recipient” for the session-info action. It might contain session-related information, but might also be interesting for the description. As a consequence I have to make exceptions for those actions, which makes the code more bloated and less logical.

Another point is that such session-info elements (due to the fact that they target a single content in most cases) often contain a “name” attribute that matches the name of the targeted content. I’d propose to not only replace session-info with description-info, but also to specify that the description-info MUST have one or more content child elements that denote the targeted contents. That would make parsing much easier, since the parser can always expect content elements.
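As an illustration of this proposal, a stanza carrying a file’s checksum could then look roughly like the following sketch. This is hypothetical: the element names are borrowed from XEP-0166 and XEP-0234, and the content child inside a description-info stanza is exactly the extension proposed above, not something the current XEPs define.

```xml
<jingle xmlns='urn:xmpp:jingle:1'
        action='description-info'
        sid='a73sjjvkla37jfea'>
  <!-- proposed: one or more content elements denoting the targeted contents -->
  <content creator='initiator' name='a-file-offer'>
    <description xmlns='urn:xmpp:jingle:apps:file-transfer:5'>
      <checksum>
        <file>
          <hash xmlns='urn:xmpp:hashes:2' algo='sha-1'>w0mcJylzCn+AfvuGdqkty2+KP48=</hash>
        </file>
      </checksum>
    </description>
  </content>
</jingle>
```

With this shape, a parser can route the stanza to the right JingleDescription purely by the content’s name attribute.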

That’s all for now :)

Happy Hacking!

Monday, 24 July 2017

Let's Encrypt - Auto Renewal

Evaggelos Balaskas - System Engineer | 22:03, Monday, 24 July 2017

Let’s Encrypt

I’ve written some posts on Let’s Encrypt, but the most frequent question is how to auto-renew a certificate every 90 days.


This is my mini how-to, on CentOS 6 with a custom-compiled Python 2.7.13 that I like to run in a virtualenv, with the latest certbot from git. Not a copy/paste solution for everyone!


Cron does not seem to have anything useful for a 90-day comparison:


Modification Time

The most obvious answer is to look at the modification time of the Let’s Encrypt directory:

eg. domain:

# find /etc/letsencrypt/live/ -type d -mtime +90 -exec ls -ld {} \;

# find /etc/letsencrypt/live/ -type d -mtime +80 -exec ls -ld {} \;

# find /etc/letsencrypt/live/ -type d -mtime +70 -exec ls -ld {} \;

# find /etc/letsencrypt/live/ -type d -mtime +60 -exec ls -ld {} \;

drwxr-xr-x. 2 root root 4096 May 15 20:45 /etc/letsencrypt/live/


# openssl x509 -in <(openssl s_client -connect 2>/dev/null) -noout -enddate


If you have registered your email with Let’s Encrypt then you get your first email in 60 days!
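Building on the enddate output above, a script can also compute the number of days left itself instead of relying on directory timestamps. Below is a minimal sketch using GNU date; the enddate string is hard-coded here as an example, in practice it would come from the openssl command:

```shell
# Example enddate string, as printed by:
#   openssl x509 -in cert.pem -noout -enddate
enddate="notAfter=Oct 24 22:00:00 2017 GMT"

# Strip the "notAfter=" prefix and convert both dates to epoch seconds (GNU date).
end_s=$(date -u -d "${enddate#notAfter=}" +%s)
# Hard-coded here for illustration; normally this would simply be: now_s=$(date +%s)
now_s=$(date -u -d "Jul 26 22:00:00 2017 GMT" +%s)

days_left=$(( (end_s - now_s) / 86400 ))
echo "$days_left"
```

Renewal could then be triggered whenever days_left drops below some threshold, say 10.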


Here are my own custom steps:

#  cd /root/certbot.git
#  git pull origin 

#  source venv/bin/activate
#  cd venv/bin/

#  monit stop httpd 

#  ./venv/bin/certbot renew --cert-name --standalone 

#  monit start httpd 

#  deactivate


I use monit; you can edit the script according to your needs:



## Update certbot
cd /root/certbot.git
git pull origin 

# Enable Virtual Environment for python
source venv/bin/activate

## Stop Apache
monit stop httpd 

sleep 5

## Renewal
./venv/bin/certbot renew  --cert-name ${DOMAIN} --standalone 

## Exit virtualenv
deactivate

## Start Apache
monit start httpd

All Together

# find /etc/letsencrypt/live/ -type d -mtime +80 -exec /usr/local/bin/ \;

Systemd Timers

or put it on cron

whatever :P
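For the cron route, a daily entry along these lines would do; the script name /usr/local/bin/renew.sh is a placeholder for whatever wrapper script is used:

```shell
# crontab fragment: run the age check every day at 03:00
0 3 * * * find /etc/letsencrypt/live/ -type d -mtime +80 -exec /usr/local/bin/renew.sh \;
```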

Tag(s): letsencrypt

Tuesday, 18 July 2017

GSoC: Week 7

vanitasvitae's blog » englisch | 16:04, Tuesday, 18 July 2017

This is my update post for the 7th week of GSoC. The next evaluation phase is slowly approaching and there is still a lot of work to do.

This week I started to work on integrating encryption into my Jingle File Transfer code. I’m very pleased that only very minor changes to my OMEMO code are required. The OmemoManager just has to implement a single interface with two methods, and that’s it. The interface should be pretty straightforward to implement for other encryption methods as well.

Unfortunately the same is not true for the code I wrote during GSoC. There are many things I hadn’t thought about which require major changes, so it looks like I’ll have to rethink the concept once again. My goal is to make the implementation so flexible that description (e.g. video chat, file transfer…), transport (e.g. SOCKS5, IBB…) and security (XTLS, my Jet spec etc.) can be mixed in arbitrary ways without adding glue code for new specifications. Flow told me this is going to get complicated, but I want to try anyway :D . For “safety” reasons, I’ll keep a separate working branch in case the next iteration does not turn out as I want.

Yesterday Flow found an error in smack-omemo which caused bundles not to be fetched. The mistake I made was that a synchronous CarbonListener was registered in the packet-reader thread. This caused the packet-reader thread to time out on certain messages, even though the stanzas arrived. It is nice to have this out of the way, and it was a good lesson about the pitfalls of parallel programming.

While reading the Jingle File Transfer XEP I also found some missing XML schemas and proposed replacing the usage of xs:nonNegativeInteger and xs:positiveNumber with xs:unsignedLong to simplify/unify the process of implementing the XEP.

That’s basically it for this week. Unfortunately I have an exam at university next week, which means even less free time for me, but I’ll manage. In the worst case I always have a second try at the exam :)

Happy Hacking!

Tuesday, 11 July 2017

Flowhub IoT hack weekend at c-base: buttons, sensors, the Big Switch

Henri Bergius | 00:00, Tuesday, 11 July 2017

Last weekend we held the c-base IoT hack weekend, focused on the Flowhub IoT platform. This was a continuation of the workshop we organized at the Bitraf makerspace a week earlier: same tools and technologies, but slightly different focus areas.

c-base is one of the world’s oldest hackerspaces and a crashed space station under Berlin. It is also one of the earliest users of MsgFlo with quite a lot of devices connected via MQTT.

Hack weekend debriefing

Hack weekend

Just like at Bitraf, the workshop aimed to add new IoT capabilities to c-base, as well as to increase the number of members who know how to make the station’s setup do new things. For this, we used three primary tools:

Internet of Things

The workshop started on Friday evening, after a lecture on nuclear pulse propulsion ended in the main hall. We continued all the way to late Sunday evening, with some sleep breaks in between. There is something about c-base that makes you want to work there at night.

Testing a humidity sensor

By Sunday evening, we had built and deployed 15 connected IoT devices, with five additional ones pretty far in development. You can find the source code in the c-flo repository.

Idea wall

Sensor boxes

Quite a lot of c-base was already instrumented when we started the workshop. We had details on electricity consumption, internet traffic, and more. But one thing we didn’t have was information on the physical environment at the station. To solve this, we decided to build a set of sensor boxes that we could deploy in different areas of the hackerspace.

Building sensors

The capabilities shared by all the sensor boxes we deployed were:

  • Temperature
  • Humidity
  • Motion (via passive infrared)

For some areas of interest we provided some additional sensors:

  • Sound level (for the workshop)
  • Light level (for c-lab)
  • Carbon dioxide
  • Door open/closed
  • Gravity

Workshop sensor on a breadboard

We found a set of nice little electrical boxes that provided a convenient housing for these sensor boxes. This way we were able to mount them in proper places quickly. This should also protect them from dust and other elements to some degree.

Installed weltenbaulab sensor

The Big Switch

The lights of the c-base main hall are controllable via MsgFlo, and we have a system called farbgeber to produce pleasing color schemes for any given time.

However, when there are events we need to enable manual control of all lights and sound. To make this “MsgFlo vs. IP lounge” control question clearer, we built a Big Switch to decide which controls the lights:

Big Switch in action

The switch is an old electric mains switch from an office building. It makes a satisfying sound when you turn it, and is big enough that you can see which way the setting is from across the room.

To complement the Big Switch we also added a “c-boom” button to trigger the disco mode in the main hall:

c-boom button

Info screens

One part of the IoT setup was to make statistics and announcements about c-base visible in different areas of the station. We did this by rolling out a set of displays with Raspberry Pi 3s connected to the MsgFlo MQTT environment.

Info screens ready for installing

The announcements shown on the screens range from mission critical information like station power consumption or whether the bar is open, to more fictional ones like the NoFlo-powered space station announcements.

Air lock

We also built an Android version of the info display software, which enabled deploying screens using some old donated tablets.

Info screen tablet


This was another successful workshop. Participants got to do new things, and we got lots of new IoT infrastructure installed around c-base. The Flowhub graph is definitely starting to look populated:

c-base is a graph

We also deployed NASA OpenMCT so that we get a nice overview of the station status. Our telemetry server provides MsgFlo participants that receive data via MQTT, store it in InfluxDB, and then visualize it on the dashboard:

OpenMCT view on c-base

All the c-base IoT software is available on Github:

If you’d like to have a similar IoT workshop at your company, we’re happy to organize one. Get in touch!

Monday, 10 July 2017

GSoC Week 6 – Tests and Excitement

vanitasvitae's blog » englisch | 11:20, Monday, 10 July 2017

Time is flying by. The sixth week is nearly over. I hope I haven’t miscounted so far :)

This week I made some more progress working on the file transfer code. I read the existing StreamInitialization code and fixed some typos I found. I then took some inspiration from the SI code to improve my Jingle implementation. Most notably I created a class FileTransferHandler, which the client can use to control the file transfer, get some information on its status, and so on. Most functionality is yet to be implemented, but getting notified when the transfer has ended already works. This allowed me to bring the first integration test for basic Jingle file transfer to life. Previously I had the issue that the transfer was started in a new thread, which was then out of scope, so the test had no way to tell if and when the transfer succeeded. This is now fixed :)

Other than that integration test, I also worked on creating more JUnit tests for my Jingle classes and found some more bugs that way. Tests are tedious, but the results are worth the effort. I hope to keep Smack’s code coverage at least at a constant level – it already dropped a little bit when my commits got merged, but I’m working on correcting that. While testing, I found a small bug in Smack’s SOCKS5 proxy tests. Basically, there were simultaneous insertions into an ArrayList and a HashSet with a subsequent comparison. This failed under certain circumstances (in my university’s network) due to the properties of the set. I fixed the issue by replacing the ArrayList with a LinkedHashSet.

Speaking of tests – I created a “small” test app that utilizes NIO for non-blocking IO operations. I put the word small in quotation marks because NIO blows up the code by a factor of at least 5. My implementation consists of a server and a client. The client sends a string to the server, which then 1337ifies the string and sends it back. The goal of NIO is to use few threads to handle all the connections at once. It works quite well, I’d say: I can handle around 10000 simultaneous connections using a single thread. The next step will be working NIO into Smack.

Last but not least, I once again got excited about the XMPP community :)
As some of you might know, I started to dig into XMPP roughly 8 months ago as part of my bachelor’s thesis on OMEMO encryption. Back then I wrote a mail to Daniel Gultsch, asking if he could give me some advice on how to start working on an OMEMO implementation.
Now eight months later, I received a mail from another student basically asking me the same question! I’m blown away by how fast one can go from the one asking to the one getting asked. For me this is another beautiful example of truly working open standards and free software.

Thank you :)

Sunday, 09 July 2017

DRM free Smart TV

tobias_platen's blog | 09:54, Sunday, 09 July 2017

Today is the Day Against DRM, so I’ll post a short update about a DRM-free TV setup that I have built over the last two months.

My TV set only supports DVB-T and old analogue cable TV. Because I don’t want to buy a new one with even harder DRM and the patented H.265 codec, I am now using Kodi to watch TV. Kodi is free software and runs on a ThinkPad T400, which I also use as a DVD player. I installed Libreboot and removed the internal screen, which caused problems with the external TV set connected via VGA.

Libreboot is a free BIOS replacement which removes the Intel Management Engine. The Intel Management Engine is proprietary malware which includes a back door and some DRM functions. Netflix uses this hardware DRM called the Protected Audio/Video Path on Windows 10 when watching 4K videos. The Thinkpad T400 does not even have an HDMI port, which is known to be encumbered by HDCP, an ineffective DRM that has been cracked.

Instead of using DRM-encumbered streaming services such as Netflix, Entertain or Vodafone TV, I still buy DVDs and pay for them anonymously with cash. In my home there is a DVB-C connector, which I have connected to a FRITZ!WLAN Repeater DVB-C that streams the TV signal to the ThinkPad. The TV set is switched on and off using a FRITZ!DECT 200, which I control using a Python script running on the ThinkPad. I also reuse an old IR remote and an IRDuino to control the ThinkPad.

Welcome to my new Homepage

English on Björn Schießle - I came for the code but stayed for the freedom | 09:04, Sunday, 09 July 2017

Finally I moved my homepage to a completely static page powered by Hugo. Here I want to document some challenges I faced during the transition and how I solved them.

Basic setup

As already said I use Hugo to generate the static sites. My theme is based on Sustain. I did some changes and uploaded my version to GitLab.

I want to have all dependencies like fonts and JavaScript libraries locally, so this was one of the largest changes to the original theme. Further, I added an easy way to add share buttons to a blog post, like you can see at the end of this article. The theme also contains a nice and easy way to add presentations or general slide shows to the webpage; some examples can be seen here. The theme contains an example site which shows all these features.


Comments

This was one of the biggest challenges. I had some quite good discussions on my old blog powered by Wordpress, so I didn’t want to lose this feature completely. There are some solutions for static pages, but none of them are satisfying. For example, Staticman looks really promising. Sadly it only works with GitHub. Please let me know if you know something similar which doesn’t depend on GitHub.

For now I decided to do two things. By default I add a short text at the end of each article telling people to send me an e-mail if they want to share or discuss their view on the topic. Additionally, I can add a link to a Friendica post to the metadata of each post. In this case the link will be added at the end of the article, inviting people to discuss the topic on this free, decentralised and federated network. I have chosen Friendica because it allows users to interact with my blog posts not only with a Friendica account but also with a Diaspora, GNU Social, Mastodon or Hubzilla account. If you have an account on one of these networks and want to get updates about new blog posts in order to participate in conversations around them, follow this Friendica account. I also created a more detailed description for people new to the world of free social networking.


Deployment

After all the questions above were answered and a first version of the new webpage was in place, I had to find an easy way to deploy it. I host the source code of my homepage on GitLab, which has a nicely integrated CI service that can be used to deploy the webpage on any server.

Therefore we need to add a CI script called .gitlab-ci.yml to the root of the repository. The script needs to contain something like the following (please adjust the job name and the paths):

image: publysher/hugo

before_script:
  - apt-get update
  - apt-get --yes --force-yes install git ssh rsync
  - git submodule update --init --recursive

deploy:
  script:
    - hugo
    - mkdir "${HOME}/.ssh"
    - echo "${SSH_HOST_KEY}" > "${HOME}/.ssh/known_hosts"
    - echo "${SSH_PRIVATE_KEY}" > "${HOME}/.ssh/id_rsa"
    - chmod 700 "${HOME}/.ssh/id_rsa"
    - rsync -hrvz --delete --exclude=_ public/
  artifacts:
    paths:
      - public
  only:
    - master

We need to create an SSH key pair to deploy the webpage. For security reasons it is highly recommended to create an SSH key used only for the deployment.
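Generating such a dedicated key pair could look like this; the file name deploy_key and the comment are arbitrary examples, not something prescribed by GitLab:

```shell
# Create a passphrase-less RSA key pair used solely for deployment.
# -N ""  : empty passphrase, so CI can use the key non-interactively
# -f ... : write to ./deploy_key and ./deploy_key.pub instead of ~/.ssh
ssh-keygen -t rsa -b 4096 -N "" -C "gitlab-deploy" -f deploy_key
```

The content of deploy_key then goes into the SSH_PRIVATE_KEY variable, while deploy_key.pub is what ends up in .ssh/authorized_keys on the web server.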

The variables SSH_HOST_KEY and SSH_PRIVATE_KEY need to be set at GitLab in the CI settings. SSH_PRIVATE_KEY contains the private ssh key which is located in the ~/.ssh directory.

To get the right value for SSH_HOST_KEY, we run ssh-keyscan <our-webpage-host>. Once we have executed that command, we should see something similar to ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCtwsSpeNV…. Let’s copy this to the SSH_HOST_KEY value in our GitLab settings.

Finally we need to copy the public ssh key to the .ssh/authorized_keys file on the web-server to allow GitLab to access it.

Now we are done. The next time we push some changes to the GitLab repository, GitLab will build the page and sync it to the web server.

Using the private key stored in the GitLab settings allows everyone with access to the key to log in to our web server, something we don’t want. Therefore I recommend limiting the SSH key to only this one rsync command from the .gitlab-ci.yml file. In order to do this, we need to find the exact command sent to the web server by adding -e'ssh -v' to the rsync command.

Executing the rsync command with the additional option should result in something like:

debug1: Sending command: rsync --server -vrze.iLsfxC --delete . /home/schiesbn/websites/

We copy this command to create the following .ssh/authorized_keys entry:

command="rsync --server -vrze.iLsfxC --delete . /home/schiesbn/websites/",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Sf/PDty0d0SQPg9b+Duc18RxPGaBKMzlKR0t1Jz+0eMhTkXRDlBMrrkMIdXJFfJTcofh2nyklp9akUnKA4mRBVH6yWHI+j0aDIf5sSC5/iHutXyGZrLih/oMJdkPbzN2+6fs2iTQfI/Y6fYjbMZ+drmTVnQFxWN8D+9qTl49BmfZk6yA1Q2ECIljpcPTld7206+uaLyLDjtYgm90cSivrBTsC4jlkkwwYnCZo+mYK4mwI3On1thV96AYDgnOqCn3Ay9xiemp7jYmMT99JhKISSS2WNQt2p4fVxwJIa6gWuZsgvquP10688aN3a222EfMe25RN+x0+RoRpSW3zdBd

Now it is no longer possible to use the private key stored at GitLab to log in to the web server or to perform any command other than this specific rsync command.

Interesting observation

I have run this static webpage for a few weeks now. During these weeks I got quite some email from people interested in topics I write about in my blog. These are not new blog articles, but posts which had already been online for quite some time. Somehow it looks like more people find these articles after the transition to a static site. Maybe search engines rate the static site higher than the old Wordpress page? I don’t know, maybe it is just a coincidence… but it is interesting.

Friday, 07 July 2017

PHP Sorting Iterators

Evaggelos Balaskas - System Engineer | 20:24, Friday, 07 July 2017


A few months ago, I wrote an article on RecursiveDirectoryIterator; you can find it here: PHP Recursive Directory File Listing. If you run the code example, you’ll see that the output is not sorted.


A Recursive Iterator is actually an object, a special object that lets us iterate over a sequence (collection) of data. So it is a little difficult to sort it using the well-known PHP array functions. Let me give you an example:

$Iterator = new RecursiveDirectoryIterator('./');
foreach ($Iterator as $file) {
    var_dump($file);
}

object(SplFileInfo)#7 (2) {
  string(12) "./index.html"
  string(10) "index.html"
}

You can see here that the iterator yields objects of the SplFileInfo class.

Internet Answers

Unfortunately, stackoverflow and other related online results provide the most complicated answers on this matter. Of course this is not stackoverflow’s fault, and it is really not an easy subject to discuss or understand, but personally I don’t get the extra fuss (complexity) in some of the responses.

Back to basics

So let us go back a few steps and understand what an iterator really is. An iterator is an object that we can iterate! That means we can use a loop to walk through the data of an iterator. Reading the above output you can get (hopefully) a better idea.

We can also loop over the Iterator like a simple array.


$It = new RecursiveDirectoryIterator('./');
foreach ($It as $key => $val)
    echo $key . ":" . $val . "\n";




It is difficult to sort Iterators, but it is really easy to sort arrays!
We just need to convert the Iterator into an Array:

// Copy the iterator into an array
$array = iterator_to_array($Iterator);

that’s it!


For my needs I need to reverse sort the array by key (filename on a recursive directory), so my sorting looks like:

krsort( $array );

easy, right?

Just remember that you cannot use ksort before the array has actually been defined. You need to take two steps, and that is ok.

Convert to Iterator

After sorting, we need to convert the array back into an Iterator object:

// Convert Array to an Iterator
$Iterator = new ArrayIterator($array);

and that’s it !

Full Code Example

the entire code in one paragraph:

// ebal, Fri, 07 Jul 2017 22:01:48 +0300

// Directory to Recursive search
$dir = "/tmp/";

// Iterator Object
$files = new RecursiveIteratorIterator(
          new RecursiveDirectoryIterator($dir)
);

// Convert to Array
$Array = iterator_to_array ( $files );
// Reverse Sort by key the array
krsort ( $Array );
// Convert to Iterator
$files = new ArrayIterator( $Array );

// Print the file name
foreach ($files as $name => $object)
    echo "$name\n";

Tag(s): php, iterator

Tuesday, 04 July 2017

GSoC Week 5: Tests, fallbacks and politics

vanitasvitae's blog » englisch | 19:32, Tuesday, 04 July 2017

This is my blog post for the 5th week of the Google Summer of Code. I passed the first evaluation phase, so now the really hard work can begin.
(Also the first paycheck came in, and I bought a new Laptop. Obviously my only intention is to reduce compile times for Smack to be more productive *cough*).

The last week was mostly spent writing JUnit tests and finding bugs that way. I found that it is really hard to do unit tests for certain methods, which might be an indicator that there are too many side effects in my code and that there is room to improve. Sometimes when I need to save a state as a variable from within a method, I just go the easy way and create a new attribute for that. I should really try to improve on that front.

Also, I tried to create an integration test for my Jingle file transfer code. Unfortunately the sendFile method creates a new thread and returns, so for now I have no real way of knowing when the file transfer is complete, which keeps me from creating a proper integration test. My plan is to go with a Future task to solve this issue, but I’ll have to figure out the best way to bring Futures, threads, (NIO) and the asynchronous Jingle protocol together. This will probably be the topic of the second coding phase :)

The implementation of the Jingle transport fallback functionality works! When a transport method (e.g. SOCKS5) fails for some reason (e.g. proxy servers are not reachable), my implementation can fall back to another transport instead. In my case the session switches to an InBandBytestream transport, since I have no other transports implemented yet, but in the future the fallback method will try all available transports.

I started on creating a small client/server application that will utilize NIO to handle multiple connections on a single thread as a small apprentice piece. I hope to get more familiar with NIO to start integrating non-blocking IO into the jingle filetransfer code.

In the last days I got some questions about my OMEMO module, which illustrated very nicely to me that developing a piece of software does not only mean writing the code, but also maintaining it and supporting the people that end up using it. My focus lies on my GSoC project though, so I mostly tell those people how to fix their issues on their own.

Last but not least, a somewhat political remark: at the end of GSoC, students will receive a little gift from Google (typically a T-shirt). Unfortunately not all successful students will receive one, due to some countries being “restricted”. Among those countries are Russia, Ukraine and Kazakhstan, but also Mexico. It is sad to see that politics made by few can affect so many.

Working together, regardless of where we come from, where we live and of course who we are, that is something that the world can and should learn from free software development.

Happy Hacking.

Malicious ReplyTo

Evaggelos Balaskas - System Engineer | 07:44, Tuesday, 04 July 2017


Part of my day job is to protect a large mail infrastructure. That means that on a daily basis we are fighting SPAM and trying to protect our customers from any suspicious/malicious mail traffic. This is not an easy job. Actually, globally it is not an easy job. But we are trying, and trying hard.


For the last couple of months, I have been running a project on gitlab, gathering the malicious ReplyTo addresses from already identified spam emails. I was looking for a pattern or something that I could feed our antispam engines with, so that we can identify spam more accurately. It doesn’t seem to work as I thought: spammers can alter their ReplyTo in a matter of minutes!


Here is the list for the last couple months: ReplyTo
I will try to update it from time to time, and hopefully someone will find it useful.

Free domains

It’s not much yet, but even with this small sample you can see that ~50% of phishing goes back to gmail!


More Info

You can contact me with various ways if you are interested in more details.

Preferably via encrypted email: PGP: 0x1c8968af8d2c621f
or via DM in twitter: @ebalaskas


I also keep another list of suspicious fwds, but keep in mind that it might contain some false positives.

Tag(s): spam

Flowhub IoT workshop at Bitraf: sensors, access control, and more

Henri Bergius | 00:00, Tuesday, 04 July 2017

I just got back to Berlin from the Bitraf IoT hackathon we organized in Oslo, Norway. This hackathon was the first of two IoT workshops around MsgFlo and Flowhub IoT. The second will be held at c-base in Berlin this coming weekend.

Bitraf and the existing IoT setup

Bitraf is a large non-profit makerspace in the center of Oslo. It provides co-working facilities, as well as labs and a large selection of computer controlled tools for building things. Members have 24/7 access to the space, and are provided with everything needed for CNC milling, laser cutting, 3D-printing and more.

The space uses the Flowhub IoT stack of MsgFlo and Mosquitto for business-critical things like the door locks that members can open with their smartphone.

Bitraf lock system

In addition to access control, they also had various environmental sensors available on the MQTT network.

With the workshop, our aim was to utilize these existing things more, as well as to add new IoT capabilities. And of course to increase the number of Bitraf members with the knowledge to work with the MsgFlo IoT setup.


Being a makerspace, Bitraf already had everything needed for the physical side of the workshop — tons of sensors, WiFi-enabled microcontrollers, tools for building cases and mounting solutions. So the workshop preparations mostly focused on the software side of things.

The primary tools for the workshop were:

To help visualize the data coming from the sensors people were building, I integrated the NASA OpenMCT dashboard with MsgFlo and the InfluxDB time series database. This setup is available in the cbeam-telemetry-server project.

OpenMCT at Bitraf

This gave us a way to send data from any interesting sensors in the IoT network to a dashboard and visualize it. Down the line the persisted data can also be interesting for further analysis or machine learning.

Kick-off session

We started the workshop with a quick intro session about Flowhub, MsgFlo, and MQTT development. There is unfortunately no video, but the slides are available:

<iframe allowfullscreen="true" frameborder="0" height="569" mozallowfullscreen="true" src=";loop=false&amp;delayms=3000" webkitallowfullscreen="true" width="960"></iframe>

After the intro, we did a round of all attendees to see what skills people already had, and what they were interested in learning. Then we started collecting ideas of what to work on.

Bitraf IoT ideas

People picked their ideas, and the project work started.

Idea session at Bitraf IoT

I’d like to highlight a couple of the projects.

New sensors for the makerspace

Teams at work

Building new sensors was a major part of the workshop. There were several projects, all built on top of msgflo-arduino and the ESP8266 microcontroller:

Working on a motion sensor

There was also a project to automatically open and close windows, but this one didn’t get completed over the weekend. You can follow the progress in the altF4 GitHub repo.

Tool locking

All hackerspaces have the problem that people borrow tools and then don’t return them when finished. This means that the next person needing the tool will have to spend time searching for it.

To solve this, the team designed a system that enables tools to be locked to a wall, with a web interface where members can “check out” a tool they want to use. This way the system constantly knows which tools are in their right places, which tools are in use, and by whom.

You can see the tool lock system in action in this demo video:

<iframe allowfullscreen="" frameborder="0" height="480" src="" width="853"></iframe>

Source code and schematics:

After the hackathon

Before my flight out, we sat down with Jon to review how things went. In general, I think it is clear the event was a success — people got to learn and try new things, and all projects except one were completed during the two days.

Our unofficial goal was to double the number of nodes in the Bitraf Flowhub graph, and I think we succeeded in this:

Bitraf as a graph

Here are a couple of comments from the attendees:

Really fun and informative. The development pipeline also seems complete. Made it a lot easier for beginner to get started.

this was a very fantastic hackathon! Lots of interesting things to learn, very enthusiastic participants, great stewardship and we actually got quite a few projects finished. Well done everbody.

In general the development tools we provided worked well. Everybody was able to run the full Flowhub IoT environment on their own machines using the Docker setup we provided. And apart from a couple of corner cases, msgflo-arduino was easy to get going on the NodeMCUs.

With these two, everybody could easily wire up some sensors and see their data in both Flowhub and the OpenMCT dashboard. From the local setup going to production was just a matter of switching the MQTT broker configuration.

If you’d like to have a similar IoT workshop at your company, we’re happy to organize one. Get in touch!

Friday, 30 June 2017

Back to the Hurd

David Boddie - Updates (Full Articles) | 17:09, Friday, 30 June 2017

Last year I looked at Debian GNU/Hurd, using the network installer to set up a working environment in kvm. Since then I haven't really looked at it very much, so when I saw the announcement of the latest release I decided to check it out and see what had changed over the last few months. I also thought it might be interesting to try to run some of my own software on the system to see if there are any compatibility issues I need to be aware of. This resulted in a detour to port some code to Python 3 and a few surprises when code written on a 64-bit system found itself running on a 32-bit system.

A New Installation

As before, I created a blank disk image, downloaded the network installer and booted it using kvm:

qemu-img create hurd-install-2017.qemu 5G
sha512sum debian-hurd-2017-i386-NETINST-1.iso
# Check the hash of this against those listed on this page.
kvm -m 1024M -drive file=hurd-install-2017.qemu,cache=writeback -cdrom debian-hurd-2017-i386-NETINST-1.iso -boot d -net user -net nic

The default pseudo-graphical installation method seems to work well, though the graphical one also worked nicely. The text-based method didn't seem to work at all. After doing all the usual things a Debian installation process requires, such as defining the keyboard layout and partitioning disks, it's possible to boot the hard disk image and get going with GNU/Hurd again. I use the -redir option to allow me to log into a running environment with ssh via a non-standard port on the host machine:

kvm -m 1024M -drive file=hurd-install-2017.qemu,cache=writeback -cdrom debian-hurd-2017-i386-NETINST-1.iso -boot c -net user -net nic -redir tcp:2222::22

The Debian GNU/Hurd Configuration page covered all the compatibility issues I encountered, though some issues mentioned there did not cause problems for me. For example, I didn't need to explicitly enable the swap partition. On the other hand, I needed to reconfigure Xorg, as suggested, to allow any user to start an X session; not the "Console Users Only" option, but the "Anybody" option.

I tried running a few desktop environments to see which of them I would like to use, and which run acceptably in kvm without any graphics acceleration. Although MATE, LXDE and XFCE4 all run, I found that I preferred LXQt. However, none of these were as responsive as Blackbox which, for the moment, is as much as I need in a window manager.

A Python 3 Diversion

The end result.

With a graphical environment in place, I wanted to try some software I'd written to see if there were any compatibility issues with running it on GNU/Hurd. I decided to try one of my tools for editing retro game maps. However, it turned out that this PyQt 4 application wouldn't run correctly, crashing with a bus error. This seems to be a compatibility problem with Qt 4 because simple tests with widgets would fail with this library while similar tests with Qt 5's widgets worked fine. At this point it seemed like a good idea to port the tool to PyQt 5.

Since PyQt 5 is compatible with versions of Python from 2.6 up to the latest 3.x releases, I could have just tweaked the tool to use PyQt 5 and left it at that. However, I get the impression that many of the developers working with PyQt 5 are using Python 3, so I also thought it would be a good excuse to try and port the tool to Python 3 at the same time.

One of the first things that many people think about when considering porting from Python 2 to Python 3, apart from the removal of the print statement, is the change to the way Unicode strings are handled. In this application we hardly care about Unicode at all because, in the back end modules at least, all our strings contain ASCII characters. However, these strings are really 8-bit strings containing binary data rather than printable text, so we might welcome the opportunity to stop misusing strings for this purpose and embrace Python 3's byte strings (bytes objects). This is where the fun started.

First of all, we have to think about all the places where we open files, ensuring that those files are opened in binary mode, using the "rb" mode string. I've been quite careful over the years to do this for binary files, even though you could get away with using "r" on its own on many platforms. Still, it's good to be explicit and Python 3 now rewards us by returning byte strings. So we now pass these around in our application and process them a bit like the old-style strings. We should still be able to use ord to translate single byte characters to integer values; chr is no longer used for the reverse translation. The problems start when we start slicing up the data.
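As a quick, self-contained sketch of the binary-mode point (the file name here is a temporary one created just for the demonstration):

```python
import os
import tempfile

# Write some binary data to a temporary file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x10ABC\x00")
    path = f.name

# Opened with "rb", read() returns a bytes object in Python 3.
with open(path, "rb") as f:
    data = f.read()

print(type(data), data)   # <class 'bytes'> b'\x10ABC\x00'
os.remove(path)
```

With "r" instead of "rb", Python 3 would try to decode the file as text, so being explicit about binary mode is no longer just good style.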

In Python 2, we can use the subscript or slicing notation to access parts of strings that we want to convert to integer values, perhaps using the struct module to ensure that we are decoding and encoding data consistently. When we access a string in this way, we get a string of zero or more 8-bit characters:

# Python 2:
>>> a = "\x10ABC\0"
>>> print repr(a[0]), repr(a[1:5]), repr(a[5:])
'\x10' 'ABC\x00' ''

In Python 3, using an equivalent byte string, we find that we get something different for the case where we access a single 8-bit character:

# Python 3:
>>> a = b"\x10ABC\0"
>>> print(repr(a[0]), repr(a[1:5]), repr(a[5:]))
16 b'ABC\x00' b''

In some ways it's more convenient to get an integer instead of a single byte string. It means we can remove lots of ord calls. The problem is that it introduces inconsistency in the way we process the data: we can no longer treat single byte accesses in the same way as slices or join a series of single bytes together using the + operator. The work around for this is to use slices for single byte accesses, too, but it seems slightly cumbersome:

# Python 3:
>>> print(repr(a[0:1]), repr(a[1:5]), repr(a[5:]))
b'\x10' b'ABC\x00' b''

This little trap means that we need to be careful in other situations. For example, where we might have iterated over a string to extract the values of each byte, we now need to think of an alternative way to do this:

# Python 2:
>>> a = b"\x10ABC\0"
>>> map(ord, a)
[16, 65, 66, 67, 0]
# Python 3:
>>> a = b"\x10ABC\0"
>>> list(map(ord, a))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: ord() expected string of length 1, but int found

We could use the struct module's unpack function or pass a lambda that returns the value passed to it. Both of these seem a bit unwieldy for the case where we just want to access single bytes sequentially. There's probably an easy way to do this; it's just that I haven't learned the Python 3 idioms for this yet.
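As it happens, the idiom is simple: iterating over a bytes object in Python 3 already yields integers, so list() alone replaces map(ord, ...):

```python
a = b"\x10ABC\0"

# Iterating over bytes yields integers directly; no ord() needed.
values = list(a)
print(values)             # [16, 65, 66, 67, 0]

# The same is true in a loop or a generator expression.
total = sum(byte for byte in a)
print(total)              # 214
```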

We also run into an interesting problem when we want to convert integers back into a byte string. For a list of integers, we use the bytes class as you might expect:

>>> a = [16, 65, 66, 67, 0]
>>> bytes(a)
b'\x10ABC\x00'
However, for a single integer, what do we do? Let's try passing the single value:

>>> a = 3
>>> bytes(a)
b'\x00\x00\x00'

That's not what we wanted. We can't use the chr function instead because that's now used for creating Unicode strings. The answer is to wrap the value in a list:

>>> a = 3
>>> bytes([a])
b'\x03'

The conclusion here seems to be to keep all the values extracted from byte strings in lists and only use slices on them so that we can reconstruct byte strings more easily later. Most of the other problems I encountered were due to the lazy evaluation of built-in functions like map and range. Where appropriate, these had to be wrapped in calls to list.
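For instance, map and range now return lazy objects rather than lists, so code that expects a list has to wrap them explicitly:

```python
a = b"\x10ABC\0"

# In Python 3, map() and range() are lazy; they do not produce
# lists until you ask for them.
m = map(lambda b: b + 1, a)
r = range(5)
print(m)                  # something like <map object at 0x...>
print(list(m))            # [17, 66, 67, 68, 1]
print(list(r))            # [0, 1, 2, 3, 4]
```

Note that a map object is also single-use: once consumed by list(), iterating it again yields nothing.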

Converting the GUI code to PyQt 5 was a minor task after the porting to Python 3 since the classes in the QtWidgets module behave more or less the same as before. For example, QFileDialog.getOpenFileName returns a tuple instead of a single file name, but this was quickly fixed, and I could discard a few obsolete calls to Python 2's unicode class.

Python 3's handling of byte strings is a mixed bag. On one hand I can see the benefits of exposing single bytes as integers, and understand that there is a certain logical consistency in expecting developers to use slices everywhere when handling byte strings. On the other hand it seems like a solution based on an idea of theoretical purity more than practicality, and it seems inconsistent with the approach of returning different item types for single and multiple values when accessing what is effectively still a string of characters.

32-Bit Surprise

With the Python 3 porting project out of the way, I turned my attention to a current Python 2 project. I wanted to see if my DUCK tools would run without problems. Initially, everything looked fine, as you might expect from taking something developed on one flavour of Debian and running it on another. However, testing the packages produced by the compiler led to unexpected crashes. To cut a long story short, the problem was due to an inconsistency in the Python type system on architectures of different sizes.

To illustrate the problem, let's assign an integer value to a variable on our 32-bit and 64-bit Python installations. Here's the 64-bit version:

# 64-bit
>>> 0x7fffffff
2147483647
>>> 0xffffffff
4294967295

That looks fine. Just what we would expect. Let's see the 32-bit version:

# 32-bit
>>> 0x7fffffff
2147483647
>>> 0xffffffff
4294967295L

So the second value is a long value in this case. That's useful to know, but it means we cannot rely on Python's type system to give us a single type for values up to the precision of Dalvik's long type. Another related problem is that the struct module defines different sizes for the long type depending on whether the platform is 32-bit or 64-bit.

These issues can be worked around. They help remind us that we need to test our software on different configurations. Incidentally, the int type is finally unified in Python 3, though the sizes the struct module uses for the long type still depend on the platform's underlying architecture.
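The struct module difference can be checked directly: native formats follow the platform's C types, while formats with an explicit byte-order prefix have fixed sizes. A minimal sketch:

```python
import struct

# Native format: 'l' is the platform's C long, so its size varies
# between builds (4 bytes on 32-bit, typically 8 on 64-bit Unix).
print(struct.calcsize("l"))

# Standard formats with an explicit byte order are always fixed:
print(struct.calcsize("<l"))   # always 4
print(struct.calcsize("<q"))   # always 8
```

Using the prefixed standard formats is the usual way to keep file formats decoding consistently across architectures.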

What's Next?

I'll continue to play with GNU/Hurd for a while. The system seems pretty stable so far, with the only instabilities I've encountered coming from running different graphical environments under Xorg. I'll try to start looking at Hurd-specific features now that I have something I can conveniently dip into from time to time.

Categories: Free Software, Android, Python

A FOSScamp by the beach - fsfe | 08:47, Friday, 30 June 2017

I recently wrote about the great experience many of us had visiting OSCAL in Tirana. Open Labs is doing a great job promoting free, open source software there.

They are now involved in organizing another event at the end of the summer, FOSScamp in Syros, Greece.

Looking beyond the promise of sun and beach, FOSScamp is also just a few weeks ahead of the Outreachy selection deadline so anybody who wants to meet potential candidates in person may find this event helpful.

If anybody wants to discuss the possibilities for involvement in the event then the best place to do that may be on the Open Labs forum topic.

What will tomorrow's leaders look like?

While watching a talk by Joni Baboci, head of Tirana's planning department, I was pleasantly surprised to see this photo of Open Labs board members attending the town hall for the signing of an open data agreement:

It's great to see people finding ways to share the principles of technological freedoms far and wide and it will be interesting to see how this relationship with their town hall grows in the future.

Thursday, 29 June 2017

STARTTLS with CRAM-MD5 on dovecot using LDAP

Evaggelos Balaskas - System Engineer | 23:06, Thursday, 29 June 2017


I should have written this post like a decade ago, but laziness got the better of me.

I use TLS with my IMAP and SMTP mail server. That means that I encrypt the connection by protocol against the mail server and not by port (SSL vs TLS). Although I do not accept any authentication before the STARTTLS command has been issued (meaning no cleartext passwords in authentication), I was leaving the PLAIN TEXT authentication mechanism in the configuration. That’s not an actual problem unless you are already on the server and trying to connect on localhost, but I can do better.


I use OpenLDAP as my backend authentication database. Before anything else, the LDAP userPassword attribute must be changed from cleartext to CRAM-MD5.

Typing the doveadm command from dovecot with the password method:

# doveadm pw

Enter new password:    test
Retype new password:   test

will return the CRAM-MD5 hash of our password (test)
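As background on what this mechanism actually does on the wire, here is a sketch of the client side of CRAM-MD5 as defined in RFC 2195, using only Python's standard library. The username, password and challenge below are the RFC's sample values, not real credentials:

```python
import hashlib
import hmac

def cram_md5_response(user, password, challenge):
    """Compute the client's CRAM-MD5 response (RFC 2195):
    the username, a space, and the hex HMAC-MD5 digest of the
    server's challenge keyed with the password."""
    digest = hmac.new(password.encode(), challenge.encode(),
                      hashlib.md5).hexdigest()
    return "%s %s" % (user, digest)

# Sample challenge and credentials from the RFC 2195 example.
challenge = "<1896.697170952@postoffice.reston.mva.net>"
print(cram_md5_response("tim", "tanstaaftanstaaf", challenge))
```

The password itself never crosses the wire, only the keyed digest of a one-time challenge, which is why the server needs the CRAM-MD5 secret stored rather than a cleartext bind.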

Then we need to edit our DN (distinguished name) with ldapvi:


userPassword: test


userPassword: {CRAM-MD5}e02d374fde0dc75a17a557039a3a5338c7743304777dccd376f332bee68d2cf6


Dovecot is not only the imap server but also the “Simple Authentication and Security Layer”, aka SASL, service. That means that imap & smtp speak with dovecot for authentication, and dovecot uses ldap as the backend. To change AUTH=PLAIN to cram-md5 we need to make the change below:

file: 10-auth.conf


auth_mechanisms = plain


auth_mechanisms = cram-md5

Before restarting dovecot, we need to make one more change. This step took me a couple of hours to figure out! In our dovecot-ldap.conf.ext configuration file, we need to tell dovecot NOT to bind to ldap for authentication but to let dovecot handle the authentication process itself:


# Enable Authentication Binds
# auth_bind = yes


# Enable Authentication Binds
auth_bind = no

To guarantee that the entire connection is protected by TLS encryption, change the setting below in 10-ssl.conf:


ssl = yes


ssl = required

SSL/TLS is always required, even if non-plaintext authentication mechanisms are used. Any attempt to authenticate before SSL/TLS is enabled will cause an authentication failure.

After that, restart your dovecot instance.


# telnet imap

Trying ...
Connected to
Escape character is '^]'.

1 LOGIN test

1 NO [ALERT] Unsupported authentication mechanism.
telnet> clo

That means no cleartext authentication is permitted.


Now the hard part: the mail clients.


My webmail client, rainloop, supports CRAM-MD5 since v1.10.1.123.
To verify that, open the application.ini file under your data folder and search for something like this:

    imap_use_auth_plain = On
    imap_use_auth_cram_md5 = On
    smtp_use_auth_plain = On
    smtp_use_auth_cram_md5 = On

As a bonus, rainloop supports STARTTLS and authentication for imap & smtp, even when talking to





How F-Droid is Bringing Apps to Cuba

Free Software – | 18:23, Thursday, 29 June 2017

Only in 2015, when the government opened the first public WiFi hotspots in the country, did internet access become available to ordinary Cubans. Before that, even though modern mobile phones had already found their way into the country, they were mostly used off-line. Now, all these phones can be connected to the internet. However, at 1.50 CUC per hour, this is still not affordable to most Cubans whose average salary is only 20 CUC per month.

Mobile Phone Shop offering Apps and Updates

So it is not surprising that most Cubans do not use what little expensive bandwidth they have available to download apps, but use it for other things such as communicating with their relatives. If they want apps, they can go to one of the many mobile phone repair shops and get them installed there for a nominal fee.

In order to distinguish themselves from the tough competition, some shops take an interesting approach: They open up their own app store in a local (offline) WiFi network to attract customers.

The only existing technology that allows everybody to open their own app store is called F-Droid. It is a full-blown app store kit that comes with all the tools and documentation required to open up your own store: an app store server, the app to install the apps with on your phone and a curation tool to manage your store with ease.

DroidT’s F-Droid Repository

Normally, the F-Droid app that you can download comes with thousands of useful Free Software apps without advertising and user tracking. These however require an internet connection to be downloaded from F-Droid’s official server. Thankfully, F-Droid allows you to add other repositories to it. These work similarly to the package repositories that you might know from GNU/Linux distributions. Everybody can create their own repository, even in an offline network. As long as people are connected to the same WiFi, they can use your repository to install and update your apps.

WiFi Antenna Outside of the Shop

This is exactly what the DroidT shop that I visited in Sancti Spíritus, Cuba, does. They run their own repository within their local WiFi network. Currently, they offer more than 2000 apps, mostly games, but also other useful apps, for free to everybody within range of their WiFi router. Since they have a worse store location than their competitors, this really helps them drive people to their shop and build up their reputation within the local community. So far, they are the only store in their city offering this free and convenient service to their customers.


A Cuban child looking through the print-out of available apps

The apps are downloaded by the store employees once, put into the repository, and are then available to an unlimited number of customers without ever needing to connect to the internet again. The DroidT team is even going the extra mile by also offering Spanish app metadata, such as summaries and app descriptions, to make it easier for people to find the app that they are looking for.

At the moment they are working on making use of the screenshot feature that was recently introduced to the F-Droid app, so their customers can browse through screenshots before deciding whether or not to install an app.

A Screenshot of DroidT’s app categories

Another nice service they offer is app updates, securely delivered by F-Droid. When somebody who has already downloaded apps from their repository comes into the range of their WiFi again, F-Droid will check for updates and offer to install the ones that are available. This is a huge advantage in a country where people normally never care about updates, at the expense of security.

So it comes as no surprise that their slogan is “BE UPDATE”, which even hangs in huge letters right above the desk in their shop. The repository, by the way, is hosted on the server that you can see directly above the banner with their slogan.

It is great to see that the free technology that the F-Droid community worked on for so many years to liberate Android users is put to unexpected use by talented individuals in other countries like Cuba and helps to make a difference there.

For the sake of completeness however, it should be mentioned that ordinary downloads and F-Droid are not the only ways that Cubans use to get apps. There are weekly data packages that are sent through the country on hard-drives by bus. These Paquete Semanal contain mostly movies and TV shows, but also news and apps.

Then, there is a proprietary app called Zapya that is very popular among young Cubans, because it allows them to wirelessly share apps and other files between two phones. Although F-Droid offers the same functionality, this is not yet widely known, and the feature is currently being improved to match the simplicity that Zapya offers to Cubans at the moment.

A Cuban sharing apps with Zapya


Technoshamanism in Aarhus – rethink ancestrality and technology

agger's Free Software blog | 04:35, Thursday, 29 June 2017


On August 12, 2017 from 14:00 to 22:00 there will be a technoshamanism meeting in Dome of Visions, Aarhus, Denmark.

The purpose of the meeting is to unite people who are interested in the combination of DIY technologies such as free software and permaculture with ancestral, ancestorfuturist and shamanistic practices. We are calling all the cyborgs, witches, heretics, technoshamans, programmers, hackers, artists, alchemists, thinkers and everyone who might be curious to join us and explore the possibilities of combining ancestorfuturism, perspectivism, and new and old indigenism in the middle of the climate changes of the Anthropocene.

If you feel attracted by the combination of these terms, techno + shamanism and ancestrality + futurism and if you’re worried about the destruction of the Earth and the hegemony of capital and neoliberal ontologies, this event is for you. In view of Aarhus’ slogan as European Cultural Capital of 2017, the theme of this event could be: Rethink ancestrality and technology!

We welcome all proposals for rituals, musical and artistic performances, talks, discussions and technological workshops. Please send your proposal to

The proposal needs to be short (250 words) with your web site (if any) and a short bio.



The verbal talks will be structured as roundtable discussions with several participants, which will be recorded and simultaneously live-streamed as Internet radio.


  • TECHNOSHAMANISM – What is it?
  • Ancestrality and ancestrofuturism
  • Experiences from the II International Festival and other events
  • Immigration and new ontologies arriving in Europe
  • Self-organizing with free software and DIY technology
  • Your proposal!


A collaborative DIY ritual to end the event – bring costumes, proposals, visual effects, ideas and musical instruments.

We welcome proposals for all kinds of performance, rituals and narratives along the lines of this open call – all proposals to be sent to


When we have received your proposals, we will organize them and publish a detailed program around August 1, for the discussions and workshops as well as for the rituals.


If you don’t live in Aarhus and need accommodation, that can be arranged (for free). Bring your sleeping bag!


This encounter is organized by Carsten Agger, Beatriz Ricci, Fabiane M. Borges and Ouafa Rian.


Tecnoshamanism is an international network for people who are interested in living out their ideas in everyday life while focusing on open science, open technology, and free and DIY cosmological visions, and who feel the necessity of maintaining a strong connection to the Earth as a living, ecological organism.

In recent years, we have had meetings in Spain, England, Denmark, Ecuador, Colombia, Brazil, Germany, and Switzerland. In November 2016, we had the II International Festival of Tecnoxamanism in the indigenous Pataxó village of Pará in Bahia, Brazil. The purpose of this meeting is to discuss technoshamanism as outlined above and to strengthen and grow this network, hopefully reaching out to new partners in Denmark and beyond. The network is based in Brazil but draws inspiration from all over the world.

You can find more information on technoshamanism in these articles:


This event is supported by Digital Living Research Commons, Aarhus University.

Wednesday, 28 June 2017

Fourth week of GSoC and OMEMO thoughts

vanitasvitae's blog » englisch | 00:08, Wednesday, 28 June 2017


The evaluation phase is here! Time went by faster than I expected. That’s a good sign; I really enjoy working on Smack (except when my code does not work :D ).

I spent this week working on the next iteration of my Jingle code. IBB once again works, but SOCKS5 still misses a tiny bit, which I struggle to find. For some reason the sending thread hangs up and blocks just before sending begins. It’s probably a minor issue which can be fixed by changing one line of code; nevertheless, I’m struggling to find the solution.

Apart from that, it turned out that a bug in smack-omemo which I was earlier (unsuccessfully) trying to solve has resurrected itself from the land of closed bug reports. Under very special conditions, pubsub results seem to arrive in Smack exactly after the result listener has timed out. This keeps smack-omemo from fetching device bundles on some servers. I originally thought this was a configuration issue with my server, but it turned out that aTalk (a Jitsi for Android fork which is working on including OMEMO support) faces the exact same issue. Looks like I’ll have to investigate this issue once more.

OMEMO – Reflections

It appears that the council will soon decide on the future of OMEMO. I really hope that the decision will make everyone happy and that the XMPP community (and, perhaps more importantly, the users) will benefit from the outcome.

There are multiple options for the direction which the development of OMEMO can take. While everybody agrees that something should happen, it is not clear what.

OMEMO is already rather well deployed in the wild, so it is obviously not a good idea to break compatibility with existing implementations. A few months ago, OMEMO underwent a very minor protocol change to mitigate a vulnerability, and even though Conversations (the birthplace of OMEMO) made a smooth transition over multiple versions, things broke and confused users. While things like this probably cannot be avoided in a protocol which is alive and changing, it is desirable to avoid breaking stuff for users. Technology is made for users; keeping them happy should be the highest priority.

However, at the moment not every user can benefit from OMEMO, since the current specification is based on libsignal, which is licensed under the GPLv3. This makes integration into permissively licensed software either expensive (buying a license from OWS) or laborious (substituting libsignal with another double ratchet library). While the first option is probably unrealistic, the second option has been further investigated during discussions on the standards mailing list.

Already the “official” specification of OMEMO is based on the Olm library instead of libsignal. Olm more or less resembles OWS’s double ratchet specification, which was published by OWS roughly half a year ago. Both Olm and libsignal have been audited, though libsignal got significantly more attention, both in the media and among experts. One key difference between the two libraries is that libsignal uses the Extended Triple Diffie-Hellman (X3DH) key exchange with the XEdDSA signature scheme. The latter allows a signature key to be derived from an encryption key, which spares one key. In order to use this functionality, a special conversion algorithm is used. While apparently this algorithm is not too hard to implement, there is no permissive implementation available in any established, trusted crypto library.

In order to work around this issue, one proposal suggests replacing X3DH completely and switching to another form of key exchange. Personally, I’m not sure whether it is desirable to change a whole (audited) protocol in order to replace a single conversion algorithm. Given the huge echo of the Signal protocol in the media, it is only a matter of time until the conversion algorithm makes its way into approved libraries and frameworks. Software follows the principle of supply and demand, and as we can conclude from the mailing list discussion, there is quite a lot of demand in the world of XMPP alone.

I guess everybody agrees that it is inconvenient that the “official” OMEMO XEP does not represent what is currently implemented in the real world (siacs OMEMO). It is open to debate how to continue from here. One suggestion is to document the current state of siacs OMEMO in a historical XEP and continue development of the protocol in a new one. While starting from scratch offers a lot of new possibilities and allows for quicker development, it also almost certainly implies completely breaking compatibility with siacs OMEMO at the users' expense. XMPP is nearly 20 years old now; I do not believe that we are in a hurry :) .

I think siacs OMEMO should not get frozen in time in favor of OMEMO-NEXT. Users are easily confused by names, so I fear that two competing but incompatible OMEMOs are definitely not the way to go. On the other hand, a new standard with a new name that serves the exact same use case is also a bad idea. OMEMO works; why should users switch? Instead, I'd like to see a smooth transition to a more developer-friendly, permissively implementable protocol with broad currency. There are already OTR and OpenPGP as competitors; we don't need even more fragmentation (let alone the same name for different protocols). I propose to ditch the current official OMEMO XEP in favor of the siacs XEP and from there on go with Andreas' suggestion, which allows both libsignal and Olm to be used simultaneously. This enables permissive implementations of OMEMO, drives development of a permissively licensed conversion algorithm, and does not leave users standing in the rain.

To conclude my reflections: users who use OMEMO now will still be using OMEMO in two or three years. It is up to the community to decide whether that will be a frozen, “historical” siacs OMEMO, or a collectively developed, smoothly transitioned and unified OMEMO.

Thank you for your time :)

Tuesday, 27 June 2017

How did the world ever work without Facebook?

 - fsfe | 19:29, Tuesday, 27 June 2017

Almost every day, somebody tells me there is no way they could survive without some social media service like Facebook or Twitter. Otherwise mature adults are fearful that without these dubious services they would have no human contact ever again, they would die of hunger, and the sky would come crashing down too.

It is particularly disturbing for me to hear this attitude from community activists and campaigners. These are people who aspire to change the world, but can you really change the system using the tools the system gives you?

Revolutionaries like Gandhi and the Bolsheviks don't have a lot in common, but both of them changed the world, and both of them did so by going against the system. Gandhi, of course, relied on non-violence, while the Bolsheviks continued to rely on violence long after taking power. Neither of them needed social media, but both are likely to be remembered far longer than any viral video clip you have seen recently.

With US border guards asking visitors for their Facebook profiles and Mark Zuckerberg being a regular participant at secretive Bilderberg meetings, it should be clear that Facebook and conventional social media is not on your side, it's on theirs.

Kettling has never been easier

When street protests erupt in major cities such as London, the police build fences around the protesters, cutting them off from the rest of the world. They become an island in the middle of the city, like a construction site or broken down bus that everybody else goes around. The police then set about arresting one person at a time, taking their name and photograph and then slowly letting them leave in different directions. This strategy is called kettling.

Facebook helps kettle activists in their armchairs. The police state can gather far more data about them, while their impact is even more muted than if they ventured out of their homes.

You are more likely to win the lottery than make a viral campaign

Every week there is news about some social media campaign that has gone viral. Every day, marketing professionals, professional campaigners and motivated activists sit at their computer spending hours trying to replicate this phenomenon.

Do the math: how many of these campaigns can really be viral success stories? Society can only absorb a small number of these campaigns at any one time. For most of the people trying to ignite such campaigns, their time and energy is wasted, much like money spent buying lottery tickets and with odds that are just as bad.

It is far better to focus on the quality of your work in other ways than to waste any time on social media. If you do something that is truly extraordinary, then other people will pick it up and share it for you and that is how a viral campaign really begins. The time and effort you put into trying to force something to become viral is wasting the energy and concentration you need to make something that is worthy of really being viral.

An earthquake and an escaped lion never needed to announce themselves on social media to become an instant hit. If your news isn't extraordinary enough for random people to spontaneously post, share and tweet it in the first place, how can it ever go far?

The news media deliberately over-rates social media

News media outlets, including TV, radio and print, gain a significant benefit crowd-sourcing live information, free of charge, from the public on social media. It is only logical that they will cheer on social media sites and give them regular attention. Have you noticed that whenever Facebook's publicity department makes an announcement, the media are quick to publish it ahead of more significant stories about social or economic issues that impact our lives? Why do you think the media puts Facebook up on a podium like this, ahead of all other industries, if the media aren't getting something out of it too?

The tail doesn't wag the dog

One particular example is the news media's fascination with Donald Trump's Twitter account. Some people have gone as far as suggesting that this billionaire could have simply parked his jet and spent the whole of 2016 at one of his golf courses sending tweets and he would have won the presidency anyway. Suggesting that Trump's campaign revolved entirely around Twitter is like suggesting the tail wags the dog.

The reality is different: Trump has been a prominent public figure for decades, both in the business and entertainment world. During his presidential campaign, he had at least 220 major campaign rallies attended by over 1.2 million people in the real world. Without this real-world organization and history, the Twitter account would have been largely ignored like the majority of Twitter accounts.

On the left of politics, the media have been just as quick to suggest that Bernie Sanders and Jeremy Corbyn have been supported by the "Facebook generation". This label is superficial and misleading. The reality, again, is a grassroots movement that has attracted young people to attend local campaign meetings in pubs up and down the country. Getting people to go out and be active is key. Social media is incidental to their campaign, not indispensable.

Real-world meetings, big or small, are immensely more powerful than a social media presence. Consider the Trump example again: if 100,000 people receive one of his tweets, how many even notice it in the non-stop stream of information we are bombarded with today? On the other hand, if 100,000 bellow out a racist slogan at one of his rallies, is there any doubt whether each and every one of those people is engaged with the campaign at that moment? If you could choose between 100 extra Twitter followers or 10 extra activists attending a meeting every month, which would you prefer?

Do we need this new definition of a Friend?

Facebook is redefining what it means to be a friend.

Is somebody who takes pictures of you and insists on sharing them with hundreds of people, tagging your face for the benefit of biometric profiling systems, really a friend?

If you want to find out what a real friend is and who your real friends really are, there is no better way to do so than blowing away your Facebook and Twitter accounts and waiting to see who contacts you personally about meeting up in the real world.

If you look at a profile on Facebook or Twitter, one of the most prominent features is the number of friends or followers they have. Research suggests that humans can realistically cope with no more than about 150 stable relationships. Facebook, however, has turned Friending people into something like a computer game.

This research is also given far more attention than it deserves, though: the number of really meaningful friendships that one person can maintain is far smaller. Think about how many birthdays and spouses' names you can remember, and that may be the number of real friendships you can manage well. In his book Busy, Tony Crabbe suggests that between 10 and 20 friendships fall into this category, and that you should spend your time with these people rather than letting it be spread thinly across superficial Facebook "friends".

This same logic can be extrapolated to activism and marketing in its many forms: is it better for a campaigner or publicist to have fifty journalists following him on Twitter (where tweets are often lost in the blink of an eye) or three journalists who he meets for drinks from time to time?

Facebook alternatives: the ultimate trap?

Numerous free, open source projects have tried to offer an equivalent to Facebook and Twitter. GNU social and Diaspora are among the better known examples.

Trying to persuade people to move from Facebook to one of these platforms rarely works. In most cases, Metcalfe's law suggests the size of Facebook will suck them back in like the gravity of a black hole.

To help people really beat these monstrosities, the most effective strategy is to help them live without social media, whether it is proprietary or not. The best way to convince them may be to give it up yourself and let them see how much you enjoy life without it.

Share your thoughts

The FSFE community has recently been debating the use of proprietary software and services. Please feel free to join the list and click here to reply on the thread.

Monday, 26 June 2017

“Join us now” … at SHA2017

English Planet – Dreierlei | 08:28, Monday, 26 June 2017

Summary: For the good vibe, we are planning another round of Free Software song sing-along-sessions at the FSFE village during SHA-camp in August this year. Thanks to Benjamin Wand, we are even running a project to bring together a choir that performs on stage and engages the audience in public crowdsinging. Read the details in this post, spread the word and join us!
Also read about the other projects and the current status of the FSFE village at the end of this post.

If you have been at the FSFE assembly at 33C3 or the year before, you may have seen or even taken part in one of our multiple Free Software Song sing-along-sessions: people gathering at our assembly, bringing instruments, singing together and sharing their love for Free Software.

<figure class="wp-caption alignright" id="attachment_2168" style="width: 295px"><figcaption class="wp-caption-text">“Join us now and share the software. You’ll be free, Hackers!”</figcaption></figure>Because of this good reception, we are already planning similar sessions at the FSFE village during SHA-camp in August this year. And we will go one step further.

Benjamin Wand was so inspired by our sing-along-sessions during 33C3 that he composed a full set of music notation for a choir to sing the Free Software song in four voices. At SHA we would like to give it a try and start a project to bring together a choir that performs the Free Software song. We are reaching out to other assemblies to get a stage and momentum for this. It's a 2h workshop and a potential live act. This is your chance: join us now and sing out your love for Free Software!

In preparation, you can find Benjamin's music sheet on imslp and on musescore, where you can even listen to it. Please find all the other details and updates on the dedicated project page inside the SHA-wiki.

And now to something completely different

We are still preparing our village at SHA-camp to offer you an exciting and inspiring location, dipped into the mindset of Free Software. Our international team is always up for a short or a long talk and sharing knowledge. We will bring the latest promotion material and offer you the ultimate Free Software challenge.

On day 4, I will speak about How to make use of democratic elections for your own purpose. We would also love to self-organize more sessions and are reaching out to other assemblies to make that happen. If you have some space for us, or you are affiliated with the FSFE and would like to give a talk or session, then contact us and we might be able to organise it.

In any case, it is worth checking our FSFE village page from time to time for updates.

Friday, 23 June 2017

Visiting ProgressBar HackerSpace in Bratislava

Evaggelos Balaskas - System Engineer | 11:34, Friday, 23 June 2017

When traveling, I make an effort to visit the local hackerspace. I understand that this is not normal behavior for many people, but for us (free/open source advocates) it is always a must.

This was my 4th week in Bratislava, and for the first time I had a couple of free hours to visit ProgressBar HackerSpace.

For now, they are located in the middle of the historical city, on the 2nd floor. The entrance is on a covered walkway (gallery) between two buildings. There is a bell to ring, and (when members are already inside) the door opens automatically for any visitor. No need to wait or explain why you are there!

Entering ProgressBar there is no doubt that you are entering a hackerspace.


You can view a few photos by clicking here: ProgressBar - Photos

And you can find ProgressBar on OpenStreet Map

Some cool-notable projects:

  • bitcoin vending machine
  • robot arm to fetch clubmate
  • magic wood to switch on/off lights
  • blinkwall
  • Cool T-shirts

Their lab is full of almost anything you need to play/hack with.

I was really glad to make time and visit them.

On brokeness, the live installer and being nice to people

Elena ``of Valhalla'' | 14:57, Friday, 23 June 2017

On brokeness, the live installer and being nice to people

This morning I've read this

I understand that somebody on the internet will always be trolling, but I just wanted to point out:

* that the installer in the old live images has been broken (for international users) for years
* that nobody cared enough to fix it, not even the people affected by it (the issue was reported as known in various forums, but for a long time nobody even opened an issue to let the *developers* know).

Compare this with the current situation, with people doing multiple tests as the (quite big number of) images were being built, and a fix released soon after for the issues found.

I'd say that this situation is great, and that instead of trolling around we should thank the people involved in this release for their great job.

Tuesday, 20 June 2017

No Place To Hide

Evaggelos Balaskas - System Engineer | 22:10, Tuesday, 20 June 2017


An Amazing Book!!!

Must Read !!

I listened to the audiobook in about two days.
Couldn't put it down.

Then organize a CryptoParty at your local hackerspace

Tag(s): books

Third Week of GSoC

vanitasvitae's blog » englisch | 20:05, Tuesday, 20 June 2017

Another week has passed and the first evaluation phase is slowly approaching. While I have already fulfilled my goals (Jingle File Transfer using InBandBytestreams and SOCKS5Bytestreams), I still have a lot of work to do. The first working implementation I did is only so much – working. Barely. Now it's time to learn from the mistakes I made while constructing my prototype and to find better ways to do it in the next iteration. This is what I was up to in the past week and what will keep me from my usual sleep cycle for the coming week(s).

I spent the past week doing groundwork and writing utility classes which will later allow me to send Jingle actions in a clean way. The prototype implementation had all construction of Jingle elements inside the control flow, which made the code very hard to read. This will change in the next iteration.
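To give an idea of what such a utility class could look like, here is a hypothetical sketch. It is not Smack API and not my actual implementation; the class and method names are invented for illustration, and only the `urn:xmpp:jingle:1` namespace and the `action`/`initiator`/`sid` attributes come from XEP-0166.

```java
// Hypothetical builder that assembles a Jingle action element outside of the
// control flow, so the calling code only states *what* it wants to send.
public class JingleActionBuilder {
    private String action;
    private String sessionId;
    private String initiator;

    public JingleActionBuilder setAction(String action) { this.action = action; return this; }
    public JingleActionBuilder setSessionId(String sid) { this.sessionId = sid; return this; }
    public JingleActionBuilder setInitiator(String jid) { this.initiator = jid; return this; }

    // Render the element as XML (a real implementation would build a stanza object).
    public String build() {
        return "<jingle xmlns='urn:xmpp:jingle:1' action='" + action
                + "' initiator='" + initiator + "' sid='" + sessionId + "'/>";
    }

    public static void main(String[] args) {
        String xml = new JingleActionBuilder()
                .setAction("session-initiate")
                .setInitiator("alice@example.org/phone")
                .setSessionId("a73sjjvkla37jfea")
                .build();
        System.out.println(xml);
    }
}
```

The point of the pattern is that the control flow reads as a sequence of intents, while the element assembly lives in one reusable place.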

While I worked on my implementation(s), I detected some errors in the XEPs involved and created pull requests against the xsf/xeps repository. In other spots I found some ambiguities, but unfortunately my questions in the xsf chat went unanswered. In some cases I found the solution myself, though.

Also I began upstreaming some changes and additions to the Smack repository. Parsers and elements of IBB have already been merged, as well as some more additions to the HashManager (XEP-0300) I created earlier, and some tests and fixes for the existing Jingle framework. Still open are my PR for SOCKS5 parsers and the first parts of the Jingle file transfer package.

I also dedicated a tiny little bit of my spare time to a non-GSoC project around a blog post on how to create an OMEMO capable chat client using Smack in less than 200 lines of code. The source code of the example application can be found in the FSFE’s brand new git repository. Unfortunately I also found a small bug in my OMEMO code that I have to fix sometime in the next weeks (nothing crucial, just some annoying faulty behavior).

I plan to spend the coming week working on my Jingle code, so that I have a mostly working framework when the evaluation phase begins.

Thats all for now. Happy Hacking :)

Monday, 19 June 2017

Two hackathons in a week: thoughts on NoFlo and MsgFlo

Henri Bergius | 00:00, Monday, 19 June 2017

Last week I participated in two hackathons, events where a group of strangers form a team for two or three days and build a product prototype. In the end all teams pitch their prototypes, and the best ones are given prizes.

Hackathons are typically organized to get feedback from developers on some new API or platform. Sometimes they’re also organized as a recruitment opportunity.

Apart from the free beer and camaraderie, I like going to hackathons since they're a great way to battle-test the developer tools I build. The time from idea to a running prototype is short, and people are used to different ways of working and different toolkits.

If our tools and flow-based programming work as intended, they should be ideal for this kind of situation.

Minds + Machines hackathon and Electrocute

Minds + Machines hackathon was held on a boat and focused on decarbonizing power and manufacturing industries. The main platform to work with was Predix, GE’s PaaS service.

Team Electrocute

Our project was Electrocute, a machine learning system for forecasting power consumption in a changing climate.

1.5°C is the global warming target set by the Paris Agreement. How will this affect energy consumption? What kind of generator assets should utilities deploy to meet these targets? When, and how much, renewable energy can be utilized?

The changing climate poses many questions to utilities. With Electrocute’s forecasting suite power companies can have accurate answers, on-demand.

Electrocute forecasts

The system was built with a NoFlo web API server talking over MsgFlo with a Python machine learning backend. We also built a frontend where users could see the energy usage forecasts on a heatmap.

NoFlo-Xpress in action

Unfortunately we didn’t win this one.

Recoding Aviation and Skillport

Recoding Aviation was held at hub:raum and focused on improving the air travel experience through usage of open APIs offered by the various participating airports.

Team Skillport

Skillport was our project to make long layovers more bearable by connecting people who’re stuck at the airport at the same time.

Long layovers suck. But there is ONE thing amazing about them: You are surrounded by highly skilled people with interesting stories from all over the world. It sometimes happens that you meet someone randomly - we all have a story like that. But usually we are too shy and lazy to communicate and see how we could create a valuable interaction. You never know if the other person feels the same.

We built a mobile app that turns airports into a networking, cultural exchange and knowledge sharing hub. Users tell each other through the app that they are available to meet and what value they can bring to an interaction.

The app connected with a J2EE API service that then communicated over MsgFlo with NoFlo microservices doing all the interactions with social and airport APIs. We also did some data enrichment in NoFlo to make smart recommendations on meeting venues.

MsgFlo in action

This time our project went well with the judges and we were selected as the winner of the Life in between airports challenge. I’m looking forward to the helicopter ride over Berlin!

Category winners

Skillport also won a space at hub:raum, so this might not be the last you’ll hear of the project…

Lessons learned

Benefits of a message queue architecture

I’ve written before on why to use message queues for microservices, but that post focused more on the benefits for real-life production usage.

The problems and tasks for a system architecture in a hackathon are different. Since the time is short, you want to enable people to work in parallel as much as possible without stepping on each other’s toes. Since people in the team come from different backgrounds, you want to enable a heterogeneous, polyglot architecture where each developer can use the tools they’re most productive with.

MsgFlo is by its nature very suitable for this. Components can be written in any language that supports the message queue used, and we have convenience libraries for many of them. The discovery mechanism makes new microservices appear on the Flowhub graph as soon as they start, enabling services to be wired together quickly.

Mock early, mock often

Mocks are a useful way to provide a microservice to the other team members even before the real implementation is ready.

For example, in the GE Predix hackathon we knew the machine learning team would need quite a bit of time to build their model. Until that point we ran their microservice as a simple msgflo-python component that just returned random() as the forecast.

This way everybody else was able to work with the real interface from the get-go. When the learning model was ready we just replaced that Python service, and everything was live.
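The pattern is independent of the stack. As a minimal sketch in Java (the real service was a msgflo-python component; the interface and all names here are invented for illustration), the mock satisfies the agreed interface and answers with a placeholder value until the real implementation is swapped in:

```java
import java.util.Random;

// Hypothetical contract the machine-learning service will eventually implement.
interface ForecastService {
    double forecastKwh(String region, int year);
}

// Mock implementation: answers immediately with a random placeholder value,
// so the rest of the team can integrate against the interface from day one.
class RandomForecastMock implements ForecastService {
    private final Random random = new Random();

    @Override
    public double forecastKwh(String region, int year) {
        return random.nextDouble() * 1000.0; // placeholder forecast in kWh
    }
}

public class MockDemo {
    public static void main(String[] args) {
        ForecastService service = new RandomForecastMock();
        double kwh = service.forecastKwh("berlin", 2030);
        System.out.println("Forecast: " + kwh + " kWh");
    }
}
```

When the real model is ready, only the binding from `ForecastService` to its implementation changes; every caller keeps working unmodified.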

Mocks can also be useful in situations where you have a misbehaving third-party API.

Don’t forget tests

While shooting for full test coverage is probably not realistic within the time constraints of a hackathon, it still makes sense to have at least some “happy path” tests. When you're working with multiple developers, each building a different part of the service, interface tests serve a dual purpose:

  • They show the other team members how to use your service
  • They verify that your service actually does what it is supposed to

And if you’re using a continuous integration tool like Travis, the tests will help you catch any breakages quickly, and also ensure the services work on a clean installation.
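A happy-path interface test can be very small. This is a generic Java illustration, not tied to any of the hackathon services; the `GreeterService` interface and its implementation are invented, and it deliberately uses no test framework so it stays self-contained:

```java
// Hypothetical service interface, standing in for whatever contract
// the team agreed on.
interface GreeterService {
    String greet(String name);
}

class GreeterImpl implements GreeterService {
    public String greet(String name) { return "Hello, " + name + "!"; }
}

public class HappyPathTest {
    public static void main(String[] args) {
        GreeterService svc = new GreeterImpl();
        String out = svc.greet("team");
        // The test doubles as documentation: it shows how to call the
        // service and pins down the expected shape of the response.
        if (!out.equals("Hello, team!")) {
            throw new AssertionError("unexpected greeting: " + out);
        }
        System.out.println("happy path ok");
    }
}
```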

For a message queue architecture, fbp-spec is a great tool for writing and running these interface tests.

Talk with the API providers

The reason API and platform providers organize these events is to get feedback. For a developer who works with tons of different APIs, this is a great opportunity to make sure your ideas for improvement are heard.

On the flip side, this usually also means the APIs are at a pretty early stage, and you may be the first one using them in a real-world project. When the inevitable bugs arise, it is good to have a channel of communication open with the API provider on site so you can get them resolved or worked around quickly.

Room for improvement

The downside of the NoFlo and MsgFlo stack is that there is still quite a bit of a learning curve. NoFlo documentation is now in a reasonable place, but with Flowhub and MsgFlo we have tons of work ahead on improving the onboarding experience.

Right now it is easy to work with if somebody sets it up properly first, but getting there is a bit tricky. Fixing this will be crucial for enabling others to benefit from these tools as well.

Friday, 16 June 2017

Travel piecepack v0.1

Elena ``of Valhalla'' | 16:06, Friday, 16 June 2017

Travel piecepack v0.1


A set of generic board game pieces is nice to have around in case of a sudden spontaneous need of gaming, but carrying my full set takes some room, and is not going to fit in my daily bag.

I've been thinking for a while that a half-size set could be useful, and between yesterday and today I've actually managed to make the first version.

It's (2d) printed on both sides of a single sheet of heavy paper, laminated and then cut; it comes with both the basic suites and the playing card expansion, and fits in a mint tin divided by origami boxes.

It's just version 0.1 because there are a few issues. First of all, I'm not happy with the manual way I drew the page: ideally it would have been generated programmatically from the same svg files as the 3d piecepack (with the ability to generate other expansions), but apparently reading paths from one svg and writing them into another is not supported in an easy way by the libraries I could find, and looking for one was starting to take much more time than just doing it by hand.
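For what it's worth, the bare copying step is possible with plain namespace-aware DOM APIs; this Java sketch (names invented, and it deliberately sidesteps the genuinely hard parts like transforms and page layout) just clones every `<path>` element from one SVG document into a fresh one:

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;
import java.io.StringReader;
import java.io.StringWriter;

public class SvgPathCopy {
    private static final String SVG_NS = "http://www.w3.org/2000/svg";

    // Copy every <path> element from the source SVG into a fresh SVG document.
    static String copyPaths(String sourceSvg) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // required for getElementsByTagNameNS
        DocumentBuilder db = dbf.newDocumentBuilder();

        Document in = db.parse(new InputSource(new StringReader(sourceSvg)));
        Document out = db.newDocument();
        Element root = out.createElementNS(SVG_NS, "svg");
        out.appendChild(root);

        NodeList paths = in.getElementsByTagNameNS(SVG_NS, "path");
        for (int i = 0; i < paths.getLength(); i++) {
            // importNode(deep = true) clones the element into the target document
            root.appendChild(out.importNode(paths.item(i), true));
        }

        StringWriter sw = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(out), new StreamResult(sw));
        return sw.toString();
    }

    public static void main(String[] args) throws Exception {
        String src = "<svg xmlns='" + SVG_NS + "'><path d='M0 0 L10 10'/></svg>";
        System.out.println(copyPaths(src));
    }
}
```

Of course, placing and scaling the copied paths for a half-size layout is exactly the part this does not solve, which is why doing it by hand remains a reasonable choice.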

I also still have to assemble the dice; in the picture above I'm just using the ones from the 3d-printed set, but they are a bit too big and only four of them fit in the mint tin. I already have the faces printed, so this is going to be fixed in the next few days.

Source files are available in the same git repository as the 3d-printable piecepack, with the big limitation mentioned above; updates will also be pushed there, just don't hold your breath for it :)

Thursday, 15 June 2017

KDE Applications 17.08 Schedule finalized

TSDgeos' blog | 21:50, Thursday, 15 June 2017

It is available at the usual place

Dependency freeze is in 4 weeks and Feature Freeze in 5 weeks, so hurry up!

Wednesday, 14 June 2017

Tutorial: Home-made OMEMO client

vanitasvitae's blog » englisch | 22:19, Wednesday, 14 June 2017

The German interior minister conference recently decided that the best way to fight terrorism is to pass new laws that allow the government to demand access to communication from messengers like WhatsApp and co. Very important: messengers like WhatsApp. Will even free software developers see requests to change their messengers to allow government access to communications in the future? If it comes to that, how can we still protect our communications?

The answer could be: build your own messenger. I want to demonstrate how simple it is to create a very basic messenger that allows you to send and receive end-to-end encrypted text messages via XMPP using Smack. We will use Smack's latest new feature, OMEMO support, to create a very simple XMPP-based command line chat application that uses state-of-the-art encryption. I assume that you all know what XMPP is. If not, please read up on it on Wikipedia. Smack is a Java library that makes it easy to use XMPP in an application. OMEMO is basically the Signal protocol for XMPP.

So let's hop straight into it.
In my example, I import Smack as a gradle dependency. That looks like this:

apply plugin: 'java'
apply plugin: 'idea'

repositories {
    maven {
        url ''   // repository URL missing in the original post
    }
}

ext {
    smackVersion = '4.2.1-SNAPSHOT'   // assumed; the version was missing in the original post
}

dependencies {
    compile "org.igniterealtime.smack:smack-java7:$smackVersion"
    compile "org.igniterealtime.smack:smack-omemo-signal:$smackVersion"
    compile "org.igniterealtime.smack:smack-resolver-dnsjava:$smackVersion"
    compile "org.igniterealtime.smack:smack-tcp:$smackVersion"
}

// Pack dependencies into the jar
jar {
    from(configurations.compile.collect { it.isDirectory() ? it : zipTree(it) }) {
        exclude "META-INF/*.SF"
        exclude "META-INF/LICENSE"
    }
    manifest {
        attributes 'Main-Class': 'Messenger'
    }
}
Now we can write the main function of our client. We need to create a connection to a server and log in to go online. Let's assume that the user passes username and password as arguments to the main function. For the sake of simplicity we'll not catch any errors, like a wrong number of parameters. We also want to get notified of incoming chat messages and to be able to send messages to others.

public class Messenger {

    private AbstractXMPPConnection connection;
    private static Scanner scanner;

    public static void main(String[] args) throws Exception {
        String username = args[0];
        String password = args[1];
        Messenger messenger = new Messenger(username, password);

        scanner = new Scanner(System.in);
        while (true) {
            String input = scanner.nextLine();

            if (input.startsWith("/quit")) {
                break;
            }
            if (input.isEmpty()) {
                continue;
            }
            messenger.handleInput(input);
        }
        messenger.connection.disconnect();
    }

    public Messenger(String username, String password) throws Exception {
        connection = new XMPPTCPConnection(username, password);
        connection = connection.connect();
        connection.login();

        // Print incoming chat messages to the console
        ChatManager.getInstanceFor(connection).addIncomingListener(
                (from, message, chat) -> System.out.println(from.asBareJid() + ": " + message.getBody())
        );

        System.out.println("Logged in");
    }

    private void handleInput(String input) throws Exception {
        String[] split = input.split(" ");
        String command = split[0];

        switch (command) {
            case "/say":
                if (split.length > 2) {
                    String recipient = split[1];
                    EntityBareJid recipientJid = JidCreate.entityBareFrom(recipient);

                    StringBuilder message = new StringBuilder();
                    for (int i = 2; i < split.length; i++) message.append(split[i]).append(' ');

                    ChatManager.getInstanceFor(connection)
                            .chatWith(recipientJid)
                            .send(message.toString().trim());
                }
                break;
        }
    }
}
If we now compile this code and execute it using the credentials of an existing account, we can already log in and start chatting with others using the /say command (e.g. /say Hi Bob!). But our communications are unencrypted right now (aside from TLS transport encryption). Let's change that next. We want to use OMEMO encryption to secure our messages, so we utilize Smack's new OmemoManager, which handles OMEMO encryption. For that purpose we need a new private variable which will hold our OmemoManager. We also make some changes to the constructor.

private OmemoManager omemoManager;

public Messenger(String username, String password) throws Exception {
    connection = new XMPPTCPConnection(username, password);
    connection = connection.connect();
    connection.login();

    //additions begin here
    //path where keys get stored
    OmemoConfiguration.setFileBasedOmemoStoreDefaultPath(new File("path"));
    omemoManager = OmemoManager.getInstanceFor(connection);

    //Listener for incoming OMEMO messages
    omemoManager.addOmemoMessageListener(new OmemoMessageListener() {
        @Override
        public void onOmemoMessageReceived(String decryptedBody, Message encryptedMessage,
                        Message wrappingMessage, OmemoMessageInformation omemoInformation) {
            System.out.println("(O) " + encryptedMessage.getFrom() + ": " + decryptedBody);
        }

        @Override
        public void onOmemoKeyTransportReceived(CipherAndAuthTag cipherAndAuthTag, Message message,
                        Message wrappingMessage, OmemoMessageInformation omemoInformation) {
            //Not needed
        }
    });

    //Listener for incoming plaintext messages
    ChatManager.getInstanceFor(connection).addIncomingListener(
            (from, message, chat) -> System.out.println(from.asBareJid() + ": " + message.getBody())
    );
    //additions end here.
    System.out.println("Logged in");
}

We must also add two new commands that are needed to control OMEMO. /omemo is similar to /say, but will encrypt the message via OMEMO. /trust is used to trust an identity. Before you can send a message, you have to decide whether you want to trust or distrust an identity. When you call the trust command, the client will present you with a fingerprint, which you have to compare with your chat partner's. Only if the fingerprints match should you trust it. We add the following two cases to handleInput's switch statement:

case "/omemo":
    if (split.length > 2) {
        String recipient = split[1];
        EntityBareJid recipientJid = JidCreate.entityBareFrom(recipient);

        StringBuilder message = new StringBuilder();
        for (int i = 2; i < split.length; i++) {
            if (i > 2) message.append(" ");
            message.append(split[i]);
        }

        Message encrypted = null;
        try {
            encrypted = OmemoManager.getInstanceFor(connection).encrypt(recipientJid, message.toString());
        }
        //In case of undecided devices, the user must make a trust decision first
        catch (UndecidedOmemoIdentityException e) {
            System.out.println("Undecided Identities: ");
            for (OmemoDevice device : e.getUntrustedDevices()) {
                System.out.println(device.toString());
            }
        }
        //In case we cannot establish a session with some devices, encrypt for the rest
        catch (CannotEstablishOmemoSessionException e) {
            encrypted = omemoManager.encryptForExistingSessions(e, message.toString());
        }

        //send the encrypted message
        if (encrypted != null) {
            ChatManager.getInstanceFor(connection).chatWith(recipientJid).send(encrypted);
        }
    }
    break;
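The recipient/message parsing in the /omemo case can be tried out in isolation. Below is a minimal, Smack-free sketch of that logic; the class and method names are hypothetical and exist purely for illustration:

```java
// Hypothetical, dependency-free sketch of the "/omemo <recipient> <words...>"
// parsing used above: split off the recipient, re-join the message with spaces.
public class OmemoCommandParsing {

    // returns {recipient, messageBody}, or null if input is not a valid /omemo command
    static String[] parseOmemoCommand(String input) {
        String[] split = input.split(" ");
        if (split.length <= 2 || !split[0].equals("/omemo")) {
            return null;
        }
        StringBuilder message = new StringBuilder();
        for (int i = 2; i < split.length; i++) {
            if (i > 2) message.append(" ");
            message.append(split[i]);
        }
        return new String[] { split[1], message.toString() };
    }

    public static void main(String[] args) {
        String[] parsed = parseOmemoCommand("/omemo bob@example.org Hi Bob!");
        System.out.println(parsed[0]); // bob@example.org
        System.out.println(parsed[1]); // Hi Bob!
    }
}
```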

case "/trust":
    if (split.length == 2) {
        BareJid contact = JidCreate.bareFrom(split[1]);
        //fetch the fingerprints of the contact's active devices
        HashMap<OmemoDevice, OmemoFingerprint> fingerprints =
                omemoManager.getActiveFingerprints(contact);

        //Let user decide
        for (OmemoDevice d : fingerprints.keySet()) {
            System.out.println("Fingerprint: " + fingerprints.get(d).toString());
            System.out.println("Trust (1), or distrust (2)?");
            int decision = Integer.parseInt(scanner.nextLine());
            if (decision == 1) {
                omemoManager.trustOmemoIdentity(d, fingerprints.get(d));
            } else {
                omemoManager.distrustOmemoIdentity(d, fingerprints.get(d));
            }
        }
    }
    break;
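Comparing a long fingerprint character by character with your chat partner is error-prone, which is why clients usually display it in small groups. A hypothetical, dependency-free sketch of such grouping (Smack's OmemoFingerprint ships similar display helpers):

```java
// Hypothetical helper that renders a fingerprint in blocks of 8 characters,
// making manual comparison with a chat partner less error-prone.
public class FingerprintDisplay {

    static String blocksOf8(String fingerprint) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fingerprint.length(); i += 8) {
            if (i > 0) sb.append(" ");
            sb.append(fingerprint, i, Math.min(i + 8, fingerprint.length()));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(blocksOf8("0123456789abcdef0123")); // 01234567 89abcdef 0123
    }
}
```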

Now we can trust a contact's OMEMO identities using /trust and send them encrypted messages using /omemo (e.g. /omemo bob@example.org Hi Bob!). When we receive OMEMO messages, they are indicated by a “(O)” in front of the sender.
If we want to get really fancy, we can let our messenger display whether received messages were encrypted with a trusted key. Unfortunately, there is no convenience method available for this yet, so we have to use a small, dirty workaround. We modify the onOmemoMessageReceived method of the OmemoMessageListener like this:

public void onOmemoMessageReceived(String decryptedBody, Message encryptedMessage,
            Message wrappingMessage, OmemoMessageInformation omemoInformation) {
    //Get the identityKey of the sender
    IdentityKey senderKey = (IdentityKey) omemoInformation.getSenderIdentityKey().getIdentityKey();
    OmemoService<?,IdentityKey,?,?,?,?,?,?,?> service = (OmemoService<?,IdentityKey,?,?,?,?,?,?,?>) OmemoService.getInstance();

    //Get the fingerprint of the key
    OmemoFingerprint fingerprint = service.getOmemoStoreBackend().keyUtil().getFingerprint(senderKey);
    //Look up the trust status
    boolean trusted = omemoManager.isTrustedOmemoIdentity(omemoInformation.getSenderDevice(), fingerprint);

    System.out.println("(O) " + (trusted ? "T" : "D") + " " + encryptedMessage.getFrom() + ": " + decryptedBody);
}

Now when we receive a message from a trusted identity, there will be a “T” before the message; otherwise there is a “D”.
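The output convention used throughout this tutorial can be captured in one tiny, Smack-free helper (hypothetical, included only to document the format):

```java
// Hypothetical helper documenting the output convention of this tutorial:
// "(O)" marks an OMEMO-encrypted message, "T"/"D" a trusted/distrusted sender key.
public class MessageIndicator {

    static String format(boolean trusted, String from, String decryptedBody) {
        return "(O) " + (trusted ? "T" : "D") + " " + from + ": " + decryptedBody;
    }

    public static void main(String[] args) {
        System.out.println(format(true, "bob@example.org", "Hi!")); // (O) T bob@example.org: Hi!
    }
}
```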
I hope I could give a brief introduction to Smack's OMEMO support. You now have a basic chat client that is capable of exchanging multi-end-to-multi-end encrypted messages with other XMPP clients that support OMEMO. All of that took fewer than 200 lines of code! Now it's up to you to add additional features like support for message carbons, offline messages and so on. Spoiler: it's not hard at all :)
You can find the source code of this tutorial in the FSFE’s git repository.

When the government is unable or simply not willing to preserve your privacy, you’ll have to do it yourself.

Happy Hacking :)

Croissants, Qatar and a Food Computer Meetup in Zurich

fsfe | 19:53, Wednesday, 14 June 2017

In my last blog, I described the plan to hold a meeting in Zurich about the OpenAg Food Computer.

The Meetup page has been gathering momentum but we are still well within the capacity of the room and catering budget so if you are in Zurich, please join us.

Thanks to our supporters

The meeting now has sponsorship from three organizations: Project 21 at ETH, the Debian Project and the Free Software Foundation Europe.

Sponsorship funds help with travel expenses and refreshments.

Food is always in the news

In my previous blog, I referred to a number of food supply problems that have occurred recently. There have been more in the news this week: a potential croissant shortage in France due to the rising cost of butter and Qatar's efforts to air-lift 4,000 cows from the US and Australia, among other things, due to the Saudi Arabia embargo.

The food computer isn't an immediate solution to these problems but it appears to be a helpful step in the right direction.

Technoshamanism and Wasted Ontologies

agger's Free Software blog | 14:11, Wednesday, 14 June 2017

Interview with Fabiane M. Borges published on May 21, 20171

By Bia Martins and Reynaldo Carvalho – translated by Carsten Agger

Fabiane M. Borges, writer and researcher

Also available in PDF format

In a state of permanent warfare and fierce disputes over visions of the future, technoshamanism emerges as a form of resistance and as an endeavour to influence contemporary thinking, technological production, scientific questions, and everyday practices. This is how the Brazilian Ph.D. in clinical psychology, researcher and essayist Fabiane M. Borges presents this international network of collaboration, which unites academics, activists, indigenous people and many others interested in the search for ideas and practices that go beyond the instrumental logic of capital. In this interview with Em Rede, she elaborates her reflections on technoshamanism as a platform for producing knowledge and points to some of the experiences that were made in this context.

At first, technology and shamanism seem like contradictory notions or at least difficult to combine. The first refers to the instrumental rationalism that underlies an unstoppable developmentalist project. The second makes you think of indigenous worldviews, healing rituals and altered states of consciousness. What is the result of this combination?

In a text that I wrote for the magazine Geni2 in 2015, I said this: that techno + shamanism has three quite evident meanings:

  1. The technology of shamanism (shamanism seen as a technology for the production of knowledge);
  2. The shamanism of technology (the pursuit of shamanic powers through the use of technology);
  3. The combination of these two fields of knowledge historically obstructed by the Church and later by science, especially in the transition from the Middle Ages to the Renaissance.

Each of these meanings unfolds into many others, but here is an attempt to discuss each one:

1) When we perceive shamanism not as tribal religions or as the beliefs of archaic people (as is still very common) but as a technology of knowledge production, we radically change the perception of its meaning. Studies of, e.g., ayahuasca show that intensified states of consciousness produce a kind of experience which reshapes the state of the body, broadening the spectrum of sensation, affection, and perception. These “plants of power” are probably that which brings us closest to the “magical thinking” of native communities and consequently to the shamanic consciousness – that is, to that alternative ontology, as Eduardo Viveiros de Castro alerts us when he refers to the Amerindian ontology in his book Cannibal Metaphysics3, or Davi Kopenawa with his shamanic education with yakoana, as described in The Falling Sky4. It is obviously not only through plants of power that we can access this ontology, but they are a portal which draws us singularly near this way of seeing the world, life itself. Here, we should consider the hypotheses of Jeremy Narby in his The Cosmic Serpent: DNA and the Origins of Knowledge, where he explains that the indigenous knowledge of herbs, roots and medicine arises partly from dreams and from the effects of entheogens.

When I say that shamanism is a technology of knowledge production, it is because it has its own methods for constructing narratives, mythologies, medicine and healing as well as for collecting data and creating artifacts and modes of existence, among other things. So this is neither ancient history nor obsolete – it lives on, pervading our technological and mass media controlled societies and becoming gradually more appreciated, especially since the 1960s, when ecological movements, contact with traditional communities and ways of life as well as with psychoactive substances all became popular, sometimes because of the struggles of these communities and sometimes because of an increased interest in mainstream society. A question arose: If we were to recuperate these wasted ontologies with the help of these surviving communities and of our own ruins of narratives and experiences, would we not be broadening the spectrum of technology itself to other issues and questions?

2) The shamanism of technology. It is said that such theories as parallel universes, string theory and quantum physics, among others, bring us closer to the shamanic ontology than to the theological/capitalist ontology which guides current technological production. But although this current technology is geared towards war, pervasive control and towards over-exploitation of human, terrestrial and extra-terrestrial resources, we still possess a speculative, curious and procedural technology which seeks to construct hypotheses and open interpretations which are not necessarily committed to the logic of capital (this is the meaning of the free software, DIY and open source movements in the late 20th and early 21st century).

We are very interested in this speculative technology, since in some ways it represents a link to the lost ancestral knowledge. This leads us directly to point 3) which is the conjunction of technology with shamanism. And here I am thinking of an archeology or anarcheology, since in the search for a historical connection between the two, many things may also be freely invented (hyperstition). As I have explained in other texts, such as the Seminal Thoughts for a Possible Technoshamanism or Ancestrofuturism – Free Cosmogony – Rituals DIY, there was a Catholic theological effort against these ancestral knowledges, a historical inhibition that became more evident during the transition from the Middle Ages to the Renaissance with its inquisitions, bonfires, prisons, torture and demands for retraction. The technology which was originally a part of popular tradition and needs passed through a purification, a monotheist Christian refinement, and adhered to these precepts in order to survive.

In his book La comunidad de los espectros5, Fabián Ludueña Romandini discusses this link between science and Catholicism, culminating in a science that was structurally oriented towards becoming God, hence its tendency to omnipresence, omnipotence and omniscience. Its link to capital is widely discussed by Silvia Federici in her book Caliban and the Witch6, who states that the massacre against witches, healers, sorcerers, heretics and all who did not conform to the precepts of the church was performed in order to clear the way for the introduction of industrial society and capitalism. So two things must be taken into account here: first, that there has been a violent decimation of ancestral knowledge throughout Europe and its colonial extensions and secondly, that the relationship between science/technology and the wasted ontologies was sundered in favor of a Christian theological metaphysics.

Faced with this, techno + shamanism is an articulation which tries to consider this historical trauma, these lost yet not annihilated leftovers, and to recover (and reinvent) points of connection between technology and wasted ontologies, which in our case we call shamanism since it represents something preceding the construction of the monotheisms and because it is more connected to the processes of planet Earth, at least according to the readings that interest us. But there are several other networks and groups that use similar terms and allow other readings such as techno + magic, cyber + spirituality, techno + animism and gnoise (gnosis + noise), among others, all talking about more or less the same issues.

The result of this mixture is improbable. It functions as a resistance, an awakening, an attempt to influence contemporary thinking, technological practices, scientific questions as well as everyday practices. These are tension vectors that drive a change in the modes of existence and of relation to the Earth and the Cosmos, applied to the point where people are currently, causing them to associate with other communities with similar aspirations or desiring to expand their knowledge. These changes are gradually taking shape, whether with clay or silicon technology. But the thing is crazy, the process is slow and the enemy is enormous. Given the level of political contention that we are currently experiencing in Brazil, associations and partnerships with traditional communities, be they indigenous, afro-Brazilian, Roma, aboriginal or activist settlements (the MST7 and its mystique), seem to make perfect sense. It is a political renewal mixed with ancestorfuturist worldviews.

You’ve pointed out that conceptually technoshamanism functions as a utopian, dystopian and entropic network of collaboration. What does this mean in practice?

Fundamentally, we find ourselves in a state of constant war, a fierce dispute between different visions of the future, between social and political ontologies and between nature and technology. In this sense, technoshamanism manifests itself as yet another contemporary network which tries to analyze, position itself with respect to and intervene in this context. It is configured as a utopian network because it harbors visionary germs of liberty, autonomy, equality of gender, ethnicity, class and people and of balance between the environment and society that have hitherto characterized revolutionary movements. It is dystopian because at the same time it includes a nihilistic and depressive vision which sees no way out of capitalism, is disillusioned by neoliberalism and feels itself trapped by the project of total, global control launched by the world’s owners. It sees a nebulous future without freedom, with all of nature destroyed, more competition and poverty, privation and social oppression. And it is entropic because it inhabits this paradoxical set of forces and maintains an improbable noise – its perpetual noisecracy, its state of disorganization and insecurity is continuous and is constantly recombining itself. Its improbability is its dynamism. It is within this regime of utopia, dystopia and entropy that it promotes its ideas and practices, which are sometimes convergent and sometimes divergent.

In practice, this manifests itself in individual and collective projects, be they virtual or face-to-face and in the tendencies that are generated from these. Nobody is a network, people are in it from time to time according to necessities, desires, possibilities, etc.

This network’s meetings take place in different countries, mainly in South America and Europe. Can you give some examples of experiences and knowledge which were transferred between these territories?

Some examples: Tech people who come from the European countries to the technoshamanism festivals and return doing permaculture and uniting with groups in their own countries in order to create collective rituals very close to the indigenous ones or collective mobilization for construction, inspired by the indigenous mutirão. Installation of agroforestry in a basically extractivist indigenous territory organized by foreigners or non-indigenous Brazilians working together with indigenous people. The implementation of an intranet system (peer-to-peer network) within indigenous territory (Baobáxia). Confluence of various types of healing practices in healing tents created during encounters and festivals, ranging from indigenous to oriental practices, from afro-Brazilian to electronic rituals, from Buddhist meditation to the herb bath of Brazilian healers, all of this creating generative spontaneous states where knowledge is exchanged and is subsequently transferred to different places or countries. Indigenous and non-indigenous bioconstructors' knowledge of adobe, converging in collective construction work in MST's squatted lands (this project is for the next steps). Artistic media practices, performance, live cinema, projection, music, and so on, that are passed on to groups that know nothing about this. In the end, technoshamanism is an immersive and experiential platform for exchanging knowledge. All of this is very much derived from the experiences of other networks and movements such as tactical media, digital liberty, homeless movements, submediology, metareciclagem, LGBTQ, Bricolabs, and many others. In the technoshamanism book, published in 2016, there are several practices that can serve as a reference.

Technoshamanism arose from networks linked to collaborative movements such as Free Software and Do It Yourself with the same demands for freedom and autonomy in relation to science and technology. To what extent has it proposed new interventions or new kinds of production in these fields? Can you give an example?

First, it is important to say that these movements of free software and DIY have changed. They have been mixed up with the neoliberal program, whether we're talking about corporate software or about the makers, even though both movements remain active and are still spaces of invention. In the encounters and festivals, we go as far as possible considering our precariousness and lack of dedicated funding or support from economically stronger institutions; we rely mainly on the knowledge of the network's participants, which comes into action in those places. I also know of cases where the festivals inspired the formation of groups of people who returned to their cities and continued to do work related to technological issues, whether in the countryside, in computer technology, or in art. Technoshamanism serves to inspire and perhaps empower projects that already function, endorsing and exciting them.

I think that a fairly representative example is the agroforest, the Baobáxia system and the web radio Aratu that we implemented with the Pataxó in the Pará village. It is an exchange and simultaneously a resistance that points to the question of collaboration and autonomy, remembering that all the processes of this planet are interdependent and that autonomy is really a path, an ideal which only works pragmatically and to the extent that it's possible to practice it. So we're crawling in that direction. There are networks and processes that are much further advanced.

What we'd like to see is the Pataxó village Pará (home of the II International Festival of Technoshamanism), to take one example, with food autonomy and exuberant agroforests and wellsprings, with media and technological autonomy and very soon with autonomous energy. We'd like to see that not just for the Pataxó, but for all the groups in the network (at least). But that depends a lot on time, investment and financing, because these things may seem cheap, but they aren't. We should remember that corporations, entrepreneurs and land-owners are concentrating their forces on these indigenous villages and encouraging projects that go totally against all of this, that is, applying pressure in order to take their land, incorporate them in the corporate productive system and turn them into low-paid workers, etc.

In May 2017 we met with the Terra Vista Settlement in Arataca (Bahia, Brazil). They invited the leaders of the Pataxó village to become part of the Web of Peoples8 which has this exact project of technological and alimentary autonomy and I see this as a kind of continuation of the proposals which were generated in community meetings in the Pará village during the preparations for the II International Festival of Technoshamanism. Everything depends on an insistent and frequent change in the more structural strata of desire. And when we understand that TV channels like the Globo network reach all these territories, we see the necessity of opening other channels of information and education.

Do you believe that insurgent knowledge and anti-hegemonic epistemologies should gradually take up more space in the universities or is it better for them to remain in the margin?

In a conversation with Joelson, leader of the MST in the Terra Vista settlement, he gave the following hint, which was decisive for me: “Technoshamanism is neither the beginning nor the end, it is a medium.” His suggestion is that as a medium, technoshamanism possesses a space of articulation, which rather than answering questions of genesis and purpose functions as a space of interlocution, for making connections, uniting focal points, leveraging movements, expanding concepts and practices concerning itself and other movements – that is, it plays in the middle of the field and facilitates processes.

As yet another network in the “middle”, it negotiates sometimes within institutions and sometimes outside them, sometimes inside academia and sometimes outside it. Since it consists of people from the most diverse areas, it manifests itself in the day-to-day life of its members. Some work in academia, some in healing, others in a pizzeria. That is, the network is everywhere its participants are. I particularly like it when we do the festivals autonomously, deciding what to do and how to do it with the people who invite us, and we don't have to do favours or do anything in return for the institutions. But this is not to say that it will always be like that. In fact, the expenses of those who organize the meetings are large and unsustainable. Sometimes the network will be more independent, sometimes more dependent. What it can't do is stagnate because of the lack of possibilities. Crowdfunding has been an interesting way out, but it's not enough. It's sometimes necessary to form partnerships with organizations such as universities so the thing can continue moving in a more consistent and prolonged form, because it's difficult to rely on people's good will alone – projects stagnate because they lack the resources.


4 Davi Kopenawa and Bruce Albert, The Falling Sky, Belknap Press (2013).

5 Fabián Ludueña, La comunidad de los espectros: Antropotecnia, Mino y Davila (2010).

6 Silvia Federici, Caliban and the Witch: Women, the Body and Primitive Accumulation. Brooklyn, NY: Autonomedia (2004).

7 MST, the “landless worker’s movement” is a social movement in Brazil that fights for workers’ access to land through demands for land reform and direct actions such as establishing settlements on occupied land.
