Planet Fellowship (en)

Thursday, 28 July 2016

kvm virtualization on a liberated X200, part 1

Elena ``of Valhalla'' | 16:08, Thursday, 28 July 2016

kvm virtualization on a liberated X200, part 1

As the libreboot website warns, there are issues with virtualization on the X200 without updated microcode.

Virtualization is something that I use, and I have a number of VMs on that laptop, managed with libvirt; since it has microcode version 1067a, I decided to try and see whether I was lucky and virtualization worked anyway.

The result is that the existing machines no longer start: the kernel loads, then it crashes and the VM reboots. I don't remember why, but I tried to start a Debian installer CD (ISO) I had around, and that one worked.

So, I decided to investigate a bit more: apparently a new installation done from that ISO (debian-8.3.0-amd64-i386-netinst.iso) boots and works with no problem, while my (older, I suspect) installations don't. I tried to boot one of the older VMs with that image in recovery mode, tried to chroot into the original root and got "failed to run command '/bin/bash': Exec format error".

Since that shell lacked even the file command, I then tried to start a live image and chose the lightweight debian-live-8.0.0-amd64-standard.iso: that one failed to start in the same way as the existing images.

Another try with debian-live-8.5.0-i386-lxde-desktop.iso confirmed that apparently Debian >= 8.3 works while Debian 8.0 doesn't (I don't have ISOs for versions 8.1 and 8.2 to properly bisect the issue).

I've skimmed the release notes for 8.3 and noticed that there was an update to the intel-microcode package, but AFAIK the installer doesn't include anything from non-free, and I'm sure that non-free wasn't enabled on the VMs.

My next attempt (thanks to tosky on #debian-it for suggesting this obvious solution that I was missing :) ) was to run one of the VMs with plain qemu instead of kvm and bring it up to date: the upgrade was successful and included the packages in this screenshot, but on reboot under kvm it still fails as before.


Right now, I think I will just recreate the images I need from scratch, but when I have time I'd like to investigate the issue a bit more, so hopefully there will be a part 2 to this article.

Wednesday, 27 July 2016

Retroactively replacing git subtree with git submodule

emergency exit | 13:38, Wednesday, 27 July 2016

Combining multiple git repositories today is rather common, although the means of doing so are far from perfect. Usually people use git submodule or git subtree. If you have used neither or are happy with either method this post is completely irrelevant to you.

But maybe you decided to use git subtree and, like me, are rather unhappy with the choice. I will not discuss the upsides and downsides of either approach; I assume you want to change this, and that you wish you could go back in time and make it right for all subsequent development. So this is what we are going to do :)

This was an exercise in git for me, so I am sure more experienced people may know shortcuts, but since it took me quite some time to figure out, I hope it is going to be helpful to others. It is, however, strongly recommended to read the shell scripts and possibly adapt them to your situation. I will not be held responsible for any problems!

It should be noted that by definition any changes in your git history will break current clones and forks of your repository so only do this if you are very sure what you are doing. And be nice and inform your users about this ASAP.

Preconditions

  • you have a git subtree, e.g. of a library, inside your repo and you want to replace it with git submodule
  • going back to a commit in your history you should still get the same version of the library that you had previously included with git subtree
  • i.e. every subtree update should be replaced with a submodule update of the same contents and also the same timestamp (because this is time travel, right?)
  • all tags should be preserved / rewritten to their corresponding new IDs
  • when updating your git subtree previously you used the “squash” feature (if you didn’t do this, it is going to be a lot harder)
  • you have a mostly straight branch history and are ok with losing merge commits; all branches other than master will be rebased on master

When I say “preserve timestamp” it is important to clarify what this means: git has two notions of “timestamp”, the “author date” and the “committer date”. The author date is the date when the commit was originally created. It is apparently entirely without relevance for git branches and their structure. The committer date, on the other hand, is the time when the commit was included in the branch. This is often the same, but when doing operations on a branch like cherry-picking or rebasing it is overwritten with the current date. Unfortunately git interfaces like GitHub don’t use either date consistently, so if you want a consistent appearance you need to also preserve the committer dates. Editing these, however, can have adverse effects, potentially breaking branches or tags, because the correct chronological order of committer dates is important. We will take extra precautions below, but if your branches somehow get mangled, or a commit appears out of place in gitg, then you should double-check the committer dates.
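To see the difference yourself, here is a quick sketch (run inside any git repository; this is plain git log, nothing specific to this workflow):

```shell
# Show the author date next to the committer date of the last commit.
# After a cherry-pick or rebase the committer date is refreshed to "now",
# while the author date stays as it was.
git log -1 --format='author:    %ad%ncommitter: %cd' --date=iso
```

In raw form (%at / %ct) these are plain epoch values, which is what the scripts in this post compare.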


Make a local clone of the repository and after that remove all your “remotes” with git remote remove <name>. This ensures that you don’t accidentally push any changes…
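Since the remote names differ from clone to clone, a small loop (a sketch) drops whatever remotes exist:

```shell
# Remove every configured remote so nothing can be pushed by accident.
for REMOTE in $(git remote); do
    git remote remove "$REMOTE"
done
git remote   # prints nothing once all remotes are gone
```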

I assume that you have checked out the master branch and that you have exported the following environment variables:

CLONE="$HOME/devel/myapp"                  # the directory of your clone (a quoted ~ would not expand)
SUBDIR="include/mylib"                     # the subdir of the subtree/submodule relative to CLONE
SUBNAME="mylib"                            # name of the submodule (can be anything)
SUBREPO="git://" # submodules' repo

ATTENTION: don’t screw up any of the above paths since we might be calling rm -rf on them.

If you are uncomfortable with doing search and replace operations on the command line, you can set your editor to something easy like kwrite:

export EDITOR=kwrite

Replacing the subtree and subtree updates

Since we are going to delete all the merge commits and also the commits that represent changes to the subtree, we need to remember at which places we later re-insert commits. To do this, run the following:

git log --format='%at:::%an:::%ae:::%s' --no-merges | awk -F ':::' '
(PRINT == 1) && !($4 ~ /^Squashed/) {
    printf    $1 ":::" $2    ":::" $3    ":::" $4   ":::"  # commit that we work on
    printf tTIME ":::" tNAME ":::" tMAIL ":::" tREF "\n"   # commit that we insert
    PRINT = 0
};
(PRINT != 1) && ($4 ~ /^Squashed/) {
    PRINT = 1; tTIME = $1; tNAME = $2; tMAIL = $3
    tREF = substr($0, length($0) - 6, length($0)) # cut commit id from subj
}; ' > /tmp/refinserts

What it does is find, for every commit whose subject starts with “Squashed”, the subsequent commit in the log output, and associate with it the time of the subtree update and the subtree’s commit ID, separated with “:::”. We are pairing this information with the subject line and not the commit ID, because the commit IDs in our branch are going to change! It also collapses subsequent updates into one. NOTE that if you have other commits that start with “Squashed” in their subject line but don’t belong to the subtree, instead filter for Squashed '${SUBDIR}' (beware of the quotes!!).
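You can check the pairing logic without touching a real repository by feeding the awk program a few hand-written log lines (the commit data below is made up; newest first, as git log prints them):

```shell
# Fake 'git log' output: a squash-merge commit followed (in log order)
# by the commit that the submodule update should be paired with.
printf '%s\n' \
  '1700000300:::Ann:::ann@example.com:::Squashed include/mylib f00baa1' \
  '1700000200:::Ann:::ann@example.com:::add feature' \
  '1700000100:::Bob:::bob@example.com:::initial commit' |
awk -F ':::' '
(PRINT == 1) && !($4 ~ /^Squashed/) {
    printf $1 ":::" $2 ":::" $3 ":::" $4 ":::"
    printf tTIME ":::" tNAME ":::" tMAIL ":::" tREF "\n"
    PRINT = 0
};
(PRINT != 1) && ($4 ~ /^Squashed/) {
    PRINT = 1; tTIME = $1; tNAME = $2; tMAIL = $3
    tREF = substr($0, length($0) - 6, length($0))  # commit id from the subject
}; '
# → 1700000200:::Ann:::ann@example.com:::add feature:::1700000300:::Ann:::ann@example.com:::f00baa1
```

The “add feature” commit is the one the rebase will later stop at, and the appended fields tell the helper script which submodule revision to insert there.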

Now create a little helper script, e.g. as /tmp/rebasehelper:


TIME_NAME_MAIL_SUBJ=$(git log --format='%at:::%an:::%ae:::%s:::' -1)
# next commit time is last commit time if not overwritten
export GIT_AUTHOR_DATE=$(git log --format='%at' -1)

cp /tmp/refinserts /tmp/refinserts.actual

while grep -q -F "${TIME_NAME_MAIL_SUBJ}" /tmp/refinserts.actual; do

    LINE=$(grep -F "${TIME_NAME_MAIL_SUBJ}" /tmp/refinserts.actual)
    export  GIT_AUTHOR_DATE=$(echo $LINE | awk -F ':::' '{ print $5 }')
    export  GIT_AUTHOR_NAME=$(echo $LINE | awk -F ':::' '{ print $6 }')
    export GIT_AUTHOR_EMAIL=$(echo $LINE | awk -F ':::' '{ print $7 }')
                        REF=$(echo $LINE | awk -F ':::' '{ print $8 }')

    if [ ! -d "${SUBDIR}" ]; then
        echo "** First commit with submodule, initializing..."
        git submodule --quiet add --force --name ${SUBNAME} ${SUBREPO} ${SUBDIR} > /tmp/rebasehelper.log
        [ $? -ne 0 ] && echo "** failed:" && cat /tmp/rebasehelper.log && break

        echo "** done."
    fi

    echo "** Updating submodule..."
    cd "${SUBDIR}"
    git checkout --quiet $REF > /tmp/rebasehelper.log
    [ $? -ne 0 ] && echo "** failed:" && cat /tmp/rebasehelper.log && break

    echo "** Committing changes..."
    cd ${CLONE}
    git commit --quiet -am "[${SUBNAME}] update to $REF" > /tmp/rebasehelper.log
    [ $? -ne 0 ] && echo "** failed:" && cat /tmp/rebasehelper.log && break

    echo "** Continuing rebase."
    rm /tmp/rebasehelper.log
    grep -v -F "${TIME_NAME_MAIL_SUBJ}" /tmp/refinserts.actual > /tmp/refinserts.actual.tmp
    mv /tmp/refinserts.actual.tmp /tmp/refinserts.actual
    git rebase --continue

    # the commit the rebase stopped at now (if any)
    TIME_NAME_MAIL_SUBJ=$(git log --format='%at:::%an:::%ae:::%s:::' -1)
    export GIT_AUTHOR_DATE=$(git log --format='%at' -1)
done

if [ -d "${CLONE}/.git/rebase-merge" ] || [ -d "${CLONE}/.git/rebase-apply" ]; then
    echo "The current rebase step is not related to the subtree-submodule operation or needs manual resolution."
    echo "Try 'git mergetool', followed by 'git rebase --continue' or just the latter."
fi
We will call this later.


Now we actually remove references to the subtree from our history so that future operations create no conflicts:

git filter-branch --tree-filter 'rm -rf '"${SUBDIR}" HEAD

This may take some time. Other sources recommend --index-filter, but that will not work here because the file references in the subtree are not relative to our repository but to SUBDIR. If this command doesn’t actually remove the directory, make sure to run rm -rf "${SUBDIR}".


Now we do the rebase:

git rebase --interactive $(git log --format='%H' | tail -n 1)

Which will open your commit history in the EDITOR that you configured. NOTE that this is in chronological order, not reverse chronological like the git log command.

In the editor you now want to remove all lines that contain “Squashed” and mark the previous commit for editing. This is a multiline regex with substitution, but in kwrite it is very straightforward:

Search:  pick(.*\n).*Squashed.*\n
Replace: e \1

This will only miss subsequent “double” updates which can safely be removed:

Search:  .*Squashed.*\n
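If you would rather avoid a GUI editor, roughly the same two substitutions can be scripted with GNU sed (a sketch on a stand-alone copy of a todo list; the commit ids below are made up, and -z treats the whole file as a single string so the regex can span lines):

```shell
# A sample rebase todo list (hypothetical commits):
printf '%s\n' \
  'pick 1111111 initial commit' \
  'pick 2222222 Squashed include/mylib f00baa1' \
  'pick 3333333 add feature' > /tmp/todo

# Mark the commit before each "Squashed" entry for editing
# and drop the "Squashed" entry itself:
sed -z -E -i 's/pick( [^\n]*\n)[^\n]*Squashed[^\n]*\n/e\1/g' /tmp/todo
# Remove any remaining (double) "Squashed" updates:
sed -i '/Squashed/d' /tmp/todo

cat /tmp/todo
# → e 1111111 initial commit
#   pick 3333333 add feature
```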

Save and close the editor; you will now be at the first commit that you are editing. This is the original point in time where you added your subtree. Source the helper script with . /tmp/rebasehelper. If there were never any conflicts in your tree, the script should run through completely and you are done. It is important to source the script with a leading . because it also sets some environment variables for your manual commits.

However, if you did have conflicts, you will be interrupted to resolve them manually, usually with git mergetool followed by git rebase --continue. Whenever the rebase tells you “Stopped at…”, just call . /tmp/rebasehelper again and keep repeating the last steps until the rebase is finished. If you are doing the whole thing on multiple branches, you might want to run git rerere before every git mergetool; it might save you some merge steps.

Unfortunately the rebase will set all committer dates to the current date (although the “author date” is preserved). There is an option that prevents exactly this, but it cannot be used together with interactivity, because the interactivity introduces commits with a newer author date (which is okay in itself, but committer dates need to be chronological for rebase). We write this little scriptlet to /tmp/redate to help us out:

#!/bin/sh
git filter-branch --force --env-filter '
    LAST_DATE=$(cat /tmp/redate_old);
    GIT_COMMITTER_DATE=$( (echo $LAST_DATE; echo $GIT_AUTHOR_DATE) | awk '"'"'substr($1, 2) > MAX { MAX = substr($1, 2); LINE=$0}; END { print LINE }'"'"');
    echo $GIT_COMMITTER_DATE > /tmp/redate_old;
    export GIT_COMMITTER_DATE' $@

What it does is set the COMMITTER_DATE to the AUTHOR_DATE unless the COMMITTER_DATE would be older than the last COMMITTER_DATE (which is illegal) — in that case it uses the same COMMITTER_DATE as the previous commit. This ensures correct chronology while still having sensible dates in all cases.
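The date comparison at the heart of the scriptlet can be tried in isolation (the raw-format dates below are made up; the line with the newer date wins):

```shell
# Raw git dates look like "@<epoch> <zone>"; strip the leading "@"
# before comparing, and keep whichever line carries the newer date.
(echo '@1700000500 +0000'; echo '@1700000100 +0000') |
awk 'substr($1, 2) > MAX { MAX = substr($1, 2); LINE = $0 }; END { print LINE }'
# → @1700000500 +0000
```

Here the first line wins, so a committer date is never moved backwards behind its predecessor.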

And then we call the script with some pre and post commands:

echo '@0 +0000' > /tmp/redate_old
chmod +x /tmp/redate
/tmp/redate
git log --format='%ad' --date=raw -1 | awk '{ print "@" $1+1 " " $2 }' > /tmp/redate_master

(initialization, the redate run on master, and saving master’s final timestamp+1 for later)

Multiple Branches

For all other affected branches it is now much simpler. First checkout the branch. If your subtree/submodule is large, you might have to delete ${SUBDIR} before that with rm -rf.

Then filter the tree as described above; you might need to add -f to overwrite some backups:

git filter-branch -f --tree-filter 'rm -rf '"${SUBDIR}" HEAD

Followed by a rebase on the already-fixed master branch:

git rebase --interactive master

You should now see all diverging commits, plus all the “Squashed*” commits. If you did make subtree updates on the other branches and you want to retain them, apply the same mechanism as for the master branch. If the subtree changes in the other branch are not important, you can just remove all of them from the file.

Now the committer dates in the part of the branch that are identical to master are correct (because we fixed those earlier), but those in the new part have updated commit times. We can just use our previous scriptlet, but this time we initialize with the master’s last timestamp and we only operate on the commits that are actually new:

cp /tmp/redate_master /tmp/redate_old
/tmp/redate master..HEAD

Voilà, and repeat for the other branches!

Rewiring the tags

Currently all your tags should still be there and also still be valid. Print them to a tmpfile and open them:

git tag -l > /tmp/oldtags
$EDITOR /tmp/oldtags

Review the list of tags and remove all tags that belonged to the submodule or that you don’t want to keep in the new repository from the file.

Now check out master again, then run this nifty script (read the note below first!):



TAGS=$(cat /tmp/oldtags)

for TAG_NAME in $TAGS; do
    TAG_COMMIT=$(git for-each-ref --format="%(objectname)" refs/tags/${TAG_NAME})
    TAG_MESSAGE=$(git for-each-ref --format="%(contents)" refs/tags/${TAG_NAME})

    # the commit that is referenced by the annotated tag in the original branch
    ORIGINAL_COMMIT=$(git log --format='%H' --no-walk ${TAG_COMMIT})
    ORIGINAL_COMMIT_DATE=$(git log --format='%at' --no-walk ${ORIGINAL_COMMIT})
    ORIGINAL_COMMIT_SUBJECT=$(git log --format='%s' --no-walk ${ORIGINAL_COMMIT})
    # the same commit in our rewritten branch
    NEW_COMMIT=$(git log --format='%H:::%at:::%s' | awk -F ':::' '
    $2 == "'"$ORIGINAL_COMMIT_DATE"'" && $3 == "'"$ORIGINAL_COMMIT_SUBJECT"'" { print $1 }')

    # overwrite git environment variables directly:
    export GIT_COMMITTER_DATE=$(git for-each-ref --format="%(taggerdate:rfc)" refs/tags/${TAG_NAME})
    export GIT_COMMITTER_NAME=$(git for-each-ref --format="%(taggername)" refs/tags/${TAG_NAME})
    export GIT_COMMITTER_EMAIL=$(git for-each-ref --format="%(taggeremail)" refs/tags/${TAG_NAME})

    # add the new tag, replacing the old one of the same name
    git tag -a -f -m "${TAG_MESSAGE}" ${TAG_NAME} ${NEW_COMMIT}
done

NOTE that for this to work, all tags must be contained in the master branch (or whichever branch you are currently on). If this is not the case, you need to create individual files with the tags of each branch and run this script repeatedly on the specific file while being on the corresponding branch.

NOTE2: This does assume that the COMMITTER_DATE of the tag will fit in at the place it is being added. I am not sure if there are edge cases where one would have to double-check the committer date similarly to how we do it above…

Since your tags are all fixed, now would be a good time to check out some of them and verify that your software builds, passes its unit tests et cetera. Remember that in contrast to git subtree you need to manually reset your submodule via git submodule update after you check out the tag if you actually want the submodule’s revision that belonged to the tag (which is the whole point of our exercise).


Ok, so the new branches and tags are in place, but the folder is even bigger than before :o Now we get to clean up.

First make sure that absolutely nothing references the old stuff, i.e. delete all tags that you did not change in the previous step with git tag -d. Also make sure that you have no remotes set. If in doubt, open a git GUI like gitk or gitg with the --all parameter and confirm that only your new trees are listed.

Then perform the actual clean-up:

git for-each-ref --format="%(refname)" refs/original/ | xargs -n 1 git update-ref -d
git reflog expire --expire=now --all
git gc --prune=now --aggressive

To double-check call something like du -c . in your directory and look at the output. You should now see that the big directories are all related to the submodule.

Pushing the changes

Finally we will publish the changes. This is the part that is irreversible. You can take extra precautions by forking your upstream repository and pushing to the fork first.

To work on a fork named alicia:

git remote add alicia <url-of-the-fork>

In any case you need to delete all tags on the remote that are no longer in your local repository. If you decided to get rid of some branches locally, also remove them on the remote. You can do this via the command line or GitHub’s interface (or GitLab or whatever). The command line for removing a remote tag is:

git push alicia :tagname

Also backup your releases, i.e. save the release messages and any downloads you added somewhere (I don’t have an automatic way for that).

Then force push, including the updated tags:

git push --force --tags alicia master

Repeat the last step for every branch and look at your results! Do a fresh clone of the remote somewhere to verify that everything is right. GitHub should have rewired your releases to the updated tags, but if something went wrong, you can fix it through the web interface.

That was easy, right? :-)

If you have any ideas how to simplify this, please feel free to comment (FSFE account required) or reply on Twitter.

Sunday, 24 July 2016

One Liberated Laptop

Elena ``of Valhalla'' | 18:35, Sunday, 24 July 2016

One Liberated Laptop


After many days of failed attempts, yesterday @Diego Roversi finally managed to set up SPI on the BeagleBone White¹, and that means that today at our home it was Laptop Liberation Day!

We took the spare X200, opened it, found the point we had reached in the tutorial on installing libreboot on the X200, connected all of the proper cables to the clip³ and did some read tests of the original BIOS.


While the tutorial mentioned a very conservative setting (512 kHz), just for fun we tried to read it at different speeds; all results up to 16384 kHz were equal, with the first failure at 32768 kHz, so we settled on using 8192 kHz.

Then it was time to customize our libreboot image with the right MAC address, and that's when we realized that the sheet of paper where we had written it down the last time had been put in a safe place… somewhere…

Luckily we also had taken a picture, and that was easier to find, so we checked the keyboard map², followed the instructions to customize the image, flashed the chip, partially reassembled the laptop, started it up and… a black screen, some fan noise and nothing else.

We tried to reflash the chip (nothing was changed), tried the us keyboard image, in case it was the better tested one (same results) and reflashed the original bios, just to check that the laptop was still working (it was).

It was lunchtime, so we stopped our attempts. As soon as we started eating, however, we realized that this laptop came with 3GB of RAM, and that surely meant "no matching pairs of RAM", so just after lunch we reflashed the first image, removed one dimm, rebooted and finally saw a gnu-hugging penguin!

We then tried booting some random live USB key we had around (it failed the first time, worked the second and subsequent times with no changes), and then proceeded to install Debian.

Running the installer required some attempts and a bit of duckduckgoing: parsing the isolinux / grub configurations from the libreboot menu didn't work, but in the end it was as easy as going to the command line and running:

linux (usb0)/install.amd/vmlinuz
initrd (usb0)/install.amd/initrd.gz

From there on, it was the usual Debian installation and a well-known environment, and there were no surprises. I've noticed that grub-coreboot is not installed (grub-pc is) and I want to investigate a bit, but rebooting worked out of the box with no issues.

Next step will be liberating my own X200 laptop, and then if you are around the @Gruppo Linux Como area and need a 16 pin clip let us know and we may bring everything to one of the LUG meetings⁴

¹ yes, white, and most of the instructions on the interwebz talk about the black, which is extremely similar to the white… except where it isn't

² wait? there are keyboard maps? doesn't everybody just use the us one regardless of what is printed on the keys? Do I *live* with somebody who doesn't? :D

³ the breadboard in the picture is only there for the power supply, the chip on it is a cheap SPI flash used to test SPI on the bone without risking the laptop :)

⁴ disclaimer: it worked for us. it may not work on *your* laptop. it may brick it. it may invoke a tentacled monster, it may bind your firstborn son to a life of servitude to some supernatural being. Whatever happens, it's not our fault.

Saturday, 23 July 2016

A Change of Direction

David Boddie - Updates (Full Articles) | 15:04, Saturday, 23 July 2016

My focus has shifted away from my Android explorations, at least for the time being. I'll probably keep tinkering with that, just to keep reminding myself how to generate Android applications, but there are other projects that are perhaps more worthy of attention.

Embedded Open Modular Architecture

One of these is the Earth-friendly EOMA68 Computing Devices effort on Crowd Supply. This aims to bring Libre Computing to a new audience by using a modular, sustainable approach to the hardware design while committing to using software that respects the user's freedom and privacy. I'm supporting the campaign by pledging for two desktop systems since I believe that we will only get the systems we want if we are prepared to support efforts like this. I also know that Luke, the developer behind the project, will try his very best to make it succeed.

Images of the first CPU card and its housing from the EOMA68 Crowd Supply campaign page.

Another benefit of using EOMA as the basis for this effort is the possibility that the CPU card can be reused in different machines, and that those machines can have different form factors. A lot of excitement around this project is due to the 3D-printed laptop that serves as the flagship device, but it doesn't take much imagination to realise that a swappable CPU card could be useful for more mundane situations involving workstations and desktops. If you have desktop systems set up in different places and need to continue working when you move between them, you should be able to just take the CPU card with you. I know that some employers like the idea of hot-desking (or whatever it's called now) but that usually involves transporting a heavy laptop around and plugging it into a proprietary docking station in each location. The other advantage is that you could potentially use the card in a laptop when you really need computing on the move, and otherwise plug it into a desktop when you don't.

What Will It Run?

With new hardware comes exciting possibilities. It is interesting to consider what kind of software we might run on the first, ARM-based, EOMA68 CPU card included in the Crowd Supply campaign. There are currently two flavours of GNU/Linux planned for the Allwinner A20 CPU card, Debian and Parabola, but Fedora and Devuan are also in the works. There are many other kinds of operating systems in existence and some of those are available under Free Software licenses. Perhaps some of those could also run on that hardware.

Regardless of which operating systems are bundled with the CPU cards, hopefully others will follow, taking advantage of the standard configuration of the card to fine-tune the software and make the user experience as comfortable and hassle-free as possible. The possibilities of building systems around the EOMA standard may also attract makers and users of devices other than laptops and desktops, such as handheld gaming consoles and mobile phones.

All in all, it's an exciting project to follow. Crowd funding will continue until sometime in August, so there's still time to support the campaign. Even if it's not something you would want to use yourself, chances are that there are people you know who may be interested in one or more of its aims, so be sure to let them know about the campaign.

Thursday, 21 July 2016

Transfer Public Links to Federated Shares

English – Björn Schießle's Weblog | 09:56, Thursday, 21 July 2016

Transform Public Links to Federated Shares

Transform a public link to a federated share

Creating public links and sending them to your friends is a widely used feature of Nextcloud. If the recipient of a public link also has a Nextcloud or ownCloud account he can use the “Add to your Nextcloud” button to mount the content over WebDAV to his server. On a technical level all mounted public links use the same token, the one of the public link, to reference the shared file. This means that as soon as the owner removes the public link all mounts will disappear as well. Additionally, the permissions for public links are limited compared to normal shares, public links can only be shared read-only or read-write. This was the first generation of federated sharing which we introduced back in 2014.

A year later we introduced the possibility to create federated shares directly from the share dialog. This way the owner can control all federated shares individually and use the same permission set as for internal shares. Both from a user perspective and from a technical point of view this led to two different ways to create and handle federated shares. With Nextcloud 10 we finally bring them together.

Improvements for the owner

Public Link Converted to a Federated Share


From Nextcloud 10 on, every mounted link share will be converted to a federated share, as long as the recipient also runs Nextcloud 10 or newer. This means that the owner of the file will see all the users who mounted his public link. He can remove the share for individual users or adjust the permissions. For each share the whole set of permissions can be used, like “edit” and “re-share”, and in the case of folders additionally “create” and “delete”. If the owner removes the original public link or if it expires, all federated shares created by the public link will continue to work. For older installations of Nextcloud and for all ownCloud versions the server will fall back to the old behavior.

Improvements for the user who mounts a public link


After opening a public link the user can convert it to a federated share by adding his Federated Cloud ID or his Nextcloud URL

Users who receive a public link and want to mount it on their own Nextcloud have two options. They can use this feature as before and enter the URL of their Nextcloud in the “Add to your Nextcloud” field. In this case they will be redirected to their Nextcloud, have to log in and confirm the mount request. The owner's Nextcloud will then send the user a federated share which he has to accept. It can happen that the user needs to refresh his browser window to see the notification.
Additionally there is a new and faster way to add a public link to your Nextcloud. Instead of entering the URL in the “Add to your Nextcloud” field, you can directly enter your Federated Cloud ID. This way the owner's Nextcloud will send the federated share directly to you and redirect you to your server. You will see a notification about the new incoming share and can accept it. The user now also benefits from the owner's new possibilities: the owner can give him more fine-grained permissions and, even more important from the user's point of view, he will not lose his mount if the public link gets removed or expires.

Nextcloud 10 introduces another improvement in the federation area: If you re-share a federated share to a third server, a direct connection between the first and the third server will be created now so that the owner of the files can see and control the share. This also improves performance and the potential error rate significantly, avoiding having to go through multiple servers in between.

Wednesday, 20 July 2016

How many mobile phone accounts will be hijacked this summer? - fsfe | 17:48, Wednesday, 20 July 2016

Summer vacations have been getting tougher in recent years. Airlines cut into your precious vacation time with their online check-in procedures and a dozen reminder messages, there is growing concern about airport security and Brexit has already put one large travel firm into liquidation leaving holidaymakers in limbo.

If that wasn't all bad enough, now there is a new threat: while you are relaxing in the sun, scammers fool your phone company into issuing a replacement SIM card or transferring your mobile number to a new provider and then proceed to use it to take over all your email, social media, Paypal and bank accounts. The same scam has been appearing around the globe, from Britain to Australia and everywhere in between. Many of these scams were predicted in my earlier blog SMS logins: an illusion of security (April 2014) but they are only starting to get publicity now as more aspects of our lives are at risk, scammers are ramping up their exploits and phone companies are floundering under the onslaught.

With the vast majority of Internet users struggling to keep their passwords out of the wrong hands, many organizations have started offering their customers the option of receiving two-factor authentication codes on their mobile phone during login. Rather than making people safer, this has simply given scammers an incentive to seize control of telephones, usually by tricking the phone company to issue a replacement SIM or port the number. It also provides a fresh incentive for criminals to steal phones while cybercriminals have been embedding code into many "free" apps to surreptitiously re-route the text messages and gather other data they need for an identity theft sting.

Sadly, telephone networks were never designed for secure transactions. Telecoms experts have made this clear numerous times. Some of the largest scams in the history of financial services exploited phone verification protocols as the weakest link in the chain, including a $150 million heist reminiscent of Ocean's 11.

For phone companies, SMS messaging came as a side-effect of digital communications for mobile handsets. It is less than one percent of their business. SMS authentication is less than one percent of that. Phone companies lose little or nothing when SMS messages are hijacked so there is little incentive for them to secure it. Nonetheless, like insects riding on an elephant, numerous companies have popped up with a business model that involves linking websites to the wholesale telephone network and dressing it up as a "security" solution. These companies are able to make eye-watering profits by "purchasing" text messages for $0.01 and selling them for $0.02 (one hundred percent gross profit), but they also have nothing to lose when SIM cards are hijacked and therefore minimal incentive to take any responsibility.

Companies like Google, Facebook and Twitter have thrown more fuel on the fire by encouraging and sometimes even demanding users provide mobile phone numbers to "prove they are human" or "protect" their accounts. Through these antics, these high profile companies have given a vast percentage of the population a false sense of confidence in codes delivered by mobile phone, yet the real motivation for these companies does not appear to be security at all: they have worked out that the mobile phone number is the holy grail in cross-referencing vast databases of users and customers from different sources for all sorts of creepy purposes. As most of their services don't involve any financial activity, they have little to lose if accounts are compromised and everything to gain by accurately gathering mobile phone numbers from as many users as possible.

Can you escape your mobile phone while on vacation?

Just how hard is it to get a replacement SIM card or transfer/port a user's phone number while they are on vacation? Many phone companies will accept instructions through a web form or a phone call. Scammers need little more than a user's full name, home address and date of birth: vast lists of these private details are circulating on the black market, sourced from social media, data breaches (99% of which are never detected or made public), marketing companies and even the web sites that encourage your friends to send you free online birthday cards.

Every time a company has asked me to use mobile phone authentication so far, I've opted out, and I'll continue to do so. Even if somebody does hijack my phone account while I'm on vacation, the consequences for me are minimal because it will not give them access to any other account or service. Can you and your family members say the same?

What can be done?

  • Opt-out of mobile phone authentication schemes.
  • Never give your mobile phone number to web sites unless there is a real and pressing need for them to call you.
  • Tell firms you don't have a mobile phone or that you share your phone with your family and can't use it for private authentication.
  • If you need to use two-factor authentication, only use technical solutions such as smart cards or security tokens that have been engineered exclusively for computer security. Leave them in a locked drawer or safe while on vacation. Be wary of anybody who insists on SMS and doesn't offer these other options.
  • Rather than seeking to "protect" accounts, simply close some or all social media accounts to reduce your exposure and eliminate the effort of keeping them "secure" and updating "privacy" settings.
  • If your bank provides a relationship manager or other personal contact, this can also provide a higher level of security as they get to know you.
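To illustrate why token- or app-based two-factor authentication does not depend on the phone network at all, here is a minimal sketch of the TOTP algorithm (RFC 6238) that hardware tokens and authenticator apps typically implement. The secret shown is the RFC's published test secret, not from any real service:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    counter = struct.pack(">Q", timestamp // step)   # 30-second time window
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The code is derived from a shared secret and the clock alone:
# no SMS, no phone network, nothing for a SIM-swap scammer to intercept.
print(totp(b"12345678901234567890", int(time.time())))
```

Because the one-time code never travels over the telephone network, hijacking a SIM card gains the attacker nothing.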

See my previous blogs on SMS messaging, security and two-factor authentication, including my earlier blog SMS Logins: an illusion of security.

Friday, 15 July 2016

On carrots and sticks: 5G Manifesto

polina's blog | 12:16, Friday, 15 July 2016

In the beginning of May 2016, FSFE together with 72 organisations supported strong net neutrality rules in the joint letter addressed to the EU telecom regulators. The Body of European Regulators of Electronic Communication (BEREC) is currently negotiating the creation of guidelines to implement the recently adopted EU Regulation 2015/2120 on open internet access.

In the joint letter, we together with other civil society organisations urged BEREC and the national agencies to respect the Regulation’s goal to “ensure the continued functioning of the internet ecosystem as an engine of innovation”, respecting the Charter of Fundamental Rights of the EU.

However, on 7 July the European Commission endorsed and welcomed another point of view, presented by the 17 biggest EU Internet Service Providers (ISPs), who oppose the idea of strong net neutrality rules. In the so-called “5G Manifesto”, the coalition of ISPs states the following:

“we must highlight the danger of restrictive Net Neutrality rules, in the context of 5G technologies, business applications and beyond”

“The EU and Member States must reconcile the need for Open Internet with pragmatic rules that foster innovation. The current Net Neutrality guidelines, as put forward by BEREC, create significant uncertainties around 5G return on investment”


According to the coalition, the Net Neutrality guidelines are “too prescriptive” and as such do not meet the demands of the market and the rapid developments within it. The coalition is calling on the Commission to “take a positive stance on innovation and stick to it” by allowing network discrimination under the term “network slicing”.

EDRi, one of the leading campaigners for Net Neutrality and a co-signer of the aforementioned letter to BEREC, has strongly criticised the “5G Manifesto”, stating that it includes an “absurd threat not to invest in profitable new technologies”.

The Commission is clearly not seeing the real implications of endorsing such policies for innovation, especially in the digital sector. Furthermore, the Manifesto does indeed present the EC with a threat: net neutrality or fast connections, with no middle ground. The Manifesto argues for “network slicing”, justifying the discrimination by appealing to public safety services.

Existing rules on neutrality do allow traffic management in ‘special cases’: Article 3(3) of the EU Regulation 2015/2120 does not preclude internet access services from implementing reasonable traffic management measures that are transparent, non-discriminatory and proportionate, and based on objectively different technical quality-of-service requirements of specific categories of traffic. Article 3(5), meanwhile, governs so-called specialised services (i.e. “other than internet access services”) that ISPs are free to offer. It is difficult to see how these provisions might exclude public safety considerations if they are “objectively” different from a technical-quality perspective or need to be offered outside of open internet access. At the same time, it is easy to see why ISPs would want to achieve that special status by trying to get this exception made as broad as possible.

What BEREC is expected to do is to fill the gaps in the legislation by clarifying the implementation of the law, not to create new rules. What the Commission is expected to do is to “stick” to its existing primary law, including the law on open internet access and the protection of fundamental rights and freedoms. The latter includes the freedom to conduct business, but it does not include the right to maximise one's profits at the expense of others.


What do telcos promise in return? They promise to invest in 5G. Such a promise might be alluring to the Commission, which calls 5G “the most critical building block of the digital society”. The argument that net neutrality slows down the internet is not a new one, and the 5G Manifesto might have hit the Commission’s tender spot. What is necessary to acknowledge is that the internet has operated on the basis of openness since its inception, and all legislators need to do is safeguard that openness in order to, inter alia, finally achieve the desired 5G. The internet won’t stop evolving because some service providers want to slice the cake according to their needs.

Net neutrality and an open internet are not a new formula created by legislators in Brussels: they are the basic, fundamental qualities of the internet that need to be preserved to secure further development and future innovation. In conclusion, the EU will only need one “stick” to deliver carrots to everyone: to stick to supporting an open internet for everyone.

The image is licensed under CC BY 3.0 US, Attribution: Luis Prado, from The Noun Project

Tuesday, 12 July 2016

Open Data and OASA

nikos.roussos - opensource | 01:51, Tuesday, 12 July 2016

OASA map

As a regular user of Athens public transport, I often visit the OASA website to look up the available information, especially when I want to use a line I am not familiar with, such as line 227. That link points to the Internet Archive's Wayback Machine, because the feature has since disappeared from OASA's website and been replaced by a referral to Google Transit.

Google Transit is a sub-project of Google Maps; the service is offered free of charge, but it remains a commercial service of a for-profit company, with specific terms of use for both the service and the data. Like all Google services, it operates as an advertising delivery platform.

For many years now I have consciously avoided Google Maps and used OpenStreetMap (and applications based on it) instead, for many reasons; chiefly the same reasons I prefer to read an article on Wikipedia rather than Britannica. I therefore find it unacceptable for a (still) public organization to push me towards a commercial service in order to access data I have already paid to produce. Today I sent the following email to OASA:


Over the last few months the information (stops, routes, maps) for all bus and trolleybus lines has disappeared from your website. Instead, the relevant page refers visitors to a commercial service (Google Transit).

As a citizen, I would like to know:

  1. How can I find this information through your website, without having to use commercial services (Google Maps, Here Maps, etc.)?

  2. Your terms-of-use page states that commercial use of all the data (maps, diagrams, lines, routes, etc.) is not permitted. Which data are you referring to, if the data are not available through your website in the first place?

  3. The data are offered freely under a "Creative Commons: Attribution" licence, which does permit commercial use. Which of the two actually applies?

  4. If commercial use of the data is indeed not permitted, where is your agreement with Google published, and what is the financial benefit to the organization?

I don't know whether I will receive a substantive answer, or any answer at all, but the situation remains infuriating; especially considering that OASA had operated such a service since 2011 and chose to shut it down, while at the same time promoting a mobile application that largely implements the services missing from its website, disregarding citizens who do not own a smartphone.

My view is simple. Data and software produced and implemented with public money should also be public property. This means that citizens should not be forced to use commercial services to access public-service data, nor should they have to go through particular companies' app stores to install a public service's application on their phone. For the same reasons, these applications should be offered as Free Software with open source code, since public money was spent on them.

In this case OASA has trampled on any notion of a "public" good by giving preferential treatment to one company (Google), offering it free data and advertising while simultaneously prohibiting commercial exploitation of the data by its competitors.

Comments and reactions on Diaspora, Twitter, Facebook

Monday, 11 July 2016

EC: Free Software to enhance cybersecurity

polina's blog | 15:38, Monday, 11 July 2016

On 5 July, the European Commission signed a contractual arrangement on a public-private partnership (PPP) for cybersecurity industrial research and innovation between the European Union and the newly-established European Cyber Security Organisation (ECSO). The latter is supposed to represent a wide variety of stakeholders, such as large companies, start-ups, research centres, universities, clusters and associations, as well as Member States’ local, regional and national administrations. The partnership is expected to trigger €1.8 billion of investment by 2020 under the Horizon 2020 initiative, of which the EU allocates a total budget of up to EUR 450 million.

In the accompanying communication, the Commission identifies the importance of collaborative efforts, transparency and information sharing in the area of cybersecurity. In regard to information sharing, the Commission acknowledged the difficulty businesses have in sharing information about cyberthreats with their peers or the authorities, for fear of possible liability for breach of confidentiality. In this regard the Commission intends to set up an anonymous information exchange in order to facilitate such intelligence sharing.

In addition, the Commission stressed “the lack of interoperable solutions (technical standards), practices (process standards) and EU-wide mechanisms of certification” that is affecting the single market in cybersecurity. There is no doubt that such concerns can be significantly reduced by using Free Software as much as possible. The security advantages of Free Software have, among others, previously been recognised by the European Parliament in its own-initiative report on the Digital Single Market. Therefore, within the anticipated establishment of the PPP for cybersecurity (cPPP), the Commission highlights that:

In this context, the development of open source software and open standards can help foster trust, transparency and disruptive innovation, and should therefore also be a part of the investment made in this cPPP.

The newly established ECSO, whose role is to support the cPPP, is currently calling for members in different groups. It is currently unclear how membership will be divided between these groups; however, the stakeholders’ platform is intended to be mostly industry-led.

We hope that the Commission will in practice uphold its plans to include Free Software communities in standardisation processes, as indicated in several documents throughout the whole Digital Single Market initiative, including but not limited to the area of cybersecurity.

Let's Encrypt torpedoes cost and maintenance issues for Free RTC

 - fsfe | 13:34, Monday, 11 July 2016

Many people have now heard of the EFF-backed free certificate authority Let's Encrypt. Not only is it free of charge, it has also introduced a fully automated mechanism for certificate renewals, eliminating a tedious chore that has been imposed on busy sysadmins everywhere for many years.

These two benefits - elimination of cost and elimination of annual maintenance effort - imply that server operators can now deploy certificates for far more services than they would have previously.

The TLS chapter of the RTC Quick Start Guide has been updated with details about Let's Encrypt so anybody installing SIP or XMPP can use Let's Encrypt from the outset.

For example, somebody hosting basic Drupal or Wordpress sites for family, friends and small community organizations can now offer them all full HTTPS encryption, WebRTC, SIP and XMPP without having to explain annual renewal fees or worry about losing time in their evenings and weekends renewing certificates manually.

Even people who were willing to pay for a single certificate for their main web site may have baulked at the expense and ongoing effort of obtaining certificates for their SMTP mail server, IMAP server, VPN gateway, SIP proxy, XMPP server, WebSocket and TURN servers too. Now they can all have certificates.

Early efforts at SIP were doomed without encryption

In the early days, SIP messages were transported across the public Internet in UDP datagrams without any encryption. SIP itself wasn't originally designed for NAT, and a variety of home routers were built with "NAT helper" algorithms that detect and modify SIP packets to try to work through NAT. Sadly, in many cases these attempts to help actually clash with each other and lead to further instability. Meanwhile, rogue ISPs could easily detect and punish VoIP users by blocking their calls or even cutting their DSL line. Operating SIP over TLS, usually on the HTTPS port (TCP port 443), has been an effective way to quash all of these issues.

While the example of SIP is one of the most extreme, it helps demonstrate the benefits of making encryption universal to ensure stability and cut out the "man-in-the-middle", regardless of whether he is trying to help or hinder the end user.
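As a sketch of what "SIP over TLS" means in practice, the snippet below wraps an ordinary TCP socket in verified TLS before any SIP messages are exchanged. The hostname and the choice of port 443 are illustrative, not tied to any particular SIP stack:

```python
import socket
import ssl

def open_sip_tls(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a verified TLS connection for SIP signalling.

    Running on port 443 makes the traffic look like ordinary HTTPS,
    so "NAT helpers" and meddling middleboxes cannot see or rewrite
    the SIP messages in transit.
    """
    ctx = ssl.create_default_context()   # verifies the server certificate chain
    raw = socket.create_connection((host, port))
    # server_hostname enables SNI and hostname verification,
    # cutting out any man-in-the-middle.
    return ctx.wrap_socket(raw, server_hostname=host)
```

Real SIP stacks handle this internally, but the principle is the same: the encrypted channel is established first, and everything the ISP or router sees is opaque TLS.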

Is one certificate enough?

Modern SIP, XMPP and WebRTC require additional services, such as TURN servers and WebSocket servers. If they are all operated on port 443 then it is necessary to use a different hostname for each of them. Each hostname requires a certificate, and Let's Encrypt can provide those additional certificates too, without additional cost or effort.

The future with Let's Encrypt

The initial version of the Let's Encrypt client, certbot, fully automates the workflow for people using popular web servers such as Apache and nginx. The manual or certonly modes can be used for other services but hopefully certbot will evolve to integrate with many other popular applications too.

Currently, Let's Encrypt's certbot tool issues certificates to servers running on TCP port 443 or 80. These are considered privileged ports, whereas any port above 1023, including the default ports used by applications such as SIP (5061), XMPP (5222, 5269) and TURN (5349), is not. As long as certbot maintains this policy, it is generally necessary either to run a web server for the domain associated with each certificate or to run the services themselves on port 443. There are other mechanisms for domain validation, and various other clients support different subsets of them. Running the services themselves on port 443 turns out to be a good idea anyway, as it ensures that RTC services can be reached through HTTP proxy servers that refuse to let the HTTP CONNECT method access any other ports.

Many configuration tasks are already scripted during the installation of packages on a GNU/Linux distribution (such as Debian or Fedora) or when setting up services using cloud images (for example, in Docker or OpenStack). Due to the heavily standardized nature of Let's Encrypt and the widespread availability of the tools, many of these package installation scripts can be easily adapted to find or create Let's Encrypt certificates on the target system, ensuring every service is running with TLS protection from the minute it goes live.
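A package's post-install hook might find or create a certificate along these lines. This is only a sketch: the hostname is made up, the paths follow certbot's default layout, and the certbot invocation may need extra flags (e.g. for non-interactive use) depending on the version:

```python
import subprocess
from pathlib import Path

LIVE_DIR = Path("/etc/letsencrypt/live")   # certbot's default output layout

def cert_path(hostname: str) -> Path:
    """Where certbot places the full certificate chain for a hostname."""
    return LIVE_DIR / hostname / "fullchain.pem"

def ensure_certificate(hostname: str) -> Path:
    """Find an existing Let's Encrypt certificate or create one with certbot.

    A SIP/XMPP package's install script could call this so the service
    starts with TLS protection from the outset.  The standalone
    authenticator answers the domain-validation challenge itself, so the
    privileged port must be free while it runs.
    """
    cert = cert_path(hostname)
    if not cert.exists():
        subprocess.check_call(
            ["certbot", "certonly", "--standalone", "-d", hostname])
    return cert
```

The service's own configuration would then point at the returned fullchain.pem and the privkey.pem next to it.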

If you have questions about Let's Encrypt for RTC or want to share your experiences, please come and discuss it on the Free-RTC mailing list.

EU-FOSSA needs your help – A free software community call to action

FLOSS – Creative Destruction & Me | 13:00, Monday, 11 July 2016

The EU-FOSSA project’s mission is to “offer a systematic approach for the EU institutions to ensure that widely used critical software can be trusted”. The project was triggered by recent software security vulnerabilities, especially the Heartbleed issue. An inspired initiative by EU parliamentarians Max Andersson and Julia Reda, the pilot project “Governance and quality of software code – Auditing of free and open source software” became FOSSA. Run under the auspices of DIGIT, the project promised “improved integrity and security of key open source software”. I had been interviewed as a stakeholder by the project during work package 0 (“project charter, stakeholder interviews and business case”), and later worked with the FSFE group that provided input and comments on the project to EC-DIGIT. While I believe that the parliamentary project champions and the people involved in the project at EC-DIGIT are doing great work, I am worried that the deliverables of the project are beginning to fall short of expectations. I also think the free software community needs to get more involved. Here is why.

When I was approached by the project in the early phase to be interviewed as a representative of the Open Invention Network and a European free software activist, I felt very motivated to help get the project off to a good start. Already during the initial interview, doubts emerged about the approach and the apparently pre-conceived ideas of the consultants who had been tasked with the project. Essentially, it seemed they intended to audit Drupal and wanted the European Commission to do code reviews. These doubts grew stronger when the project published a survey about which programs to audit that listed 7-zip as a critical free software component, alongside other odd choices like “Linux – selected system library” without any qualification. Recently the project began publishing its deliverables, and the results gave me and others involved pause. For example, have a look at Matthias’ comments here. The recommendations show a systematic lack of understanding of the free software/open source community process and the nature of the collaborative peer production these communities perform.

Here is a highlight, a “conclusion and recommendation” from section 4.1 Project Management in deliverable 1: “FOSS communities should … use a formal methodology based on PMBOK, depending on their possibilities.” PMBOK, or the “project management body of knowledge” (a fat and well-studied volume on my book shelf), is essentially the bible of waterfall project management, based on the assumption of working in a large, hierarchical organisation. It is immensely useful in such environments, especially in the public sector. Just not in the wider free software community, which uses mechanisms such as self-identification to distribute tasks and depends on the voluntary contributions of its participants.

Here is another one, from section 4.2 Software Development Methodology: “FOSS communities should use … agile-based methodologies, according to their resources, so as to make software development more efficient”. Daily scrum, anybody, and “the list” of tasks that are allowed to be worked on? Contributors to free software communities, be they individuals or companies, participate voluntarily. In particular, they do not take assignments or orders from a central coordinating agent, a role which incidentally does not exist in most relevant communities. Agile methods can be extremely useful and help software teams a lot, but imposing them on a free software project is not an option. The recommendation is also questionable in its premise – that increasing efficiency in community software development is necessary. There are communities with very efficient software development processes (the Linux kernel developers, for example) and others that are not so great. However, efficiency may depend on the number of active contributors, the complexity of the project goals, or the skill and experience level of the average contributor. Managing development efficiency is a community governance task that defies simplistic answers like “use this method”. Those are platitudes, and taking them at face value runs the risk of damaging the community development culture.

I could point out more, but as a third example, allow me to highlight a few aspects that did not make it into the report: In deliverable 4, section 4.5 Relevant opinions and advice from interviewees, item 5 says “One possibility is for the European Institutions to do code reviews and share the results with the affected communities.” Sure, however the alternative recommended by FSFE and me and anybody I spoke to about this was for European institutions to not perform code reviews themselves because that is not their area of expertise, and instead to facilitate code reviews in the communities, in cooperation with universities and the national IT security agencies of the member states that already have similar programs in place or are working on them. The pre-conceived idea – the EC should perform the audits – shines through.

In short, the results so far are disappointing, which is sad because the idea behind FOSSA is great, and we should applaud the EU/EC project team for their work and the initiative they took. However, the parties performing the research did not hear or fully understand the input and feedback provided by the community. The questionable recommendations are based on a lack of understanding that may in part be caused by tasking generalist research contractors with studying the subject. Free software community management is a profession with a difficult subject matter. The fact that everybody can join and participate does not mean that the underlying social process is easy and intuitive for outsiders to understand. We don’t ask painters to recommend medical practices, either.

Because of that, I believe the EU-FOSSA project can still be fixed. The free software community needs to get closer to the project and more involved: study the deliverables and provide feedback and alternative recommendations. The project partners are encouraged to work more directly with the free software communities, and to adopt open and collaborative processes in the project similar to the ones used in the communities themselves. More direct actions to improve the process in the FOSSA project are possible. If this means that EC-DIGIT has to step out of its usual procurement routine and set of study suppliers, so be it. After all, this is 1 million euro definitely spent for the common good.

Image credit: J. Albert Bowden II, CC-BY, unmodified.

Filed under: CreativeDestruction, English, FLOSS, KDE, Linux, Open Invention Network, OSS Tagged: Creative Destruction, FLOSS, free software communities, free software foundation europe, FSFE, Linux, Open Governance, peer production

II Technoshamanism Meeting In Berlin

agger's Free Software blog | 09:21, Monday, 11 July 2016

We are organizing the II Technoshamanism Meeting in Berlin!
The event will take place on Sunday, July 17 at TOP eV, Schillerpalais (Schillerpromenade 4 – Berlin) from 2pm to midnight.
Do you remember our first encounter in Berlin?


Once again, we are calling all the cyborgs, witches, heretics, technoshamans, programmers, hackers, artists, alchemists, thinkers and everyone who might be curious to gather on Sunday, July 17, from 14:00 to midnight, in order to create discussions and practices regarding technoshamanism, thus strengthening our network in Berlin.
This time, it will be a little different than the first one. The event will take place on a single day, and the proposed subject which will form the basis of the activities will be: II INTERNATIONAL FESTIVAL OF TECHNOSHAMANISM.


In April 2014 we held the I INTERNATIONAL TECHNOSHAMANISM FESTIVAL in Arraial d'Ajuda, in the south of Bahia, Brazil, at a permaculture and agroforestry site (Itapeco) in the vicinity of the Pataxó indigenous people.
It was an incredible meeting with many interesting projects, and with the incisive participation of the indigenous Pataxó group of “Aldeia Velha”. Two years later, a different Pataxó community invited us to hold the II International Festival inside their community: Aldeia Pará, the mother community of the Pataxó people. It will happen in Caraíva, Aldeia Pará, in one of the most beautiful and rustic villages in southern Bahia.
Some videos of the indigenous Pataxó talking about the festival can be seen here:
We want to take the opportunity of our stay in Berlin to hold the II meeting in order to strengthen the network and introduce the theme of the II Festival, which is “resistance and networks in the anthropocene”. We will follow the principles of technoshamanism, which aims to connect new technologies with ancestral ones in order to repair the historical division between the two kinds of knowledge. Technoshamanism intends to create new inputs for unorthodox ways of thinking regarding the development of free technology.
We know that there are many interesting projects going on in the DIY scene in Berlin dealing with matter, alterity, interspecifics, psychogeophysics, the transcultural, the body and so on. We want to bring these practices closer to the technoshamanism environment.
We have here some ideas:
1-  What is the relationship between Hidden Services and the practices of revelation?
2- What is the connection between matter transformation and subjective transformation?
3- Is it possible to picture an ancestorfuturism without linearity?
4- What does it mean to make DIY rituals or free cosmogonies in a city as transcultural as Berlin?
All kind of practices in free technologies are in our view fundamental – landscape, body knowledge, electronics, healing rituals…
In the afternoon we will hold various radio debates and workshops, and after 21:00 we will hold a technoshamanist ritual in the basement. (Note: it is not hot inside.)
We do it for free, just because we want and can!!
The program is still open!!
If you want to take part in this intense day, you are very welcome.
Carsten Agger (Denmark) – Baobáxia – an internet between quilombos – a community organized effort to preserve the cultural heritage of rural, often off-line and very dispersed Afro-Brazilian communities – and to liberate digital cultural work from the behemoths of Google and Facebook –
Peggy Sylopp (Germany) – A community of changemakers seeks to create a collaborative workspace for all who are developing innovative solutions linked to the social challenge of migration
16:00 – OPEN SOURCE OBSERVATORIES – Open Source Observatory is a distributed network of sites and gatherings for the observation of satellites, spacecraft and space junk.
Kei Kreutler (USA – Germany) will present her research with a space laboratory
18:00 “ritual preparation”
By Fabiane M. Borges, and…
Participants are invited to experience a number of transnarratives and intertextual techniques to produce a fiction whose elements will underlie the DIY ritual.
Here is the call for such an immersive process organized in Rio de Janeiro, Brazil (Capacete):
Documentation of this process:
Obs: 1) Those interested need to register by email;
2) All who register should start writing down their own dreams to bring to the workshop;
3) It is important for us to collaborate with all people and groups who want to work on the ritual.
Performing Languages: use of body techniques, improvisation scene , building individual and collective actions from auto-biographical studies , building expressive languages , ritualization , gestualization , states of presence, among others.
Noisecracy: use of techniques of sound language , noise production ( digital and analog) , vocalization, narrative improvisation , building collective states of listening . Understanding the noise and disruption of communication beyond intelligibility: emission – redundancy – reception. The idea here is to build languages which exist between humans and other species .
Fictional narratives: fictionalization, development of collective writing transnarrative characters, character development, environments, contexts. Interescritures, cosmogonical production, mythical, metaphysical, several ontologies, treaties, free association writing.
space between
1- OHMNOISE Softwar 2016 – A technologic ritual for reactivating your hidden memories and energy – with Markus Schwill – Duration 23 minutes.
2- COLLECTIVE IAUZK/U – (2nd edition this 2016 summer): Viktor Vejvoda, Gustavo Sanromán, Jana Douda and Ras Damasta will give psychonautic assistance and astronomical-weird visual projections, and will present and perform possibilities for the reverse engineering and re-use of old laptop batteries, along with ways of amateur diagnostics, recharging and use that everyone can adapt. A recycled source of energy via Li-Ion 18650 battery cells will be used by the whole team in an adaptive-disposable way. We will also collectively and temporarily behave as open-becomers, adapting ourselves to the communal dynamic that will take place.
3- MOANA MAYALL – sing, make noises, opera.
She’s gonna use aromatic herbs, local sound massages and other sense manipulations to provide intimate experiences during the day, and some hardcore vocals and dancing at night. Both are based on the unknown and kaleidoscopic work of Emma Eisenstein, avant-gardist botanist and first music-ethnologist of Europe.
5- VANESSA RAMOS-VELASQUEZ – will bring a tree slice or its imprint containing the traces of its growth rings and sonify the data extracted in a dendrochronology lab from the 128 rings produced by the tree during its 128 years of living near the Rote Kaserne in Potsdam.
6- ANA CARBIA – Body improvisation, dance, catharsis
Meeting organized by:
Fabiane M. Borges (Brazil) and Carsten Agger (Denmark) in collaboration with TOP eV and Freifunk Berlin.
Technoshamanism network –
Free digital networks and sharing of internet self organized by communities. The Freifunk Network in Berlin and Germany introduces itself, the social and political impact and the technical realization.
Ana Carbia, dancer, shiatsu therapist and Kundalini yoga teacher. She began in Buenos Aires studying contemporary dance. Since the 90s she has worked with a Butoh school in Berlin; some of the principles of this dance, such as seeking movement from the essential rather than from choreography, and the magic of the moment, drew her close to improvisation dance and shamanism. Since the beginning of 2013 she has given a workshop on researching movement, sound and voice. Since 2015 she has been a co-founder of “La Siembra”, a group for interaction and improvisation.
Carsten Agger is a software developer, activist and writer who has been active in social movements, for free software and civil rights and against racism and colonial wars, for twenty years. Trained as a theoretical physicist he works as a free software developer, contributes to the Baobáxia project and co-organized the LibreOffice Conference 2015. He wrote a book about the Qur’an and is currently studying Norse religion and language for a comparative project. Carsten is based in Aarhus, Denmark.
Fabiane M. Borges holds a PhD in clinical psychology and is an essayist and artist. Her research is about space-art, art, technology, shamanism, performance and subjectivity. She usually calls her own work an “immersive process”.
Freifunk stands for distributing free networks, democratizing media of communication and promoting local social structures. By interconnecting whole neighborhoods, we want to bridge the digital gap and establish a free and independent network infrastructure. More precisely, the aim of Freifunk is installing open wifi networks and interconnecting them. This facilitates free data “air traffic” in the whole city within the Freifunk network. In short, Freifunk is an open, non-commercial, non-hierarchical initiative for free networks.
Kei Kreutler is a researcher, web developer and community facilitator interested in networked practices for nomadic groups and applied autonomous living. She currently contributes to @unMonastery, @OSObservatory, @TransforMap, @2nd__foundation, @SatNOGS, and is Spatial Advisor at Large for the GPA.
Markus Schwill aka Ohmnoise is a Berlin artist. His main focus these days is researching and, in a positive way, manipulating the human and universal soul. His experiments and knowledge are integrated into a ritual-performance based on ancient intuition and modern technologies. He calls his performances SOFTWAR, to create an aesthetically acceptable contention in reference to our inner world and the actual daily horror news. He has worked since 1981 in the fields of wired music, later adding video and fine arts, and was a curator for hundreds of events in the Berlin alternative art scene.
Moana Mayall is a multiartist from Rio based in Berlin. At the moment she goes through a portal experiment between the Complexo do Alemão (=”German Complex”, the biggest favela complex in Rio, Brazil) and the enigmas of the Deutsch Komplex.
Peggy Sylopp is a Dipl. computer scientist and artist, and the founder of PexLabs. Her passion lies in the combination of art, technology and science, both in her own artistic works and in the design of workshops and campaigns. Her Public Policy Master’s thesis researched the potential of artistic work to pass on technical knowledge. Her installative and performative light arts are based, among other things, on motion detection. In 2008 she was nominated, together with her partner Giovanni Longo, for the German Sound Art Award.
Collective IAUZK/U – Kooperativa-Extitution dedicated to open source, peer to peer, social design, urban gardening, sharing economy and DIY strategies – (Viktor Vejvoda, Sebastien Mazauric, Gustavo SanRoman). They offer psychonautic assistance and astronomical-weird visual projections, will temporarily behave as a semiograph, drawing combinations of shapes and subtle meanings under the mixed influence of the ritual and personal obsessions (absurdity, the human realm, fantasies, aesthetics and panache), and will present and perform possibilities of reverse engineering and re-use of old laptop batteries, along with ways of amateur diagnostics, recharging and use that everyone can adapt. A recycled source of energy via Li-Ion 18650 battery cells will be used by the whole team in an adaptive-disposable way.
Solene Garnier is a multi-directional performer, combining drama, dance and music and integrating her body and voice with a range of instruments including winds, strings, keyboards and percussion. She is also, in her own words, a she-beast who eats raw liver from ducks she strangles with her bare hands while running through the hills.
top e.V. Association for the Promotion of Cultural Practice has been operating in Berlin since June 2002. Our members are artists, researchers and activists, whose activities range from individual research to curating project spaces and international collaboration. Our infrastructure supports projects that pursue an interdisciplinary approach, support international exchange or deal with non-commercial attitudes. This includes, but is not limited to, project space, webspace, channels of distribution and networking.
Vanessa Ramos-Velasquez is a media artist and transdisciplinary researcher from Brazil and the United States, where she was a Fulbright scholar in Art & Design and Media Studies. Her artistic practice started with structural/materialist image-making via cameraless animation for her experimental films, moving to VJing as part of her performance art and installation pieces. Her most recent work intersects art, design, culture, science and technology, ranging from sonification of data in dendrochronology to videomicrography and creative coding for immersive interactive performative installations where the public is invited to integrate the assemblages.

FOSSA - Now we need feedback from the real experts

Matthias Kirschner's Web log - fsfe | 02:30, Monday, 11 July 2016

The goal of the "Free and Open Source Security Audit" (FOSSA) pilot project is to increase the security of Free Software used by the European institutions. The FSFE has been following the project since its very beginning in 2014. I am concerned that if the project stays on its current course, the European institutions will spend a large part of the 1 million Euro budget without any positive impact on the security of Free Software, and the result will be a set of consultancy reports nobody will ever read. But if we work together and communicate our concerns to the responsible people in the Parliament and the Commission, there might still be a valuable outcome.

The [FOSSA] project should result in a systematic approach for the EU institutions to ensure that widely-used key open source components can be trusted. The project will also enable the EU institutions to contribute to the integrity and security of key open source software. The EU-FOSSA project is managed by the European Commission's Directorate-General for Informatics (DIGIT). It was initiated by the European Parliament. (see Joinup.)

Since the European Parliament approved funding for FOSSA in 2014, the FSFE has been involved in the project. We gave input at the beginning, we suggested interview partners to Everis – the consultancy company which the European Commission contracted for the project – and we were interviewed by them ourselves.

Unfortunately our experience from the participation leaves us concerned.

Our concerns

The whole analysis of Free Software which is published on Joinup is based on a small sample: 14+2 communities were included, and interviews were conducted with only 5 of them (Apache – tomcat, Drupal, Free Software Foundation [sic – it was the FSFE], LibreOffice, OWASP Community). They did not contact several people from the security teams of Free Software initiatives whom we had recommended and introduced to them.

The interviews which Everis conducted for the European Commission were quite strange: the questions about security were either very low-level or very academic. There was also no follow-up after the interviews. They had no good knowledge about Free Software when they started, and although we provided them with lots of information, the results show that this did not change.

There was little communication about the project until the recent publication of over 550 pages and a consultation about which program should be audited. In our comments from 8 July 2015 we recommended that they "Release Early, Release Often: Do not produce a huge report and publish this at the end of the project. Publish ideas, criteria, results, and next steps as soon as possible. Thereby you enable others to give you feedback." Publishing 550 pages after almost a year of silence does not allow others to comment on misunderstandings and systematic errors made early in the project – see below for some examples.

In general, the project is missing the step of how security problems will actually be fixed. At the moment it proceeds in three phases: 1. the EU does code reviews; 2. it provides the outcome to the Free Software communities, considering "industry standards for submitting code reviews"; 3. the EU institutions' users then have more secure software. But this approach does not answer the question of how those problems actually get fixed, and by whom. For example, where do the money and resources for the "communities" come from?

I cannot assess how well the analysis for the European institutions was done by the consultancy companies Everis (Spain), KPMG (Italy) and Trasys (Belgium). But the deliverables concerning the Free Software communities include a lot of factually wrong information and show a lack of understanding of Free Software.

Factual errors

Those are just some points I found by reading through the deliverables once. My guess is that it would take me at least a week to refute most of the problems in this report line by line. But maybe you can help point out the most relevant ones, so we can summarise them for the European Commission.

Unfortunately, the best one-page summary of the consultants' Free Software competence in the 568 pages of material is page 46 with the "License Overview": some things are correct, but many are wrong or do not make sense.

License overview from FOSSA deliverables - full of mistakes

But let us continue with some other examples. In the "Analysis of Software Development Methodologies Used in the FOSS Communities" there are also many errors or missing information which should have been easy to research. For example:

  • Debian often gets "N/A" although we proposed them an interview partner from Debian's security team, and lots of the information could have been researched with public information.
  • "Generate LTS (Long Time Support) Releases" has a "N/A" for Debian and Red Hat.
  • OpenSSL is marked as used by just two (Debian and Red Hat have an "N/A" again).
  • According to the deliverable, Red Hat is not using "OpenSSH".
  • GnuPG is marked as not being used by several of them, and "JAVA/JAVA EE" is mentioned as related technology of GnuPG.
  • Java and PHP are "N/A" for Debian but Python is checked, while Red Hat again has an "N/A" for all of them.
  • Why does LibreOffice have so many question marks?

Lack of understanding

  • In the deliverables they often write that Free Software communities consist of volunteers. It is sometimes mentioned that contributors are paid by companies, but this is not really considered further in the analysis.

  • It is not even clear whether they understood how commercial Free Software works. At least this comment does not show an understanding that, when it comes to licenses, there is no difference between commercial and non-commercial Free Software: "Free for Open Source: Some communities provide their software as free for open source projects, meaning that projects which are commercial or proprietary in nature have to pay the license of the product."

  • The consultancy company focused on web services. They analysed Debian and Red Hat as if they were specific web tools – see, for example, recommendations like "Use PHP Snippets Sparingly and with Caution" or "SEO Best Practices" for GNU/Linux distributions.

Next steps

I am aware that those comments sound quite pessimistic. But I hope that if we give good feedback to the European Commission and the Parliament, and if they implement it, we might still be able to get some positive results for the security of Free Software. Crucial for that goal is that:

  • The Commission communicates the status in a digestible format and enables people to give feedback.
  • They publish the deliverables on a platform where people from Free Software initiatives and companies can make annotations, or at least publish the deliverables in an editable Open Standard format like ODT, to make it easier to give feedback.
  • The Commission and the consultancy companies include more Free Software experts in the process, especially the people from the Linux Foundation's Core Infrastructure Initiative and from Mozilla's Secure Open Source Fund.
  • The consultancy companies send short summaries to the projects interviewed, so they can check whether the summaries about their projects are accurate.

What you can do

As you are the real experts, we now need your feedback. What do you think about the deliverables themselves, and about our comments so far? I am especially interested in whether you benefit from the recommendations.

Send your feedback to the FSFE's discussion list, to me directly, or blog about it and send me the link.

Afterwards I will send updated feedback to the Commission and to the members of the European Parliament Julia Reda and Max Andersson, who initiated the pilot project.

Update 2016-07-11: Mirko Böhm, who was also involved in the project from the beginning, wrote down his comments on the deliverables and encourages you to give feedback, too.

Thursday, 07 July 2016

Can you help with monitoring packages in Debian and Ubuntu? - fsfe | 09:14, Thursday, 07 July 2016

Debian (and consequently Ubuntu) contains a range of extraordinarily useful monitoring packages.

I've been maintaining several of them at a basic level but as more of my time is taken up by free Real-Time Communications software, I haven't been able to follow the latest upstream releases for all of the other packages I maintain. The versions we are distributing now still serve their purpose well, but as some people would like newer versions, I don't want to stand in the way.

Monitoring packages are for everyone. Even if you are a single user or developer with a single desktop or laptop and no servers, you may still find some of these packages useful.

For example, after doing an apt-get upgrade or dist-upgrade, it can be extremely beneficial to look at your logs in LogAnalyzer with all the errors and warnings colour-coded so you can see at a glance whether your upgrade broke anything. If you are testing new software before a release or trying to troubleshoot erratic behavior, this type of colour-coded feedback can also help you focus on possible problems without the eyestrain of tailing a logfile.
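As a toy illustration of the kind of severity-based colour-coding LogAnalyzer performs, here is a minimal Python sketch; the sample log lines and the set of severity keywords are invented for the example, not taken from any real package:

```python
import re

# Classify each syslog-style line by severity so that problems stand
# out at a glance after an upgrade, the way a colour-coded log viewer
# would highlight them.
SEVERITY = re.compile(r"\b(error|warning|failed)\b", re.IGNORECASE)

def classify(line: str) -> str:
    match = SEVERITY.search(line)
    return match.group(1).lower() if match else "info"

# Invented sample lines standing in for /var/log/syslog content.
sample = [
    "Jul  7 09:14:01 host CRON[123]: session opened",
    "Jul  7 09:14:02 host kernel: WARNING: something odd",
    "Jul  7 09:14:03 host daemon[45]: Error: config not found",
]

counts = {}
for line in sample:
    severity = classify(line)
    counts[severity] = counts.get(severity, 0) + 1
print(counts)  # {'info': 1, 'warning': 1, 'error': 1}
```

A real viewer does far more (parsing timestamps, filtering, pagination), but the core idea is the same: scan once, bucket by severity, surface the errors.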


How to help

A good first step is simply looking over the packages maintained by the pkg-monitoring group and discovering whether any of them are useful for your own needs.

You may be familiar with alternatives that exist in Debian; if so, feel free to comment on whether you believe any of these packages should be dropped, by creating a wishlist bug against the package concerned.

The next step is joining the pkg-monitoring mailing list. If you are a Debian Developer or Debian Maintainer with upload rights already, you can join the group on alioth. If you are not at that level yet, you are still very welcome to test new versions of the packages, upload them, and then join the mentors mailing list to look for a member of the community who can help review your work and sponsor an upload for you.

Each of the packages should have a README.source file in the repository explaining more about how the package is maintained. Familiarity with Git is essential. Note that all of the packages keep their debian/* artifacts in a branch called debian/sid while the master branch tracks the upstream repository.

You can clone the Debian package repositories for any of these projects from alioth and build them yourself, try building packages of new upstream versions and try to investigate any of the bug reports submitted to Debian. Some of the bugs may have already been fixed by upstream changes and can be marked appropriately.

Integrating your monitoring systems

Two particular packages I would like to highlight are ganglia-nagios-bridge and syslog-nagios-bridge. They are not exclusively for Nagios and could also be used with Icinga or other monitoring dashboards. The key benefit of these packages is that all the alerting is consolidated in a single platform, Nagios, which is able to display them in a colour-coded dashboard and intelligently send notifications to people in a manner that is fully configurable. If you haven't integrated your monitoring systems already, these projects provide a simple and lightweight way to start doing so.

Wednesday, 06 July 2016

Avoiding SMS vendor lock-in with SMPP - fsfe | 09:43, Wednesday, 06 July 2016

There is increasing demand for SMS notifications about monitoring alerts, trading notifications, flight delays and other events. Various companies are offering SMS transmission services to meet this demand and many of them aggressively pushing their own proprietary interfaces to the SMS world rather than using the more open and widely supported SMPP.

There is a good reason for this: if users write lots of scripts to access the REST API of an SMS service, they won't be able to change their service provider without changing all their code. Well, that is good if you are the SMS vendor, but not if you are their customer. If an SMS gateway company goes out of business or has a system meltdown, customers linked to its REST API will face a much bigger effort to migrate to a new provider than those using SMPP.

The HTTP REST APIs offered by many vendors hide some details of the SMS protocol and payload. At first glance, this may feel easier. In fact, this leads to unpredictable results when delivering messages to users in different countries and different character sets/languages. It is better to start with SMPP from the beginning and avoid discovering those pitfalls later. The SMS Router free/open source software project helps overcome the SMPP learning curve by using APIs you are already familiar with.
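To give a flavour of what sits beneath those REST wrappers: SMPP is a binary protocol whose PDUs start with a fixed 16-byte header of four big-endian 32-bit integers (total length, command id, command status, sequence number). The sketch below builds an enquire_link PDU, the keep-alive exchanged on SMPP connections, by hand; in practice a library such as jSMPP does this for you, and this is only meant to illustrate the wire format:

```python
import struct

# Every SMPP PDU begins with a 16-byte header of four big-endian
# 32-bit integers. enquire_link (command id 0x00000015) has no body,
# so the whole PDU is just the header and command_length is 16.
def enquire_link(sequence_number: int) -> bytes:
    command_length = 16          # total PDU length, header included
    command_id = 0x00000015      # enquire_link
    command_status = 0           # always 0 in requests
    return struct.pack(">IIII", command_length, command_id,
                       command_status, sequence_number)

pdu = enquire_link(1)
print(len(pdu))   # 16
print(pdu.hex())
```

Message submission (submit_sm) adds a body with addressing, data coding and the payload on top of the same header, which is exactly where the character set and country pitfalls mentioned above live.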

More troublesome for large organizations, some of the REST APIs offered by SMS gateways want to make callbacks to your own servers: this means your servers need public IP addresses accessible from the Internet. In some organizations that can take months to organize. SMPP works over one or more outgoing TCP connections initiated from your own server(s) and doesn't require any callback connections from the SMPP gateway.

SMS Router lets SMS users have the best of both worlds: the flexibility of linking to any provider who supports SMPP and the convenience of putting messages into the system using any of the transports supported by an Apache Camel component. Popular examples include camel-jms (JMS) for Java and J2EE users, STOMP for scripting languages, camel-mail (SMTP and IMAP) for email integration and camel-sip (SIP) or camel-xmpp (XMPP) for chat/instant messaging systems. If you really want to, you can also create your own in-house HTTP REST API too using camel-restlet for internal applications. In all these cases, SMS Router always uses standard SMPP to reach any gateway of your choice.

Architecture overview

SMS Router is based on Apache Camel. Multiple instances can be operated in parallel for clustering, load balancing and high availability. It can be monitored using JMX solutions such as JMXetric.

The SMPP support is based on the camel-smpp component which is in turn based on the jSMPP library, which comprehensively implements the SMPP protocol in Java. camel-smpp can be used with minimal configuration but for those who want to, many SMPP features can be tweaked on a per-message basis using various headers.

The SMPP gateway settings can be configured and changed at will using the configuration file. The process doesn't require any other files or databases at runtime.

The SMS Router is ready-to-run with one queue for sending messages and another queue for incoming messages. The routing logic can be customized by editing the RouteBuilder class to use other Camel components or any of Camel's wide range of functions for inspecting, modifying and routing messages. For example, you can extend it to failover to multiple SMPP gateways using Camel's load-balancer pattern.

SMS Router based projects are already used successfully in production, for example, the user registration mechanism for the Lumicall secure VoIP app for Android.

Getting started

See the README for instructions. Feel free to ask questions about this project on the Camel users mailing list.


SMS is not considered secure; the SMS Router developers and telecommunications industry experts discourage the use of this technology for two-factor authentication. Please see the longer disclaimer in the README file and my earlier blog about SMS logins: an illusion of security. The bottom line: if your application is important enough to need two-factor authentication, do it correctly using smart cards or tokens. There are many popular free software projects based on these full cryptographic solutions, for example, the oath-toolkit and dynalogin.

Monday, 04 July 2016

FSFE summit: Registration open

English Planet – Dreierlei | 14:34, Monday, 04 July 2016

This week is a good week: registration for the FSFE summit is open! You will also find a first final schedule of our external speakers and all the background information about the idea, the setting and the venue. Please be a bit more patient about our community schedule, which is still being finalized.

FSFE summit header

With this blogpost I would also like to take the chance to thank the amazing team behind the summit, who invested the last weeks of their online time, coffee breaks and even their daydreams to offer you an event that is worth coming to and worthy of being the official 15-years-of-FSFE celebration: pan-European, by the community, old stagers and newcomers, in the heart of Berlin, each day another theme, all talks about technology, freedom and society, social events in the evening, the possibility to meet, share and organise …
and all of this organised in full day schedules, full day catering and also full breaks to network with included entrance to the conferences of Qt Contributors, VideoLAN developers, KDAB and KDE Academy as well …

Special thanks go to Cellini Bedi, who does an awesome job taking care of our speakers and helping them with their presentations and accommodation; and to our translators (of whom I am unsure whether they like to be referenced, so I leave it be), who translated our summit pages into Dutch, French, German, Italian, Spanish and Albanian. On this occasion – although the community schedule is not yet published – I can already let slip that there will be dedicated community sessions for FSFE translators and newcomers at the summit, run by André Ockers. And last but not least Elio Qoshi from ura design, who made our beautiful logo and the banner on our summit pages (and the header image of this blogpost).


I guess there is no need to repeat all the information that you can find on our summit pages. Just a personal tip: be quick to register and get your ticket, because space is physically limited. If you are a community member, also take your time to read through the registration guide, to make sure you pay the entrance fee that fits you best.

Finally, please help to spread the word:

History and Future of Cloud Federation

English – Björn Schießle's Weblog | 11:22, Monday, 04 July 2016

Federated Cloud Sharing - Connect self-hosted, decentralized clouds

Federated Cloud Sharing – Connect self-hosted, decentralized clouds

I have now been working for about two years on something called Federated Cloud Sharing. It started on June 23rd, 2014 with the release of ownCloud 7.0. Back then it was simply called “Server to Server sharing”. In all these years I never wrote about the broader ideas behind this technology: why we do it, what we have achieved and where we are going.


The Internet started as a decentralized network, meant to be resilient to disruptions, whether due to accidents or malicious activity. This was one of the key factors which made the Internet successful. From the World Wide Web over IRC, newsgroups and e-mail to XMPP, everything was designed as decentralized networks, which is why, if you are on the Google servers, you can email people at Yahoo. Everybody can set up their own web, e-mail or chat server and communicate with everyone else. Everyone from individuals up to large organisations could easily join the network, participate and build businesses without barriers. People could experiment with new innovative ideas and nobody had the power to stop them or slow them down. This was only possible because all the underlying technology and protocols were built on both Open Standards and Free Software.

This changed dramatically over the last ten years. Open and inclusive networks were replaced by large centralized services operated by large companies. In order to present yourself or your business in public, it was no longer enough to have your own website; you had to have a page on one or two key platforms. For communication it was no longer enough to have an e-mail address or to be on one of the many IRC or XMPP servers; instead, people expected you to have an account on one of the major communication platforms. This created huge centralized networks, with many problems for privacy, security and self-determination. To talk to everybody, you have to have an account on Facebook, at Google, Skype, WhatsApp, Signal and so on. The centralization also made it quite easy to censor people or manipulate their view by determining the content presented to them. The algorithms behind the Facebook news feed or the “what you missed” feature in Twitter are very clever — or so we assume, as we don’t know how they work or how they determine what is important.

In the last few years many initiatives started to solve this problem in various ways, for example by developing distributed social networks. I work in the area of liberating people who share and sync all sorts of data. We saw the rise of successful projects such as ownCloud, Pydio and now, of course, Nextcloud. They all have in common that they built Free Software platforms, based to a large extent on Open Standards, to allow people to host, edit and share their data without giving up control and privacy. This was a huge step towards creating more competition and restoring decentralized structures. But it also had one big drawback: it created many small islands. You could only collaborate with people on the same server, not with others who run their own server. This leads us to the concept of federated cloud sharing.

Server to Server Sharing

The first version of this idea was implemented in ownCloud 7.0 as “Server to Server Sharing”. ownCloud already knew the concept of sharing anonymous links with people outside of the server, and, as ownCloud offered both a WebDAV interface and could mount external WebDAV shares, it was possible to manually hook one ownCloud server into another. Therefore the first obvious step was to add an “Add to your ownCloud” button to these link shares, allowing people to connect such public links with their cloud by mounting them as an external WebDAV resource.

Server to Server sharing

Federated Cloud Sharing

Server to server sharing already helped a lot to establish bridges between the many small islands created by the ability to self-host your cloud solution. But it was still not the kind of integration people were used to from the large centralized services, and it only worked for ownCloud, not across various open source file sync and share solutions.


The next iteration of this concept introduced what we called a “federated cloud ID”, which looks similar to an e-mail address and, like email, refers to a user on a specific server. This ID could then be used in the normal share dialog to share files with people on a different server!
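A federated cloud ID has the shape user@server, so resolving one is essentially a split on the last “@”. The following sketch illustrates the idea; it is not the actual ownCloud/Nextcloud implementation, which also has to cope with details such as URL normalization, but it shows why user names that themselves contain an “@” need the split to happen from the right:

```python
# Split a federated cloud ID of the form <user>@<server> into its
# two parts. rpartition splits on the LAST "@", so user names that
# contain "@" themselves are preserved intact.
def parse_federated_id(cloud_id: str):
    user, sep, server = cloud_id.rpartition("@")
    if not user or not server:
        raise ValueError(f"not a federated cloud ID: {cloud_id!r}")
    return user, server

print(parse_federated_id("alice@cloud.example.org"))
# ('alice', 'cloud.example.org')
```

With the server part in hand, the sharing code knows which remote host to contact on the user's behalf.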

share dialog - federated cloud id

The way servers communicate with each other in order to share a file with a user on a different server was publicly documented, with the goal of creating a standardized protocol. To further the protocol and to invite others to implement it, we started the Open Cloud Mesh project together with GÉANT, a European research collaboration initiative. Today the protocol is already implemented by ownCloud, Pydio and now Nextcloud. This enables people to seamlessly share and collaborate, no matter whether everyone is on the same server or people run their own cloud server based on one of the three supporting solutions.

Trusted Servers

In order to make it easier to find people on other servers, we introduced the concept of “trusted servers” as one of our latest steps. This allows administrators to define other servers they trust. If two servers trust each other, they will sync their user lists; this way the share dialogue can auto-complete not only local users but also users on other trusted servers. The administrator can decide to define the list of trusted servers manually, or allow the server to automatically add every other server to which at least one federated share was successfully created. This way it is possible to let your cloud server learn about more and more servers over time, connect with them and increase the network of trusted servers.
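The auto-add policy described above can be sketched in a few lines: whenever a federated share to another server succeeds, that server joins the trusted list and becomes eligible for user-list syncing. This is an illustrative model only, not the actual server code:

```python
# Minimal model of the "trusted servers" auto-add policy: a server
# becomes trusted once a federated share to it has succeeded, unless
# the administrator has disabled automatic additions.
class TrustedServers:
    def __init__(self, auto_add: bool = True):
        self.auto_add = auto_add
        self.servers = set()

    def share_succeeded(self, server: str) -> None:
        """Called after a federated share to `server` was created."""
        if self.auto_add:
            self.servers.add(server)

    def is_trusted(self, server: str) -> bool:
        return server in self.servers

ts = TrustedServers()
ts.share_succeeded("cloud.example.org")
print(ts.is_trusted("cloud.example.org"))  # True
```

In the real feature the trusted set would then drive the user-list sync that feeds share auto-completion; with auto_add disabled, the administrator curates the list by hand.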


Open Challenges: where we’re taking Federated Cloud Sharing

Of course there are still many areas to improve, for example the way you can discover users on different servers to share with them, for which we are working on a global, shared address book solution. Another point is that at the moment this is limited to sharing files. A logical next step would be to extend this to many other areas like address books, calendars and real-time text, voice and video communication, and we are, of course, planning for that. I will write about this in greater detail in one of my next blog posts, but if you're interested in getting involved, you are invited to check out what we're up to on GitHub and, of course, you can contact me any time.

FSFE Chapter Germany e.V. officially dissolved

Matthias Kirschner's Web log - fsfe | 02:23, Monday, 04 July 2016

Last year in September I wrote about the process and reasons for dissolving "FSFE Chapter Germany e.V.". As I was recently contacted by someone in the Free Software community who also has to dissolve an association, I thought others might also benefit from seeing the last piece of the puzzle.

After the liquidation of Chapter Germany was announced on 5 May 2015 in the "Amtlicher Anzeiger" Hamburg (PDF, page 16), we had to wait another year in case anyone believed we still owed them money. We had been warned to be careful in this period, as some scammers send fake invoices. Luckily nothing like that happened to us, so we could transfer all the remaining money to FSFE e.V., the European association.

Last month I was again at the notary to send a final letter to the register court to inform them that everything is completed from our side. A few weeks later we received the final confirmation from their side that "FSFE Chapter Germany e.V." is officially dissolved (see the picture above).

One task less to concentrate on when solving organisational challenges to reach our mission.

Sunday, 03 July 2016

Debconf streaming and kudos to the video team

Elena ``of Valhalla'' | 13:56, Sunday, 03 July 2016


With DebConf taking place in South Africa, a lot of people (like me) probably weren't able to attend and are missing the cheese and wine party, Mao games and the general socialization that is happening there.

One thing we don't have to miss, however, are the talks: as usual the video team is doing a great job recording and streaming all talks so that people can still participate a bit from home.

What they do, however, requires a lot of manpower, so if you are attending DebConf please consider volunteering to help: from my experience last year they are very nice people who are welcoming towards new contributors, and they hold periodical training sessions to help people get started with the various tasks. More information about video team meetings and training sessions is in the topic of the IRC channel, #debconf-video@OFTC.

I don't think there are cookies involved (which just proves that the video team isn't evil), but you may get a t-shirt and you will get a warm fuzzy feeling of having helped people around the world.

@Debian #debconf

Friday, 01 July 2016

Other people’s thoughts on “Freedom and security issues on x86 platforms”

Paul Boddie's Free Software-related blog » English | 22:54, Friday, 01 July 2016

A couple of months ago, we had a brief discussion on the FSFE discussion mailing list about the topic of “Uncorrectable freedom and security issues on x86 platforms“, but it just came to my attention that a bunch of other people were discussing our discussion, too. Hacker News is, of course, so very “meta”, but fortunately they got onto discussing the actual topic as well.

The initial message in the original discussion advocated adopting the Power computing architecture as a primary hardware platform for Free Software. Now, the Hacker News participants were surprised that nobody mentioned SPARC and yet I was sure that SPARC did get mentioned in our discussion. A brief search doesn’t find any mention of it, however, and I’m embarrassed to admit that I do know about things like LEON and even used SPARC-based hardware for many years. (The Sun 4 workstations at my university had SPARC CPUs, for instance.)

I suppose the disconnect here involves price, availability and performance of readily-available products. Certainly, a free hardware SPARC implementation can be synthesised for an FPGA, but the previous discussion covered things like RISC-V in a similar fashion: it’s nice to be able to deploy a “soft processor” in an FPGA, but customers of computing products usually expect “hard” CPU performance. And you can at least buy ARM and MIPS CPUs, even if they aren’t free hardware implementations: they offer decent-enough performance and support Free Software from the very bottom of the software stack.

The participants in the meta-discussion wondered why MIPS became so popular given that there are licensing fees involved, whereas Sun made certain SPARC designs available under the GPL, and given that the SPARC architecture is supposedly royalty-free. For some manufacturers, this is asking the wrong question: they did not seek to license the patent-encumbered versions of the MIPS architecture; like the OpenRISC initiative, they merely implemented the unencumbered versions instead.

It would be nice to have a high-performance, inexpensive, readily-available free hardware CPU for use in free hardware designs. And of course those designs would support Free Software completely. But until that comes to pass, we have to work with what we can get. And indeed, for whichever architecture seems to be favoured for such a role, we also need to have usable and accessible hardware that is compatible with our favoured architecture so that we may prepare ourselves for the day it finally gets rolled out.

There might be a reason why SPARC isn’t so well supported by things like GNU/Linux distributions. Sadly, unlike various competitors, inexpensive SPARC products seem to be thin on the ground, and without those the efforts to maintain ports of large Free Software collections inevitably grind to a halt, but I would be only too happy for someone to point me to a source of products that I may have overlooked. There is no inherent reason why SPARC couldn’t be a viable platform for Free Software, regardless of what people may have to say about register windows!

Busy/idle status indicator

Elena ``of Valhalla'' | 16:24, Friday, 01 July 2016


About one year ago, during my first conference, I felt the need for some way to tell people whether I was busy on my laptop doing stuff that required concentration, or just passing some time between talks and available for interruptions, socialization or context switches.

One easily available method of course would have been to ping me on IRC (and then probably go on chatting on it while being in the same room, of course :) ), but I wanted to try something that allowed for less planning and worked even in places with less connectivity.

My first idea was a base laptop sticker with two statuses and then a removable one used to cover the wrong status and point to the correct one, and I still think it would be nice, but having it printed is probably going to be somewhat expensive, so I shelved the project for the time being.


Lately, however, I've been playing with hexagonal stickers and decided to design something on this topic, with the result in the figure above: the “hacking” sticker is my first choice, while the “concentrating” alternative is probably useful while surrounded by people who may misunderstand the term “hacking”.

While idly looking around for sticker printing prices I realized that it didn't necessarily have to be a sticker, and started to consider alternatives.

One format I'm trying is inspired by "do not disturb" door signs: I've used some laminating pouches I already had around, which are slightly bigger than credit-card format (but credit-card size would also work, of course), and cut a notch so that they can be attached to the open lid of a laptop.


They seem to fit well on my laptop lid, and apart from a bad tendency to attract every bit of lint within a radius of a few meters, the form factor looks good. I'll try to use them at the next conference to see if they actually work for their intended purpose.

SVG sources (and a PDF) are available on my website under the CC-BY-SA license.

Free Software dreams

Elena ``of Valhalla'' | 13:58, Friday, 01 July 2016


Tonight I dreamt I was inside a game, as a barbarian being invaded by the atlanteans.

I've had the same thing happen to me a few times with Battle for Wesnoth.

Mayyybe it is a sign that lately I've been playing it too much, but I'm quite happy with the fact that free software / culture is influencing my dreams.

Thanks to everybody who is involved into Free Culture for creating enough content so that this can happen.

Wednesday, 29 June 2016

Confsl 2016 the Italian Free Software Conference

Alessandro's blog | 07:27, Wednesday, 29 June 2016

Last weekend, from Friday June 24th to Sunday June 26th, the Italian FS supporters met in Palermo; we also had John Sullivan from the FSF as an invited speaker.

A lot happened during the three days, with plenary sessions in the morning and separate tracks in the afternoons. A sad note, however, is the very limited number of attendees. It looks like our communication was not effective and/or there’s something wrong in the approach we take for this conference, now at its 9th edition.

The topic tracks

I attended the track about FS in schools on Friday and the LibreItalia track on Saturday.

Activity in education is vibrant, and the track covered a number of success stories that can serve as inspiration for others to follow. We went from free text books (Matematica C3, Claudio Carboncini) to a free localized distribution with special support for disabilities (So.di.Linux, Francesco Fusillo, Lucia Ferlino, Giovanni Caruso). We also had presentations from university and high-school teachers who benefit from using FS in their activities.

LibreItalia is an Italian association rooted in the Document Foundation. The founding and most active members belong to both worlds; the track was led by Sonia Montegiove and Italo Vignoli. After the first presentation, about how LibreOffice is being successfully introduced in schools, we switched to English so the FSF could be part of the discussion, which turned to policy and marketing for FS solutions. A very interesting session, I dare say, leveraged by the number of relevant FS people in the room, who brought different points of view to the discussion. One of the most important things I learnt is that we really need a certification program for our technologies. The Document Foundation has one, and it’s a huge success. I’ve always been against certification as a perverse idea, and I only have my qualifications from public education; but I must admit the world works in exactly the opposite direction: we need to stamp our hackers with certification marks so that they are well accepted by companies and the public administration.

Saturday morning: keynotes

The plenary session was centered on the LibreDifesa project, a migration to LibreOffice of 150,000 computers within the Italian army. Sonia will update us about this migration at the FSFE summit this September in Berlin.

Gen. Sileo, with Sonia Montegiove, presented how the project was born and how it is being developed. They trained the trainers first, and this multi-layer structure works very well in accompanying people during the migration from different tools to LibreOffice. The teaching sessions for final users are 3 hours for Writer, 3 hours for Calc and 3 hours for Impress; they prove very useful because users learn how to properly set up a document, whereas most self-taught people ignore the basics of layout and structure, even if they feel proficient with the tool they routinely use. The project, overall, is saving 28 million in public money.

During the Q&A session, Sonia also discussed the failed migrations and back-migrations that hit the press recently. Most of them are simply failures (i.e., they back-migrate but do not achieve the expected results), driven by powerful marketing and lobbying forces that shout about leaving LibreOffice but hide the real story and the real figures.

Later Corrado Tiralongo presented his distribution aimed at professionals, which is meant to be pretty light and includes a few tools that are mandatory for some activities in Italy, like digital signatures and such local stuff.

Finally, Simone Aliprandi stressed the importance of standards: without open standards we can’t have free software, but the pressure towards freeing the standards is very low, and we see problems every day. Simone suggests we need to set up a Free Standards conference, as important as the Free Software conference.

Sunday Morning: FSF and FSFE

Sunday morning’s plenary was given by John Sullivan and me. John presented the FSF and outlined how they are more concerned with users’ freedom than with software. Somebody suggested they could change their name, but we all agree FUF won’t work.

John described the FSF certification mark “Respects Your Freedom” for devices that are designed to work with Free Software, and the “h-node” web site that lists devices that work well with Free Software drivers. He admitted the RYF logo is not the best, as the bell is not recognized by non-US people as a symbol of freedom.

I described what the FSFE is and how it is organized. I explained how we mainly work with decision makers and what we are doing with the European Parliament and Commission. I opened our finance page for full disclosure, and listed the names and roles of our employees. Later on, a few people reported they hadn’t known exactly what the FSFE does, and they were happy about our involvement at the upper layers of politics.

Finally, I made a suggestion for a complete change in how confSL is organized and promoted, mainly expanding on what we had been discussing on Saturday at dinner time. This part was set up as a “private” discussion, a sort of unrecorded brainstorming, but in the end it was only my brain-dump, as the Q&A part had to be cut because of the usual delays that had piled up.

confSL 2017

We hope to be able to set up the next confSL in Pavia or Milano, with a different design and a special focus on companies and teachers. Also, it will likely be moved earlier than June.

Tuesday, 28 June 2016

Fedora 24 released

egnun's blog » FreeSoftware | 04:26, Tuesday, 28 June 2016

As Fedora Magazine reports, the new version 24 of Fedora was released on June 21st. (They should have waited a few more days^^)

To me it seems like quite a normal distro upgrade: they updated a bunch of software.

Fedora 24 now has the GNOME 3.20 desktop,
Firefox 45
in version 38.7.1
and LibreOffice 5.1.

Notable changes for programmers are the updates of Python to version 3.5 and of the GNU C library to version 2.23. GCC is now provided in version 6, which means that the default mode for C++ is now -std=gnu++14 instead of -std=gnu++98.
Up to this point nothing really special. But wait! There is more!

Fedora has now added an Astronomy spin!
How cool is that?
Now you can look up at the sky and search for the stars on your computer!
And everything with 100%* Free Software!!

You don’t just get tools like “KStars”, which can accurately simulate the night sky “from any location on Earth, at any date and time”; you also get software like the “INDI Library”, which “is a cross-platform software designed for automation and control of astronomical instruments.”

As they say on their spins page: “Fedora Astronomy provides a complete set of software, from the observation planning to the final results.”
If you haven’t already downloaded the new Fedora then you should definitely go and check it out.
I still have a few computers left that need to be upgraded, so I am going to have some fun with the new Fedora in the next days. ;)

If you like Fedora and want to participate, you should check out these sites:


*Well, almost 100%, because Fedora still includes some tiny bits of proprietary firmware.
That’s one of the reasons why it isn’t officially endorsed by the Free Software Foundation.
But you can use Fedora nonetheless, because you probably won’t have any hardware that needs to rely on these blobs.

Monday, 27 June 2016

“Anatomy of a bug fix”

egnun's blog » FreeSoftware | 20:23, Monday, 27 June 2016
There was (or still is) a bug with drawing tablets and Krita under Windows.

This blog post shows how they tried to fix the bug and how they came to the conclusion that it was not a problem caused by Krita, but by a proprietary driver that has been around since Windows 3.0 (!).

Luckily there are Free Software projects and operating systems that let us fix bugs.


DORS/CLUC - an amazing conference

Matthias Kirschner's Web log - fsfe | 04:57, Monday, 27 June 2016

On 11 May I gave the keynote at DORS/CLUC 2016, titled "The long way to empower people to control technology". It was a very well organised conference, and I can only recommend that everybody go there in the coming years.

In case you are interested in the talk, feel free to watch the recordings (English, 37 minutes), and give me feedback afterwards.

Again thanks a lot to Svebor Prstačić for inviting me, and Lucijana Dujić Rastić, Krešimir Kroflin, Branko Zečević, Jasna Benčić, Nikola Henezzi, Valerija Olić, Vanja Stojanović, Goran Prodanović, Jure Rastić, Marko Kos, Aleksandar Dukovski, Milan Blažević, Ivan Guštin, Ivana Isadora Devcic, as well as the many other volunteers who made this conference possible, and who thereby enabled others to learn more about software freedom.

Tuesday, 21 June 2016

The Android Learning Curve

David Boddie - Updates (Full Articles) | 22:13, Tuesday, 21 June 2016

I'm learning Android development the hard way by writing my own simple compiler for a Python-like language. Doing this means learning about compilers, the Dalvik virtual machine, the Java language, and both the Android and Java APIs. While doing these things, I'd like to be able to create a few programs that some might call “Apps”.

Unsteady Progress

In terms of creating programs, things have been quite steady for a while. Of course, there have been complex things like matching the behaviour of the compiler's understanding of Java-like templates to what Dalvik expects. Other smaller obstacles still manage to surprise, such as the way I managed to break how the Object class's methods were called by assuming that the methods of any class without a base class are called using the invoke-interface instruction. The Object class isn't an interface, so that wasn't going to work. It's surprising that I only discovered that problem now — still, it was easy to fix.

The latest problem was caused by my attempt to reduce the number of objects to represent temporary variables. Ideally, all objects representing temporary variables occupying the same position on the stack with the same type could be reused. The problem with implementing a caching or factory system for these was that objects to represent variables are created when the variable needs to be allocated and sometimes the variable is allocated before we know its type; this is particularly a problem with empty list and dictionary literals. Ultimately, I decided to allow variable objects to be allocated when needed, but discard redundant ones when they are stored in instruction objects. I made two attempts to solve this problem; the first involved putting the cache at the point of allocation, the second put it at the point where instructions are created.
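The second approach, deduplicating at the point where instructions are created, might look roughly like this (a hypothetical Python sketch with invented names, not Serpentine's actual code): temporary-variable objects are allocated freely, and each one is canonicalised when it is stored in an instruction, so that equal (position, type) pairs share a single object.

```python
class TempVar:
    """A temporary variable identified by stack position and type."""

    def __init__(self, position, type_name):
        self.position = position
        self.type_name = type_name

    def key(self):
        return (self.position, self.type_name)


class Instruction:
    # Shared cache: (position, type) -> canonical TempVar object.
    _var_cache = {}

    def __init__(self, opcode, *operands):
        self.opcode = opcode
        # Discard redundant TempVar objects here, at instruction-creation
        # time, keeping exactly one object per (position, type) key.
        self.operands = [self._canonical(op) for op in operands]

    @classmethod
    def _canonical(cls, operand):
        if isinstance(operand, TempVar):
            return cls._var_cache.setdefault(operand.key(), operand)
        return operand
```

The attraction of this placement is that allocation code stays simple (it can create variables before their types are fully known), and the cache only ever sees variables whose position and type are settled.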

What About The Apps?

I find it difficult to get going with creating programs because I'm used to writing in Python, and Serpentine is not Python. I'm learning the semantics of my own language (or dialect) while learning the APIs it relies on. The APIs vary from reasonably simple to over-engineered (or over-abstracted) to over-complicated. There are development guides to read, but the API documentation itself is severely lacking in many places. In fact, when choosing some links to show some of the stranger recommendations in the API documentation, I found that the latest set of class pages seems to have dropped almost everything but the inline constant/field/method documentation!

Add to this the problem that even the offline documentation wants to call home, and you're looking at a slow browsing experience unless you block JavaScript for the sites it tries to reach. All for the sake of showing the “correct” fonts or having an API-version-sensitive class list. So, slow progress is made, where you feel that the simple things you manage to achieve are hard won.

One of the positive things is that programs do get made, run and tested. As with many of the embedded platform experiments I've done over the last few years, a lot of the programs I end up making involve graphics of some sort or other. Combined with the need to test things like loops and array handling, one of the obvious non-trivial test programs turned into a simple toy based on a game I wrote many years ago for the Acorn A3000. It's amusing to think that the 1+ GHz processor in my phone is now running a similar game to an 8 MHz ARM-based system from 1989. The original game wasn't all that much fun to play, and the fluid flow thing is a nice effect looking for some gameplay, but it might develop into something.

Other programs have been based around file formats for which I've already written Python modules: those for handling Janome Embroidery Format (JEF) and Unified Emulator Format (UEF) files. The code for both of these modules needed some work before the compiler would accept it, and I took the opportunity to read each of these formats a bit differently than I did in the corresponding Python modules, partly out of necessity. It certainly seems to be easier to do file format research with Python than it is in the Java universe.

Rounding Off

I almost accidentally gave this article the title “The Android Learning Curse”, which seems quite appropriate. It's something that it seems I'm stuck with for the time being. Still, when it seems there's so much still to do, it's important to look back and see how far you've come. I'll try to keep writing updates to show things that I've been working on.

EU consultation: Which Free Software program shall receive a European Union’s financed audit?

English Planet – Dreierlei | 16:00, Tuesday, 21 June 2016

<figure class="wp-caption alignright" id="attachment_1482" style="width: 300px">If you like to know your software, look into the code<figcaption class="wp-caption-text">If you like to know your software, look into the code</figcaption></figure>

tl;dr: The European Union is running a public survey about which Free Software program should receive a financed security audit. Take part!

In 2014, in reaction to the so-called “Heartbleed” bug in OpenSSL, the parliamentarians Max Andersson and Julia Reda initiated the pilot project “Governance and quality of software code – Auditing of free and open source software”, which is now managed and realised by the European Commission’s Directorate General of Informatics (DIGIT) as the “Free and Open Source Software Auditing” (EU-FOSSA) project. FOSSA aims to improve the security of those Free Software programs that are in use by the European Commission and the Parliament. To achieve this goal, the FOSSA project has three parts:

  • Comparative study of the European institutions’ and free and open source communities’ software development practices and a feasibility study on how to perform a code review of free and open source projects for European institutions.
  • Definition of a unified methodology to obtain complete inventory of free and open source software and technical specifications used within the European Parliament and the European Commission and the actual collection of data.
  • Sample code review of selected free and open source software and/or library, particularly targeting critical software, whose exploitation could lead to a severe disruption of public or European services and/or to unauthorized access.

In addition, FOSSA states that the “project will help improving the security of open source software in use in the European institutions. Equally important, the EU-FOSSA project is about contributing back to the open source communities.” Initially, one million dollars were assigned to FOSSA.

<figure class="wp-caption alignright" id="attachment_1483" style="width: 169px">Choose your favorite<figcaption class="wp-caption-text">Choose your favorite</figcaption></figure>After the first publication of a comparative study, comparing the development methods and security concerns of 14 open source communities with those of 14 software projects in the European Commission and European Parliament, it is now time for the first code review. On this occasion, the EU has started a public survey about which software should be the first to be audited by FOSSA. There is a choice among 18 given programs, but it is also possible to propose another one.

Such an audit can be expected to give the selected Free Software program important visibility among existing and new users. Additionally, a lot of work goes into such an audit; if it is done externally, existing developers can better spend their time improving and further developing the program itself. Finally, every active participant in the survey shows the Parliament the importance and public reception of FOSSA, and more participation might help in the final evaluation, so that this pilot project hopefully becomes institutionalised. Hence, please take part! (It takes just 1-4 minutes, and no account is needed.)

This is a translation of my article in German.

Monday, 20 June 2016

WebRTC and communications projects in GSoC 2016 - fsfe | 15:02, Monday, 20 June 2016

This year a significant number of students are working on RTC-related projects as part of Google Summer of Code, under the umbrella of the Debian Project. You may have already encountered some of them blogging on Planet or participating in mailing lists and IRC.

WebRTC plugins for popular CMS and web frameworks

There is already a range of pseudo-WebRTC plugins available for CMS and blogging platforms like WordPress. Unfortunately, many of them either don't release all their source code, lock users into their own servers, or require users to download potentially untrustworthy browser plugins (also without any source code) to use them.

Mesut is making plugins for genuinely free WebRTC with open standards like SIP. He has recently created the WPCall plugin for WordPress, based on the highly successful DruCall plugin for WebRTC in Drupal.

Keerthana has started creating a similar plugin for MediaWiki.

What is great about these plugins is that they don't require any browser plugins and they work with any server-side SIP infrastructure that you choose. Whether you are routing calls into a call center or simply using them on a personal blog, they are quick and convenient to install. Hopefully they will be made available as packages, like the DruCall packages for Debian and Ubuntu, enabling even faster installation with all dependencies.

Would you like to try running these plugins yourself and provide feedback to the students? Would you like to help deploy them for online communities using Drupal, WordPress or MediaWiki to power their web sites? Please come and discuss them with us in the Free-RTC mailing list.

You can read more about how to run your own SIP proxy for WebRTC in the RTC Quick Start Guide.

Finding all the phone numbers and ham radio callsigns in old emails

Do you have phone numbers and other contact details such as ham radio callsigns in old emails? Would you like a quick way to data-mine your inbox to find them and help migrate them to your address book?

Jaminy is working on Python scripts to do just that. Her project takes some inspiration from the Telify plugin for Firefox, which detects phone numbers in web pages and converts them to hyperlinks for click-to-dial. The popular libphonenumber from Google, used to format numbers on Android phones, is being used to help normalize any numbers found. If you would like to test the code against your own mailbox and address book, please make contact in the #debian-data channel on IRC.
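A much simpler stand-in for the idea, using only Python's stdlib mailbox module and deliberately naive regular expressions (the real project uses libphonenumber for robust number handling; the patterns and function names here are my own illustration, not Jaminy's code), might look like:

```python
import mailbox
import re

# Naive pattern for international-format numbers like +44 20 7946 0958.
# libphonenumber copes with far more formats than this illustrative regex.
PHONE_RE = re.compile(r"\+\d[\d\s().-]{6,}\d")

# Rough pattern for amateur radio callsigns such as M0GLT or VK3TQR:
# one or two letters, a digit, then up to three letters.
CALLSIGN_RE = re.compile(r"\b[A-Z]{1,2}\d[A-Z]{1,3}\b")

def mine_text(text):
    """Return candidate phone numbers and callsigns found in a message body."""
    phones = [m.group().strip() for m in PHONE_RE.finditer(text)]
    callsigns = CALLSIGN_RE.findall(text)
    return phones, callsigns

def mine_mailbox(path):
    """Scan an mbox file and aggregate all candidates found in plain bodies."""
    found_phones, found_calls = set(), set()
    for msg in mailbox.mbox(path):
        body = msg.get_payload(decode=False)
        if isinstance(body, str):      # skip multipart containers
            phones, calls = mine_text(body)
            found_phones.update(phones)
            found_calls.update(calls)
    return found_phones, found_calls
```

The regexes over-match and under-match by design; normalizing whatever they find against a proper library, as the project does with libphonenumber, is what makes the results usable in an address book.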

A truly peer-to-peer alternative to SIP, XMPP and WebRTC

The team at Savoir Faire Linux has been busy building the Ring softphone, a truly peer-to-peer solution based on the OpenDHT distributed hash table technology.

Several students (Simon, Olivier, Nicolas and Alok) are actively collaborating on this project, some of them have been fortunate enough to participate at SFL's offices in Montreal, Canada. These GSoC projects have also provided a great opportunity to raise Debian's profile in Montreal ahead of DebConf17 next year.

Linux Desktop Telepathy framework and reSIProcate

Another group of students, Mateus, Udit and Balram have been busy working on C++ projects involving the Telepathy framework and the reSIProcate SIP stack. Telepathy is the framework behind popular softphones such as GNOME Empathy that are installed by default on the GNU/Linux desktop.

I previously wrote about starting a new SIP-based connection manager for Telepathy based on reSIProcate. Using reSIProcate means more comprehensive support for all the features of SIP, better NAT traversal, IPv6 support, NAPTR support and TLS support. The combined impact of all these features is much greater connectivity and much greater convenience.

The students are extending that work, completing the buddy list functionality, improving error handling and looking at interaction with XMPP.

Streamlining provisioning of SIP accounts

Currently there is some manual effort for each user to take the SIP account settings from their Internet Telephony Service Provider (ITSP) and transpose these into the account settings required by their softphone.

Pranav has been working to close that gap, creating a JAR that can be embedded in Java softphones such as Jitsi, Lumicall and CSipSimple to automate as much of the provisioning process as possible. ITSPs are encouraged to test this client against their services and will be able to add details specific to their service through Github pull requests.

The project also hopes to provide streamlined provisioning mechanisms for privately operated SIP PBXes, such as the Asterisk and FreeSWITCH servers used in small businesses.
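The general provisioning flow described above can be sketched like this (hypothetical Python with an invented registry format; the actual project is a Java JAR and its real data format and field names may differ): the softphone takes the user's SIP address, looks up the domain in a registry of provider specifics, and fills in the settings the user would otherwise type in by hand.

```python
# Invented registry of ITSP-specific settings, keyed by SIP domain.
# Field names here are illustrative, not the project's actual schema.
ITSP_REGISTRY = {
    "example-itsp.net": {
        "proxy": "sip.example-itsp.net",
        "port": 5061,
        "transport": "TLS",
        "stun_server": "stun.example-itsp.net",
    },
}

def provision(sip_address):
    """Derive softphone account settings from an address like
    alice@example-itsp.net, using the registry for provider specifics."""
    user, _, domain = sip_address.partition("@")
    settings = ITSP_REGISTRY.get(domain)
    if settings is None:
        # Unknown provider: fall back to plain defaults for the domain.
        settings = {"proxy": domain, "port": 5060, "transport": "UDP"}
    return {"username": user, "domain": domain, **settings}
```

Keeping the registry as data rather than code is what lets providers contribute their own entries through pull requests, as the post describes.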

Improving SIP support in Apache Camel and the Jitsi softphone

Apache Camel's SIP component and the widely known Jitsi softphone both use the JAIN SIP library for Java.

Nik has been looking at issues faced by SIP users in both projects, adding support for the MESSAGE method in camel-sip and looking at why users sometimes see multiple password prompts for SIP accounts in Jitsi.

If you are trying either of these projects, you are very welcome to come and discuss them on the mailing lists, Camel users and Jitsi users.

GSoC students at DebConf16 and DebConf17 and other events

Many of us have been lucky to meet GSoC students attending DebConf, FOSDEM and other events in the past. From this year, Google expects the students to complete GSoC before they become eligible for any travel assistance. Some of the students will still be at DebConf16 next month, assisted by the regular travel budget and the diversity funding initiative. Nik and Mesut were already able to travel to Vienna for the recent MiniDebConf.

As mentioned earlier, several of the students and the mentors at Savoir Faire Linux are based in Montreal, Canada, the destination for DebConf17 next year and it is great to see the momentum already building for an event that promises to be very big.

Explore the world of Free Real-Time Communications (RTC)

If you are interested in knowing more about the Free RTC topic, you may find the following resources helpful:

RTC mentoring team 2016

We have been very fortunate to build a large team of mentors around the RTC-themed projects for 2016. Many of them are first time GSoC mentors and/or new to the Debian community. Some have successfully completed GSoC as students in the past. Each of them brings unique experience and leadership in their domain.

Helping GSoC projects in 2016 and beyond

Not everybody wants to commit to being a dedicated mentor for a GSoC student. In fact, there are many ways to help without being a mentor and many benefits of doing so.

Simply looking out for potential applicants for future rounds of GSoC and referring them to the debian-outreach mailing list or an existing mentor helps ensure we can identify talented students early and design projects around their capabilities and interests.

Testing the projects on an ad-hoc basis, greeting the students at DebConf and reading over the student wikis to find out where they are and introduce them to other developers in their area are all possible ways to help the projects succeed and foster long term engagement.

Google gives Debian a USD 500 grant for each student who completes a project successfully this year. If all 2016 students pass, that is over USD 10,000 to support Debian's mission.

Sunday, 19 June 2016

The Proprietarization of Android – Google Play Services and Apps

Free Software – | 17:11, Sunday, 19 June 2016

Android has a market share of roughly 70% among mobile devices, and it keeps growing. It is based on Linux and the Android Open Source Project (AOSP), so it is mostly Free Software. The problem is that every device you can buy is additionally stuffed full of proprietary software. These programs and apps are missing the essential four freedoms that all software should have. They serve interests other than yours and are typically designed to spy on your every tap.

For this reason, the Free Software Foundation Europe started the Free Your Android campaign in 2012, which shows people how they can install F-Droid to get thousands of apps that respect their freedom. It also shows how it is possible to install alternative versions of Android, such as Replicant, which is completely Free Software, or OmniROM, CopperheadOS and CyanogenMod, which are mostly Free Software. Many apps have been liberated, and the situation for running (more) free versions of Android has improved tremendously. You can even buy phones with Replicant pre-installed now.

However, there is an opposite trend as well: the growing proprietarization of Android. There have always been Google-specific proprietary apps that came pre-installed on phones, but many essential apps were free and part of AOSP. Over the past years, app after app has been replaced by a non-free counterpart. This Ars Technica article from 2013 has many examples, such as the calendar, camera and music apps. Even the keyboard has been abandoned, and useful features such as swipe-typing are only available if you run non-free software.

What Google did with the apps, it now does with basic functions and APIs of the Android operating system: they are being moved into the Google Play Services. These proprietary software libraries are pre-installed on all official Android devices, and developers are pushed to use them in their apps. So even if an app is otherwise Free Software, once it includes the Google Play Services it is non-free and will no longer run on Android devices that do not have the Play Services installed.

One prominent example is the crypto-messenger Signal. It uses the Play Services to receive push notifications from Firebase Cloud Messaging (formerly Google Cloud Messaging) in order to minimize battery usage, but it does not work if the Play Services are missing. That is more common than you might think, and it affects not only people running Replicant, but also people who would like to use Signal on their Blackberry, Jolla or Amazon device. As with the proprietary Google apps, you need a license from Google if you want to install the Play Services. To get this license, you need to conform to Google's guidelines, which many manufacturers do not want to do or cannot do.
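The hard dependency described above is a choice, not a necessity: an app can probe at runtime whether the proprietary Play Services classes are present and degrade gracefully when they are not. Below is a minimal plain-Java sketch of that detection pattern; the class name is the real Play Services entry point, but the fallback behaviour and messages are hypothetical, and a real Android app would use the SDK's own availability check rather than reflection.

```java
// Sketch: detect at runtime whether the (proprietary) Google Play
// Services classes are on the classpath, instead of hard-depending
// on them and crashing on devices without them (e.g. Replicant).
public class PlayServicesCheck {
    static boolean playServicesAvailable() {
        try {
            // Real entry-point class of the Play Services client library.
            Class.forName("com.google.android.gms.common.GoogleApiAvailability");
            return true;
        } catch (ClassNotFoundException e) {
            // Library absent: take the free fallback path.
            return false;
        }
    }

    public static void main(String[] args) {
        if (playServicesAvailable()) {
            System.out.println("push: Firebase Cloud Messaging");
        } else {
            // Hypothetical fallback, e.g. a persistent WebSocket connection.
            System.out.println("push: free fallback");
        }
    }
}
```

On a plain JVM, or any device without the Play Services library, the fallback branch is taken; only on an official Android device with the Google libraries installed would the first branch run.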

So the proprietarization of Android effectively cripples other versions of Android and makes them increasingly useless. It strengthens Google’s control over Android and makes life harder for competitors and the Free Software community.

But more control, more users for its own products and thus more revenue are not the only reasons for what Google is doing. Almost all Android devices are sold not by Google but by OEMs, and these are notoriously bad at updating Android on the devices they have already sold, leaving gaping security holes open and endangering their customers. Still, Google gets the bad press and an increasingly bad reputation for it. Since it cannot force the OEMs to provide updates, it moves more and more parts of Android into external libraries that it can silently upgrade itself on people's devices. Android is becoming a hollow shell for a Google OS.

Still, millions of people have Android devices already, and many more will have them in the future. So I think it is time well spent to increase those people's freedom. Even though Android is more and more crippled, it is still a solid base the community can build upon, without having to create its own mobile operating system from scratch and without the resources of a large multinational company.

Free Your Android

